
"

÷
l

.
ÎIÏË ÏÎÏËÈË ÏÏËËÏËIÏËÏÎË
":!¥ : Ï ÷: ¥:
: :*

Understand in g
in a
¥
Mrad

*ÏË:¥ :¥ ï ;÷Ï Ë÷¥÷÷¥÷:±÷;÷÷ :


::$:*:$*

visual
:*

Kubernetes

way
⇐*☒

Learn & dis cover Kubernetes in sketch notes


- with some ttiippss included -

Aurélie Vache

ÏÊÏIÏ:¥ÏÏ:ËË÷:;Ï:ËÏÎËÏËÏËÏ ËÏÎI÷Ï ÷±ÏË÷ ÷ÏË☒ ÏË


"
BREAM
"
;:¥Ï±:¥ ÷;!ÏË÷¥ ÷ :÷ :÷
:*
::* ::**
:*

Special thanks to Laurent, my husband, Alexandre, my son, and all
the caring people who believe in me everyday.
Thank you so much!

Reviewers

Thanks to Gaëlle Acas, Denis Germain & Stéphane Philippart, who took the time to review this book.

Changelog

Release Date: 31/05/2020
Updated Date: 19/12/2022
Release Version: 1.26 release
Length: 272 pages
Kubernetes: 1.26

Licence

Creative Commons BY-NC-ND
http://creativecommons.org/licenses/by-nc-nd/3.0/
Table of Contents
Kubernetes Components
Kubeconfig file
Namespace
Resource Quota
Pod
  Lifecycle
  Deletion
  Liveness & Readiness probes
  Startup probe
  Container lifecycle events
  Execute commands in containers
Init containers
Job
CronJob
ConfigMap
Secret
Deployment
Rolling Update
Pull images configuration
ReplicaSet
DaemonSet
Service
Label & Selector
Ingress
PV, PVC & StorageClass
Horizontal Pod Autoscaler (HPA)
Limit Range
Resource requests & limits
Pod Disruption Budget (PDB)
Quality of Service (QoS)
Network Policies
RBAC
Pod Security Policy (PSP)
Pod (Anti) Affinity & Node (Anti) Affinity
Node
  Lifecycle
  Node Operations
Debugging / Troubleshooting
Kubectl convert
Tools
  Kubectx
  Kubens
  Stern
  Krew
  K9s
  Lens
  Skaffold
  Kustomize
  Tips
  Kubeseal
  Trivy
  Popeye
  Kyverno
Changes
  Kubernetes 1.18
  Kubernetes 1.19
  Kubernetes 1.20
  Kubernetes 1.21
  Kubernetes 1.22
  Kubernetes 1.23
  Kubernetes 1.24
  Kubernetes 1.25
  Kubernetes 1.26
  Docker deprecation
Glossary
Kubernetes Components

(Sketch: the control plane components - API Server, Scheduler, etcd - and the Nodes of a cluster)
etcd

→ Distributed key-value database
→ Stores & replicates the cluster state
→ Distributed consensus based on the quorum model

etcd is a single master point of failure.

API-Server

→ Exposes the Kubernetes API
→ Kubernetes Control Plane frontend

"Everyone is talking to me, it's cool!
I'm designed to scale horizontally, so you can balance traffic between instances."
Scheduler

→ Responsible for finding the best Node for newly created Pods
→ Existing Nodes need to be filtered according to the specific scheduling requirements

"I want a suitable Node for this new Pod."
"No problem, my mission is to find a suitable Node for it."
Controller Manager

→ Responsible for making changes in order to move the current state to the desired state
→ Runs several separate controller processes:

Node controller
→ Watches Nodes & takes action when they are down

Replication controller
→ Ensures the desired number of Pods are always up and available

Endpoints controller
→ Populates the Endpoints objects (responsible for Services & Pods connections)

Service Account & Token controllers
→ Create default accounts and API access tokens for new namespaces
Kubelet

→ Responsible for the running state on each Node
→ Checks that all containers on each Node are healthy
→ Can create and delete Pods
→ One instance of Kubelet per Node

"Oh, oh! A Pod failed! I need a new one!"

Kube-proxy

→ Runs on each Node
→ Allows network communication to Pods


Kubeconfig file

"I want the Pods in the my-ns namespace."
$ kubectl get pods -n my-ns
"Ok, but in which cluster?"

→ Kubectl knows where (in which cluster) to execute the command thanks to the kubeconfig file
→ Kubeconfig files are structured as YAML files
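For illustration, a minimal kubeconfig file looks roughly like this (cluster name, user and server URL are placeholders):

apiVersion: v1
kind: Config
clusters:
- name: my-cluster                  # placeholder cluster name
  cluster:
    server: https://1.2.3.4:6443    # placeholder API server URL
    certificate-authority-data: <base64-encoded-CA>
users:
- name: my-user                     # placeholder user entry
  user:
    token: <bearer-token>
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
    namespace: my-ns
current-context: my-context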
How are kubeconfig files loaded?

① --kubeconfig flag, if specified
② KUBECONFIG environment variable, if set
③ $HOME/.kube/config, by default

You can't just append several YAML kubeconfig files in one kubeconfig file.

Tips:
It's possible to specify several files in the KUBECONFIG environment variable:

$ export KUBECONFIG=file1:file2:file3
Kubectl version

"Which version of kubectl should I install & use?"

Kubectl is supported within one minor version of the Kubernetes API Server (older and newer).

Example:
API Server: 1.23
Kubectl: 1.22, 1.23 or 1.24
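To check the versions in use (standard kubectl commands, shown here as a quick illustration):

$ kubectl version --client    # version of the kubectl binary only
$ kubectl version             # client & server versions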

Kubectl Tips

A very logical CLI!

"I want to scale my-deploy to 5 replicas."
$ kubectl scale deploy my-deploy --replicas=5

"I want to execute the command ls in my-pod."
$ kubectl exec my-pod -it -- ls

"I want to list the resources that are not in a namespace."
$ kubectl api-resources --namespaced=false

"I want to delete all Pods that have a status != "Running"."
$ kubectl delete po --field-selector=status.phase!='Running'
"I want the Pods ordered by status."
$ kubectl get pod --sort-by=.status.phase

"I want the list of Namespaces, only their name."
$ kubectl get ns -o custom-columns="NAME":".metadata.name" --no-headers

"I want to remove the version selector in the my-service Service."
$ kubectl patch service my-service --type json \
  -p '[{"op": "remove", "path": "/spec/selector/version"}]'
-w / --watch
This option listens for changes to a particular object.

Example:
$ kubectl get pod -w

-o wide
This kubectl option includes additional information.

Example:
$ kubectl get pod my-pod -o wide

-o name
This kubectl option displays only the type & the name of the resource.

Example:
$ kubectl get secret -l sealedsecrets.bitnami.com/sealed-secrets-key -o name
--cascade
This option allows you to delete only the resource (not its child objects).

Example:
$ kubectl delete job my-job --cascade=false -n my-namespace
Namespace

→ A way of isolation:
  - per project
  - per team
  - per family of components

  (projectA, projectB, projectC, kube-system ...)

→ Resource names must be unique inside a namespace, but can be identical across namespaces:

  my-ns1: svc/my-svc, pod/my-pod, deploy/my-deploy
  my-ns2: svc/my-svc, pod/my-pod2, deploy/my-deploy
→ Each resource can appear in only one namespace

→ If a namespace is deleted, all the resources inside are deleted too

→ Not all resources are namespaced: Node, PV, PSP ...

→ A Pod in namespace A can't read a Secret from namespace B
Special namespaces:

default → the namespace by default

kube-system → for objects created by Kubernetes

kube-public → reserved mainly for cluster use & in case specific resources should be publicly available

kube-node-lease → Node heartbeat Lease objects

Heartbeats help determine the availability of a Node.
2 forms of heartbeats:
  ◦ updates to the .status of a Node
  ◦ Lease objects
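For illustration (a standard command, not shown in the sketch), you can list the Lease objects backing the Node heartbeats with:

$ kubectl get leases -n kube-node-lease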
How To?

} Switch to the my-ns namespace
$ kubectl config set-context --current --namespace=my-ns

or install and use the kubens tool (p. 211):
$ kubens my-ns

} View the current namespace
$ kubectl config view | grep namespace

} List all namespaces
$ kubectl get namespaces

} List all Pods in all namespaces
$ kubectl get pods --all-namespaces
or
$ kubectl get pods -A

} List all resource types that are "namespaced"
$ kubectl api-resources --namespaced=true
"

}
"

Delete a
namespace
stucked in Termina ting state

Ëa please help me µ
'

ter-inat.no#y.stucked-nsKubernetesversion(y
bb
-
-
Works eren in Old

$ kubectl edit ns my-stucked-ns

and remove entries in


finaliser section

23
Resource Quota

→ Limits the usage and number of resources in a namespace

If a quota is set for CPU & memory, you need to specify requests & limits for Pods.
How To?

} Create a Resource Quota
$ kubectl create quota my-quota --hard=cpu=1,pods=2

} Describe a Resource Quota & display what is consumed and the limit
$ kubectl describe quota my-quota -n my-namespace
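As a minimal declarative sketch (name and values are illustrative), the same quota can be written as:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-quota
  namespace: my-namespace
spec:
  hard:
    cpu: "1"     # total CPU requested by all Pods in the namespace
    pods: "2"    # maximum number of Pods in the namespace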
Pod

→ Smallest deployable unit in a cluster

→ Can contain several containers

→ One IP address per Pod
How To?

} Create a Pod with the busybox image in my-namespace
$ kubectl run busybox --image=busybox --restart=Never -n my-namespace

} Create a Pod with the busybox image & run the command 'env'
$ kubectl run busybox --image=busybox --restart=Never -n my-namespace -it -- /bin/sh -c 'env'

} Copy a file stored locally to a Pod
$ kubectl cp myfile.txt my-pod:/path/myfile.txt

} List all Pods & in which Nodes they are running
$ kubectl get pod -o wide --all-namespaces
Pod Lifecycle
- Pending
Container images have not yet been created.

- Running
Bound to a Node. All containers are created. At least one is running.

- Succeeded
All containers are terminated with success. They will not be restarted.

- Failed
At least one container is in failure (exited with a non-zero code or terminated by the system).

- Unknown
The status of the Pod could not be obtained.
Why? A temporary problem of communication with the host of the Pod.
Pod Deletion

→ A Pod can be deleted in order to ask Kubernetes to start another one (with a new configuration or code, for example)

→ Sometimes Pods are stuck in Terminating / Unknown state

→ The deletion can be manually forced

Manual forced deletion must be used carefully!
How To?

} Delete a Pod gracefully (kubectl waits for confirmation from the kubelet that the Pod is removed)
$ kubectl delete pod my-pod

Kubelet: "I removed it & cleared the Pod name."
} Delete a Pod instantly (manually forced)
$ kubectl delete pod my-pod --grace-period=0 --force

When you force a Pod deletion, the name is automatically freed from the API Server.

} Delete a Pod stuck in Unknown state
$ kubectl patch pod my-pod -p '{"metadata": {"finalizers": null}}'
Liveness & Readiness probes

Liveness & Readiness probes should be configured.

Kubelet: "Are you alive?"
Container: livenessProbe: OK
livenessProbe: NOK → the container is restarted

→ Kubelet uses liveness probes to know when to restart a container

→ periodSeconds: asks Kubelet to perform a liveness probe every XX seconds

→ initialDelaySeconds: asks Kubelet to wait XX seconds before performing the first probe

Different types of liveness probes exist:

httpGet

Define a liveness probe that sends an HTTP request to the container's server, on port 8080, on /healthz.
The probe succeeds if the HTTP code is >= 200 and < 400.

apiVersion: v1
kind: Pod

spec:
  containers:
  - name: my-container
    image: my-image:1.0
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 60
      periodSeconds: 20
exec

Define a liveness probe that executes the command cat /tmp/healthy in the container.
The probe succeeds if the return code == 0.

apiVersion: v1
kind: Pod

spec:
  containers:
  - name: my-container
    image: my-image:1.0
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
tcpSocket

Define a liveness probe which connects to port 8080.

apiVersion: v1
kind: Pod

spec:
  containers:
  - name: my-container
    image: my-image:1.0
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 60
      periodSeconds: 30

Kubelet: "Are you ready?"
Container: readinessProbe: OK → the Pod stays linked to the Service
Kubelet: readinessProbe: NOK

→ Kubelet uses readiness probes to know when a container is ready to start accepting traffic.
  If not, Kubelet can remove the link between the Service and the Pod.

→ The readiness probe configuration is quite similar to the liveness probe.
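For illustration (the endpoint and timings are placeholders), a readiness probe uses the same fields, under readinessProbe:

apiVersion: v1
kind: Pod

spec:
  containers:
  - name: my-container
    image: my-image:1.0
    readinessProbe:
      httpGet:
        path: /ready          # illustrative endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5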
Startup probe

→ Holds off all the other probes until the Pod finishes its startup

→ Gives time to a container to finish its startup

Kubelet: startupProbe: OK → the liveness & readiness probes can start

It's a solution for slow starting Pods.
} Don't test for liveness until the HTTP endpoint is available

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
  - name: my-container
    image: my-image:1.0
    ports:
    - name: liveness-port
      containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: liveness-port
      failureThreshold: 1
      periodSeconds: 10
    startupProbe:
      httpGet:
        path: /healthz
        port: liveness-port
      failureThreshold: 30    # the app has 5 min (30 x 10s)
      periodSeconds: 10       # to finish its startup
Container lifecycle events

→ Sometimes you need to tell Kubernetes that your Pod should only start when a condition is satisfied

→ Sometimes you want to execute commands before Kubernetes terminates a Pod
How To?

} Create a Deployment with 3 replicas. Its Pods will start only when istio-proxy (envoy) is ready.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-image:latest
        lifecycle:
          postStart:
            httpGet:
              path: /healthz/ready
              port: 15020
The app container will be restarted as long as the postStart hook fails.

} Create a Deployment with a container that executes a command at the start of the Pod

spec:
  containers:
  - name: my-app
    image: my-image:latest
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello Kubernetes lovers"]
"Wait, you need to say goodbye!"

} Create a Deployment that kills my-app gracefully before the Pod terminates

spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "until my-app -kill; do sleep 1; done"]
Execute commands in containers

→ The kubectl exec command allows you to run commands inside a container of a Pod

→ Useful in order to debug in your container

$ kubectl exec my-pod -c my-container -it -- sh
> ls -l
> cat path/to/a/file

} Connect to a specific container & open an interactive shell
$ kubectl exec my-pod -c my-container -it -- sh

-it / --stdin --tty : interactive shell

> ls -l
> tail -f /var/log/debug.log
Tips: you can specify a default container thanks to an annotation:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  annotations:
    kubectl.kubernetes.io/default-container: my-container-2
spec:
  containers:
  - name: my-container
    image: my-image
  - name: my-container-2
    image: my-image-2
    command: ["/bin/sh", "-c"]
    args:
    - while true; do
      date >> /html/index.html;
      sleep 1;
      done

} Connect to the my-container-2 container
  (no need to specify the -c option thanks to the annotation)

$ kubectl exec my-pod -it -- sh
Init containers

→ Init containers run before app containers in a Pod

→ Allow you to prepare the main container(s) & separate your app from the initialization logic

→ Containers & init containers share the same volumes

A Pod can have one or more containers & one or more init containers.
Kubelet runs each init container sequentially & then the usual containers:

① Init container 1 ✓
② Init container 2 ✓
③ Containers

Each init container needs to start successfully before executing the next one.

If an init container fails, Kubelet restarts it until it succeeds.
→ Useful for:

} Init DB schemas
} Set up permissions
} Clone a Git repository to a shared volume
} Send data to the main app
} Check the accessibility of a Service
...
How To?

} Create a Pod with a my-app container & an init container that checks a Service is accessible

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-container
    image: my-app:1.0
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: check-service
    image: busybox
    command: ['sh', '-c', "until nslookup my-svc.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for my-svc; sleep 5; done"]
} Create a Pod with:
  ◦ an init container that executes a git clone of https://github.com/scraly/my-website.git to the /usr/share/nginx/html folder
  ◦ an nginx container
  ◦ a shared volume

apiVersion: v1
kind: Pod
metadata:
  name: my-website
spec:
  initContainers:
  - name: clone-repo
    image: alpine/git
    command:
    - git
    - clone
    - --progress
    - https://github.com/scraly/my-website.git
    - /usr/share/nginx/html
    volumeMounts:
    - name: website-content
      mountPath: "/usr/share/nginx/html"
  containers:
  - name: nginx
    image: nginx
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: website-content
      mountPath: "/usr/share/nginx/html"
  volumes:
  - name: website-content
    emptyDir: {}
Even if hosting/running a website/blog in Kubernetes can be overkill, if you want to, as you can see, a solution can be simple.

Thanks to that, you no longer have to put your website content inside a Docker image.

A Pod is mortal, so every time a new one is starting, it will have fresh content of your Git repository, retrieved by the init container.

(Sketch: developers commit, the init container clones the repository, and the nginx container pulled as a Docker image serves it.)
Job

} A process that runs for a certain time to completion: batch process, backup, database migration/cleanup ...

} A Job runs one or more Pods and ensures that a specified number of them successfully terminate.
} When the Pod launched by a Job is finished, its status will be Completed.

} If the Pod failed, another Pod will run until the number of completions is done.

} A Job can be launched by a CronJob.

A Job can be run one time, sequentially or in parallel.

CronJobs are based on the Cron format:

$ kubectl get cronjob
NAME    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
my-cj   0 5 */1 * *   False     0        3h53m           10s
How To?

} Create a Job from the busybox image (imperative way)
$ kubectl create job my-job --image=busybox

} Create a Job from the busybox image (declarative way)

apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  backoffLimit: 5              # optional
  activeDeadlineSeconds: 120   # optional
  completions: 1               # optional
  parallelism: 1               # optional
  template:
    spec:
      containers:
      - name: busybox
        image: busybox
      restartPolicy: OnFailure   # required

} Create a Job from a CronJob
$ kubectl create job my-job --from=cronjob/my-cronjob
} Create a Job that will be deleted automatically after the defined time

apiVersion: batch/v1
kind: Job
metadata:
  name: my-job-with-ttl
spec:
  ttlSecondsAfterFinished: 100   # TTL = Time To Live
  template:
    metadata:
      name: my-job-with-ttl
    spec:
      containers:
      - name: busybox
        image: busybox

} Delete a Job (and all its child Pods)
$ kubectl delete job my-job

} Delete a Job only
$ kubectl delete job my-job --cascade=false
restartPolicy

- Never
If the Pod failed, the Job controller starts a new Pod.

- OnFailure
If the Pod failed (for any reason), the Pod stays on the Node but its containers will re-run.

restartPolicy is applied to the Pod, not to the Job.

Whatever the restartPolicy, when a Pod terminates with success, the Job will never re-run / restart that Pod.

backoffLimit
Number of retries before considering a Job as failed.
By default backoffLimit is equal to 6.

activeDeadlineSeconds
Once a Job reaches activeDeadlineSeconds, all of its running Pods will be killed:
status: DeadlineExceeded
A Job will not deploy additional Pods when activeDeadlineSeconds is reached, even if the backoffLimit is not yet reached.

completions
By default completions is equal to 1, which means the normal behavior for a Job is to run Pods until one Pod terminates successfully.
If we set completions equal to 5 for example, the Job controller will spawn Pods until the number of completed Pods reaches this value.

parallelism
parallelism allows you to run multiple Pods at the same time.
By default parallelism is set to 1.

If completions is not defined, it is equal to the parallelism number.
CronJob

→ Creates Jobs on a time-based schedule, or periodically.

"It's time to wake up!"
Based on the Cron format:

$ kubectl get cronjob
NAME    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
my-cj   0 5 */1 * *   False     0        3h53m           10s

How To?

} Create a CronJob from the busybox image (declarative way)

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  failedJobsHistoryLimit: 3        # number of failed Jobs to keep
  successfulJobsHistoryLimit: 1    # number of successful Jobs to keep
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello readers
          restartPolicy: OnFailure
If startingDeadlineSeconds is set to less than 10 seconds, the CronJob may not be scheduled.

} Create a CronJob from the busybox image & run it every 5 minutes
$ kubectl create cronjob my-cj --image=busybox --schedule="*/5 * * * *" -n my-namespace

} Change the CronJob scheduling
$ kubectl patch cronjob my-cronjob -p '{"spec":{"schedule": "0 */1 */1 * *"}}'

} Launch a Job from an existing CronJob
$ kubectl create job --from=cronjob/my-cronjob my-job -n my-namespace
ConfigMap

Don't hardcode configuration data in your app, put non-sensitive data in a ConfigMap instead!
(Twelve-Factor App compliant)

→ Aim: separate configuration from Pods

→ Allows you to deploy the same app in different environments

  (my-cm-staging in staging, my-cm-prod in production)
→ 3 ways to create a ConfigMap:
  - from key and value
  - from an env file
  - from a file
How To?

} Create a ConfigMap from key and value
$ kubectl create cm my-cm-1 --from-literal=my-key=my-value -n my-namespace

} Create a ConfigMap from a file
$ kubectl create cm my-cm-1 --from-file=my-file.txt -n my-namespace

} Create a ConfigMap from an env file
$ kubectl create cm my-cm-1 --from-env-file=my-envfile.txt -n my-namespace

} Create a Pod with a ConfigMap mounted as a volume

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: busybox
    volumeMounts:
    - name: my-cm-volume
      mountPath: /etc/myfile.txt
  volumes:
  - name: my-cm-volume
    configMap:
      name: config
} Create a Pod and load the value from the variable my-key into an env variable MY-ENV-KEY

apiVersion: v1
kind: Pod
metadata:
  name: my-pod-2
spec:
  containers:
  - name: my-container
    image: busybox
    env:
    - name: MY-ENV-KEY
      valueFrom:
        configMapKeyRef:
          name: my-cm
          key: my-key

} Create a Pod & fill env vars from a ConfigMap

apiVersion: v1
kind: Pod
metadata:
  name: my-pod-3
spec:
  containers:
  - name: my-container
    image: busybox
    envFrom:
    - configMapRef:
        name: my-cm
Secret

Save sensitive data in Secrets.

→ Base64 encoded, "encrypted at rest"

  (encoded when created, automatically decoded when attached to a Pod)
→ 3 ways to create a Secret:
  - from key and value
  - from an env file
  - from a file

→ Several types of Secrets: generic, docker-registry, tls ...
How To?

} Create a Secret from key and value
$ kubectl create secret generic my-secret --from-literal=password='my-awesome-password'

} Create a Secret from a file, in my-namespace
$ kubectl create secret generic my-secret --from-file=password.txt -n my-namespace

} Retrieve a secret, open it, extract the desired data & decode it
$ kubectl get secret my-ca-secret -o=go-template='{{index .data "ca.crt" | base64decode}}'
Deployment

→ A Deployment handles Pod creation

→ Deployments are responsible for Pods & must ensure they are running

→ A Deployment creates a ReplicaSet that creates Pods according to the number of specified replicas

  Deployment → ReplicaSet → Pods

→ Features:
  - Rolling update / rollout
  - Deployment history
How To?

} Create a Deployment that creates one Pod with the nginx image (imperative way)
$ kubectl create deploy my-deploy --image=nginx

} Create a Deployment that creates 3 replicas of a Pod (declarative way)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-deploy
  template:
    metadata:
      labels:
        app: my-deploy
    spec:
      containers:
      - name: nginx
        image: nginx

} Scale a Deployment to 5 replicas
$ kubectl scale deploy my-deploy --replicas=5
Rolling Update

→ Rolling Update allows zero downtime during a Deployment update

→ Incrementally updates instances by creating new ones

→ Updates are versioned & any update can be reverted to a previous version
① Create: image: busybox
② Update: image: busybox:1.29.3
③ A new Pod with the new image is created
④ The old Pod is removed when the new one is started
⑤ Same thing for the 2nd replica/Pod
⑥ Rollback to the 2nd revision is possible
How To?

① Create a Deployment in a file named my-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
  labels:
    app: busybox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
  strategy:
    rollingUpdate:
      maxSurge: 100%         # 100% of the Pods will be created
      maxUnavailable: 0%     # & then the old Pods will be deleted
    type: RollingUpdate

② Apply the manifest YAML file
$ kubectl apply -f my-deploy.yaml
③ Update the deployment container's image name/tag
$ kubectl set image deploy my-deploy busybox=busybox:1.29.3 --record

④ Show the deployment rolling update logs
$ kubectl rollout status deploy my-deploy

⑤ Show the history of deployment creations/updates
$ kubectl rollout history deploy my-deploy

⑥ Show the history of deployment creations/updates of the 2nd revision
$ kubectl rollout history deploy my-deploy --revision=2

⑦ Rollback to the previous deployment
$ kubectl rollout undo deploy my-deploy
Tips

} Restart a deployment / kill the existing Pods and start new Pods
$ kubectl rollout restart deploy my-deploy

} Pause and then resume the deployment of the resource
$ kubectl rollout pause deploy my-deploy
$ kubectl rollout resume deploy my-deploy
Pull images configuration

→ In order to pull container images, Kubernetes needs several configurations

→ These configurations can be set on:
  ☐ Pod          ☐ ReplicaSet
  ☐ Deployment   ☐ DaemonSet
  ☐ CronJob      ☐ StatefulSet
  ☐ Job
  ... every resource type including a container spec
imagePullPolicy

"Thanks to this configuration, I know when I need to pull the container image."

containers:
- name: my-container
  image: my-registry.com/my-app:tag
  imagePullPolicy: Always

IfNotPresent (default value):
Kubelet doesn't pull the image from the registry if the image already exists on the Node.

Always:
The image is pulled every time the Pod is started.

Never:
The image is assumed to exist locally. No attempt is made to pull the image.
imagePullSecrets

"Thanks to this configuration, I know I need to use a secret to access the image registry."

How To?

spec:
  containers:
  - name: my-container
    image: my-registry.com/my-app:tag
    imagePullPolicy: Always
  imagePullSecrets:
  - name: registry-secret

} Kubernetes uses the docker-registry secret type for image registry authentication

$ kubectl create secret docker-registry registry-secret
  --docker-server=<host>:8500
  --docker-username=<user_name>
  --docker-password=<user_password>
  --docker-email=<user_email>

} You can also create a secret from an existing .docker/config.json file:

$ kubectl create secret generic registry-secret
  --from-file=.dockerconfigjson=<path/to/.docker/config.json>
  --type=kubernetes.io/dockerconfigjson
ReplicaSet

→ A ReplicaSet creates Pods according to the number of specified replicas

  Deployment → ReplicaSet → Pods

→ A ReplicaSet can be created manually or automatically by a Deployment

It's useful to understand them, but it's not mandatory to manage them manually.
DaemonSet

→ Ensures that all eligible Nodes run a copy of a Pod defined by a DaemonSet

→ The Kubernetes scheduler doesn't manage these Pods; the DaemonSet controller handles them instead.

→ If a new Node is added to the cluster, the DaemonSet controller will create a Pod on it.
Update strategies:

- RollingUpdate (the update strategy by default): old Pods will be killed and new ones will be created automatically, one Pod per Node.

- OnDelete: existing Pods need to be manually deleted in order for the new ones to be created.
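As an illustrative fragment (the values shown are the defaults), the strategy is set under spec.updateStrategy:

spec:
  updateStrategy:
    type: RollingUpdate      # or OnDelete
    rollingUpdate:
      maxUnavailable: 1      # one Node's Pod is replaced at a time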
How To?

① Create a DaemonSet that creates one busybox Pod per Node

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      name: my-app
  template:
    metadata:
      labels:
        name: my-app
    spec:
      tolerations:                              # this DaemonSet will run
      - key: node-role.kubernetes.io/master     # on master Nodes too
        effect: NoSchedule
      containers:
      - name: busybox
        image: busybox
② Create a DaemonSet that creates Pods only on desired Nodes

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      name: my-app
  template:
    metadata:
      labels:
        name: my-app
    spec:
      nodeSelector:
        my-key: my-value
      containers:
      - name: busybox
        image: busybox
Troubleshooting

The DaemonSet wants to create Pods on each Node but one of them doesn't have enough capacity: the Pod is stuck and can't start.

Quick solution:

→ List all the Nodes in the cluster
$ kubectl get nodes

→ Find the Nodes that have your Pods
$ kubectl get pods -l name=my-app -o wide -n my-ns

→ Compare the lists in order to find the Node in trouble.

→ Delete manually, on this Node, all the Pods not controlled by the DaemonSet.
Service

(Pods are mortal!)

→ Allows you to reach an application with a single IP address

→ Assigns to a group of Pods a unique DNS name:

  my-awesome-app.my-ns.svc.cluster.local
  → my-awesome-app-123, my-awesome-app-456, my-awesome-app-789
→ The set of Pods targeted by a Service is usually determined by a selector:

  Selector: app: my-app
  my-awesome-app.my-ns.svc.cluster.local
  → Pods carrying the label app: my-app (my-awesome-app-123, -456, -789)
→ Several kinds of Services:
  - ClusterIP
  - NodePort
  - LoadBalancer
  - ExternalName
ClusterIP

→ Exposes the Service on a cluster-internal IP

$ curl <spec.clusterIP>:<spec.ports[0].port>
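As a minimal declarative sketch (names and ports are illustrative), a ClusterIP Service selecting Pods labelled app: my-app:

apiVersion: v1
kind: Service
metadata:
  name: my-svc
spec:
  type: ClusterIP          # the default Service type
  selector:
    app: my-app            # targets Pods carrying this label
  ports:
  - port: 80               # the Service port (spec.ports[0].port)
    targetPort: 8080       # the container port receiving the traffic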
NodePort

→ Exposes the Service on each Node's IP

→ Reachable outside of the cluster by requesting <NodeIP>:<spec.ports[0].nodePort>

  (Node 1, Node 2, Node 3)
LoadBalancer

→ Creates an external load balancer

→ Assigns a fixed external IP address

Only for managed clusters & cloud providers.
ExternalName

→ Provides an internal alias for an external DNS name

→ Requests for the name will be directed to the external name

  my-service.my-namespace.svc.cluster.local → scraly.io
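A minimal sketch of such a Service (the external domain is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
spec:
  type: ExternalName
  externalName: scraly.io    # requests to my-service resolve to this DNS name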
Headless

"I'm a real type in some cases!"

→ Useful in order to interface with other service discovery mechanisms

→ A cluster IP is not allocated

→ Kube-proxy doesn't handle the Service

→ Exposes all the replicas as DNS entries

spec.clusterIP should be equal to None
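For illustration (name, label and port are placeholders), a headless Service is simply declared with clusterIP: None:

apiVersion: v1
kind: Service
metadata:
  name: my-headless-svc
spec:
  clusterIP: None          # headless: no cluster IP is allocated
  selector:
    app: my-app
  ports:
  - port: 80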
How To?

} Create a Service that exposes a Deployment on port 80
$ kubectl expose deploy my-deploy --type=LoadBalancer --name=my-svc --target-port=8080 -n my-namespace

} Create a Pod and expose it through a Service
$ kubectl run my-pod --image=nginx --restart=Never --port=80 --expose -n my-namespace

You can't edit an existing Service's ClusterIP to "None", because the clusterIP field is immutable.
Label

→ Key/value pairs attached to an object

  app: my-app

→ Several objects/resources can have the same label

  my-pod-1 (app: my-app, version: v1), my-pod-2 (app: my-app)

kubernetes.io / k8s.io prefixes in label keys are reserved for Kubernetes core components.
How To?

} Create a Pod with two labels

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
    version: v1
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

Selector

→ Selectors use labels to filter or select objects/resources

→ Selectors can be:
  - equality-based: exact match with =, ==, !=
  - set-based: match expressions with in, notin, exists
How To?

} Show the labels for all of my Pods
$ kubectl get pod --show-labels -n my-namespace

} Create a Deployment that will manage Pods with the label app: my-app

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
  labels:
    app: my-app
spec:
  selector:
    matchLabels:      # equality-based
      app: my-app
} Create a Deployment that manages Pods that have the label version equal to v1 or v2 & app = my-app

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
    matchExpressions:                                         # set-based
    - {key: version, operator: In, values: ["v1","v2"]}

} List Pods that have the labels app: my-app & version: v2
$ kubectl get pod -l app=my-app,version=v2

} List Pods that have the label app: my-app & version = v1, v2 or v3
$ kubectl get pod -l 'app=my-app,version in (v1,v2,v3)'
Ingress

→ Allows access to your Services from outside the cluster

→ Avoids creating a LoadBalancer Service per Service

→ Single external endpoint (secured through SSL/TLS)
→ An Ingress is implemented by a 3rd party: an Ingress Controller
  ↳ extends the specs to support additional features

→ Consolidates several routes in a single resource
How To?

} Create an Nginx-based Ingress which defines two routes:
  /my-api/v1 to Service my-api-v1
  /my-api/v2 to Service my-api-v2

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: scraly.com       # if no host is specified, the rules apply to all inbound HTTP traffic
    http:
      paths:
      - path: /my-api/v1
        pathType: Prefix
        backend:
          service:
            name: my-api-v1
            port:
              number: 80
      - path: /my-api/v2
        pathType: Prefix
        backend:
          service:
            name: my-api-v2
            port:
              number: 80
In order to specify which Ingress should be handled by which controller:

metadata:
  name: my-gce-ingress
  annotations:
    kubernetes.io/ingress.class: gce

pathType:

Exact: matches the URL path exactly, with case sensitivity.

Prefix: matches based on a URL path prefix split by /.

  /my-api/v1 matches /my-api/v1, /my-api/v1/toto ...
} Create an Ingress secured through TLS

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress-with-tls
spec:
  tls:
  - hosts:
    - scraly.com
    secretName: secret-tls
  rules:
  - host: scraly.com
But Ingress controllers aren't homogeneous: depending on the chosen controller, the resource can have another name. For Contour, for example, the equivalent resource is an IngressRoute, and other controllers expose a "Gateway" instead.

"I'm not sure you have understood it..."
PV, PVC & StorageClass

In short: Pods are mortal by default & Nodes will not live forever too.

→ To store data permanently, use a PersistentVolume (PV)

→ Create a PersistentVolumeClaim (PVC) to request this physical storage

→ Pods will use this PVC as a volume
Static provisioning (without StorageClass):

  Pod → PVC → PV → physical storage (in a Namespace)

Dynamic provisioning:

  Pod → PVC → StorageClass → dynamically creates the PV → physical storage (in a Namespace)

PersistentVolume (PV)

→ Provides a storage location that has a lifetime independent of any Pod or Node

→ A PV is not sticked in a namespace

→ Supports different persistent storage types: NFS, cloud provider specific storage systems ...

→ After creation, a PV has a STATUS of Available. It means it's not bound to a PVC yet.

→ 2 sorts of provisioning:
  - static: without a StorageClass name
  - dynamic: with a StorageClass name
Access modes

ReadWriteOnce (RWO): the volume can be mounted as read-write by a single Node.

ReadWriteMany (RWX): the volume can be mounted as read-write by many Nodes.

ReadOnlyMany (ROX): the volume can be mounted as read-only by many Nodes.

ReadWriteOncePod (RWOP, since 1.22): the volume can be mounted as read-write by a single Pod.

"I'm the only one Pod who can read that PVC or write to it."
Reclaim Policy

Retain: appropriate for precious data. The PV is not deleted if a user deletes the PVC. The status of the PV changes to Released and all the data can be manually recovered.

Delete: default reclaim policy for dynamic provisioning. The PV is automatically deleted if a user deletes the PVC.
How To?

} Create a PV with NFS type, linked to an AWS EFS

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /
    server: fs-xxx.efs.eu-central-1.amazon.com
    readOnly: false

} Change the Reclaim Policy to Retain for my-pv
$ kubectl patch pv my-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

StorageClass

→ In order to set up dynamic provisioning

→ Different provisioners exist: GCE PersistentDisk, Azure Disk, AWS EBS ...

→ volumeBindingMode: Immediate means that volume binding & dynamic provisioning will occur when the PVC is created

How To?

} Create a StorageClass with SSD disks for Google Compute Engine (GCE)

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: faster
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
volumeBindingMode: Immediate
PersistentVolumeClaim (PVC)

→ Claims/requests a suitable PersistentVolume

→ A Pod uses a PVC to request physical storage

→ After the PVC deployment, the Control Plane searches for a suitable PV with the same StorageClass

→ When a PV is found & the PVC is attached to it, the PV status changes to Bound

How To?

} Create a PVC that requests a storage with 100 Gb of storage & read-write-many access

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: ""
  volumeName: my-pv
Attach to a Pod

→ A volume is accessible to all containers in a Pod

→ A container must mount the volume to access it

How To?

} Mount a PVC as a volume in a Pod, with read-only access

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    volumeMounts:
    - mountPath: "/data"
      name: nfs-storage
      readOnly: true
  volumes:
  - name: nfs-storage
    persistentVolumeClaim:
      claimName: my-pvc
Horizontal Pod Autoscaler (HPA)

→ Scales the number of Pods automatically, based on observed CPU usage ...
→ The autoscaling/v2beta2 API adds scaling by custom, external & multiple metrics (the v1 API only supports CPU)
→ Available for ReplicaSet, Deployment & StatefulSet
  (not for DaemonSet)
How To?

} Autoscale / create a HPA for a Deployment that maintains an average CPU usage across all Pods of 80%
$ kubectl autoscale deploy my-deployment --min=3 --max=10 --cpu-percent=80
New feature (1.18):

} Create a HPA with behavior:
  scale up 90% of the Pods every 15 sec,
  scale down 1 Pod every 10 min

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deploy
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
  behavior:
    scaleUp:
      policies:
      # scale up 90% of Pods every 15 sec
      - type: Percent
        value: 90
        periodSeconds: 15
    scaleDown:
      policies:
      # scale down 1 Pod every 10 min
      - type: Pods
        value: 1
        periodSeconds: 600
Limit Range

→ Default memory request/limit values for containers in a namespace

How To?

} Create a LimitRange

apiVersion: v1
kind: LimitRange
metadata:
  name: memory-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
In practice:

① Pod creation with one container, without memory request & limit

LimitRange admission controller: "A LimitRange exists in your namespace, so I append your configuration with the default values; now the Pod can be scheduled on a Node."

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    resources:              # added by the LimitRange admission controller
      requests:
        memory: 256Mi
      limits:
        memory: 512Mi
② Pod creation with one container, with a memory limit only

LimitRange admission controller: "A limit is defined but no request, so I add a memory request matching the limit; the Pod can be scheduled on a Node."

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    resources:
      limits:
        memory: 512Mi
      requests:             # added by the LimitRange admission controller
        memory: 512Mi
③ Pod creation with one container, with a memory request only

LimitRange admission controller: "A request is defined but no limit, so I add a memory limit equal to the default one defined in the LimitRange; the Pod can be scheduled on a Node."

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    resources:
      requests:
        memory: 100Mi
      limits:               # added by the LimitRange admission controller
        memory: 512Mi
Resource requests & limits

"If Pods use too much memory, the OOM Killer can destroy them. EXTERMINATE!"

→ An OOM (Out Of Memory) Killer runs on each Node.

In order to avoid this situation, you should define requests and limits for CPU & memory:

resources:
  requests:
    memory: 64Mi
    cpu: 250m
  limits:
    memory: 128Mi
    cpu: 500m
→ The Scheduler uses this information to decide on which Node the Pod can be scheduled.
"

r.
between request & limit ?

-
REQUEST

(nininumneededforpodt-b.hu
=

DLiM, =

142
Is there a difference between CPU and memory?

CPU
CPU is a compressible resource. Once a container reaches the limit, it will keep running (it is throttled).
Defined in milliCores: 1 core = 1000m

Memory
Memory is a non-compressible resource. Once a container reaches the memory limit, it will be terminated (OOMKilled).
Defined in bytes.
Defining requests & limits correctly is important for your applications' performance.
How To?

} Create a Pod with defined requests & limits

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-container
    image: my-image:1.0.0
    resources:
      requests:
        memory: 64Mi
        cpu: 250m
      limits:               # a limit can't be lower than the request
        memory: 128Mi
        cpu: 500m

} Display Pods' memory & CPU consumption
$ kubectl top pod

} Display containers' memory & CPU consumption
$ kubectl top pod --containers

} Display Nodes' memory & CPU consumption
$ kubectl top node
Tips

→ Requests and limits should be defined for each individual Pod, in order to be scaled correctly by the Horizontal Pod Autoscaler (HPA).

→ If a Pod has been killed by the OOM Killer, it will have the status OOMKilled.
Pod Disruption Budget (PDB)

"I defined 3 replicas for my Deployment, is it sufficient for my application availability?"
"Unfortunately, no."

→ A PDB is a way to increase application availability

→ A PDB provides protection against voluntary evictions:
  → Node drain
  → Rolling upgrade
  → Delete a Deployment
  → ... and delete a Pod
Deployment: spec.replicas: 3
Eviction: "I want to evict this Pod, but maxUnavailable = 1 ..."

A PDB cannot prevent involuntary disruptions.
How To?

} Create a Pod Disruption Budget (PDB) that allows only one disruption

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: ingress-nginx-controller-pdb
spec:
  minAvailable: 1       # how many Pods must always be available
  maxUnavailable: 1     # how many Pods can be evicted (set one of minAvailable / maxUnavailable, not both)
  selector:
    matchLabels:
      app: my-app       # match only Pods with the label app: my-app

It's also possible to define minAvailable as a percentage: minAvailable: 50%

} Show the PDBs in my-namespace
$ kubectl get pdb -n my-namespace
Quality of Service (QoS)

→ The Scheduler uses requests to make decisions about scheduling Pods onto Nodes.

"This Pod needs XX Mi of memory. OK, I can put it in this Node."
"This one has no requests or limits, I don't know if it's a big app or not..."
→ Depending on the requests and limits defined, a Pod is classified in a certain QoS class.

3 QoS classes:

Guaranteed

☆ The Scheduler assigns Guaranteed Pods only to Nodes which have enough resources, compliant with the CPU & memory requests.

☆ Pods are guaranteed to not be killed until they exceed their limits.
"

Burstab.LT/
☆ When they reached their limit ,

these Pods are killed be foie Guaranteed ores ,

if there aren't any Best Effort Pods ,

R☒
BestEf
☆ Containers can use
any
amount of free Memory
and CPU in the Node .

☆ If system don't have enovgh Resources,

Poets with Best Effort QoS will be the first killed .

152
Assign a specific QoS class to a Pod:

Guaranteed
☆ Requests and limits must be equal in all containers (even init containers)

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-container
    image: nginx
    ports:
    - containerPort: 80
    resources:
      requests:
        memory: "64Mi"
        cpu: "20m"
      limits:
        memory: "64Mi"
        cpu: "20m"
Burstable
☆ At least one container in the Pod must have a memory or CPU request defined

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-container
    image: nginx
    ports:
    - containerPort: 80
    resources:
      requests:
        memory: "64Mi"
      limits:
        memory: "128Mi"
BestEffort
☆ If requests and limits are not set for all containers, the Pods will be treated as the lowest priority and are classified as BestEffort.

How To?

} Get the QoS class for the my-pod Pod
$ kubectl get pod my-pod -o 'jsonpath={...qosClass}'
Network Policies

→ Network Policies allow you to control communication between Pods in a cluster

→ Scoped by namespace

→ Stable since 1.7 (June 2017)
→ Limited to IP addresses (no domain names)

→ Pre-requisite: use a CNI plugin which supports Network Policies

→ Network Policies use labels to select Pods and define rules which specify what traffic is allowed

"Ok, you are allowed."
How To?

① Restrict ingress traffic to Pods with the label role: my-role

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: my-namespace
spec:
  podSelector:            # if the podSelector is empty, it selects all Pods in the namespace
    matchLabels:
      role: my-role
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:        # allow connections from all Pods with the label role: allow-role
        matchLabels:      # to communicate to our Pods
          role: allow-role
    ports:
    - protocol: TCP
      port: 6379
② Allow egress connections for our Pods

  egress:
  - to:
    - podSelector:        # allow connections for our Pods to communicate to Pods
        matchLabels:      # with the label role: allow-to-role
          role: allow-to-role
    ports:
    - protocol: TCP
      port: 5978

When a Network Policy targets Pods, they become isolated & reject all traffic not defined in any Network Policy.
③ Deny all ingress traffic to our Pods

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-namespace
spec:
  podSelector: {}       # all Pods in my-namespace
  policyTypes:
  - Ingress

④ Allow all egress traffic from our Pods

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
  namespace: my-namespace
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - {}                  # allow all
Tips

→ In ingress & egress rules, several selectors exist:
  - podSelector
  - namespaceSelector
  - ipBlock
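For illustration (the label and CIDR ranges are placeholders), an ingress "from" clause can combine these selectors:

  ingress:
  - from:
    - namespaceSelector:       # Pods living in namespaces carrying this label
        matchLabels:
          team: my-team
    - ipBlock:                 # or traffic coming from this CIDR range
        cidr: 10.0.0.0/16
        except:
        - 10.0.1.0/24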
RBAC (Role-Based Access Control)

→ Method of regulating access to resources based on the roles of individual users

→ Useful when you want to control what your users can do, for which kind of resources, in your cluster

→ Kubernetes introduces 4 objects: Role, ClusterRole, RoleBinding, ClusterRoleBinding

→ Operations (verbs) allowed on resources:
  create, get, delete, list, update, watch ...
Role
→ List of verbs allowed on specific resources
→ Sets permissions within a given namespace

ClusterRole
→ Bigger power than a Role
→ Also grants permissions, but cluster-wide:
  - cluster-scoped resources (Nodes ...)
  - namespaced resources (like Pods) across all namespaces
  - non-resource endpoints (/healthz)

"Yes, Role & ClusterRole objects do the same things, but their scopes are different."
"A Kubernetes object is either namespaced or not, but not both."
RoleBinding
→ Link between a Role and a subject

→ Grants the permissions defined in a Role to a user or a set of users within a specific namespace

→ A RoleBinding can reference any Role in the same namespace

  Subjects (user, group, ServiceAccounts) → RoleBinding → Role, in a namespace

→ A RoleBinding should be defined per namespace

  Example: assign the view Role to my-user in my-namespace:
  my-user → RoleBinding → Role view
ClusterRoleBinding
→ Same as a RoleBinding but for all namespaces (cluster-wide)

  Example: assign my-group to read secrets in any namespace:
  my-group → ClusterRoleBinding → ClusterRole read-secret

Several ClusterRoles can be aggregated into one, thanks to the aggregationRule.
How To?

my-user → RoleBinding read-pods → Role pod-reader

① Create a Role that grants read Pods access in the my-namespace namespace

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-namespace
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]     # read access

② Create a RoleBinding that grants the pod-reader Role to a user

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: my-namespace
subjects:
- kind: User
  name: my-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
my-user → ClusterRoleBinding → ClusterRole secret-reader

① Create a ClusterRole that grants read secrets access in all namespaces in the cluster

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader     # no namespace in metadata
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]

② Create a ClusterRoleBinding that grants the secret-reader ClusterRole to a group

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global
subjects:
- kind: Group
  name: my-group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
Access a cluster as a ServiceAccount linked to a Role/ClusterRole:

kubectl → ServiceAccount → RoleBinding read-pods → ClusterRole secret-reader

① Create a namespace "test-ns"
$ kubectl create ns test-ns

② Create a ServiceAccount in this namespace
$ kubectl create serviceaccount my-sa-test -n test-ns

③ Create a RoleBinding that grants secret-reader to my-sa-test
$ kubectl create rolebinding my-sa-test-rolebinding -n test-ns
  --clusterrole=secret-reader --serviceaccount=test-ns:my-sa-test
④ Create a kubeconfig file for the created ServiceAccount

$ export TEAM_SA=my-sa-test
$ export NAMESPACE_SA=test-ns
$ export SECRET_NAME_SA=`kubectl get sa ${TEAM_SA} -n $NAMESPACE_SA -ojsonpath="{ .secrets[0].name }"`
$ export TOKEN_SA=`kubectl get secret $SECRET_NAME_SA -n $NAMESPACE_SA -ojsonpath='{.data.token}' | base64 -d`

$ kubectl config view --raw --minify > kubeconfig.txt
$ kubectl config unset users --kubeconfig=kubeconfig.txt
$ kubectl config set-credentials ${SECRET_NAME_SA} --kubeconfig=kubeconfig.txt --token=${TOKEN_SA}
$ kubectl config set-context --current --kubeconfig=kubeconfig.txt --user=${SECRET_NAME_SA}

⑤ Execute kubectl commands in the cluster as the ServiceAccount

$ kubectl --kubeconfig=kubeconfig.txt get secrets -n test-ns
Pod Security Policy (PSP)

(Deprecated since 1.21, removed in 1.25)

→ Set of conditions/rules a Pod must follow in order to be accepted on the cluster

→ A policy can contain rules on:
  - types of volumes
  - privileged Pods
  - read-only filesystem
  - kernel authorized capabilities (sysctl ...)
  - volume mount GID
  - hostPort
  - privilege elevation (sudo)
  - UID/GID used by the Pod
  - Seccomp / AppArmor profiles
→ By default, a Kubernetes cluster accepts all Pods.
  So, first you need to create policies & activate the PSP admission controller on your cluster in order to use them.

→ When PSPs are activated, every Pod that wants to run on the cluster needs to use at least one policy.

In practice:

→ A Pod is linked to a PSP thanks to its ServiceAccount:
  a RoleBinding (namespace) or ClusterRoleBinding (cluster-wide) associates a ClusterRole that grants permissions on the PSP to the desired ServiceAccount.
→ Only one PSP can be applied to a Pod

→ The PSP controller accepts or refuses Pods in the cluster

Mutable & non-mutable policies:

Non-mutable policy: "You shall not pass!" → it doesn't change the Pod.

Mutable policy: "Hey! You don't respect the rules, let's adapt/change a few things & you'll have the right to run on the Nodes."
"

ËD:-< ButifseveralPSPexist@how.isit
Woking ?

{

"

non mutable
policies first non mutable n

do to
Pod
Security mutable


if several mutable

{
policies matches ,
À
chose ◦ ne ordered

by name
%
a →

( alphabet ically ) à
.

174
How To?

① Define a policy named "my-psp" that prevents the creation of privileged Pods

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: my-psp
spec:
  privileged: false        # don't allow the creation of privileged Pods
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny         # other control aspects
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'                    # allow access to all available volumes

② Create a ClusterRole

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-cluster-role
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies    # ③ which is a PSP
  verbs:
  - use                    # ① grant "use" access
  resourceNames:
  - my-psp                 # ② to my-psp
③ Create a RoleBinding linked to a desired ServiceAccount (SA)

# Bind the ClusterRole to the desired set of service accounts.
# Policies should typically be bound to service accounts in a namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-rolebinding
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-cluster-role            # bind my-cluster-role ...
subjects:
- kind: ServiceAccount
  name: default                    # ... to the default SA in my-namespace
  namespace: my-namespace

} Create a RoleBinding linked to all ServiceAccounts in my-namespace

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-rolebinding
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-cluster-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
④ Enable the PodSecurityPolicy admission controller in an existing GKE cluster

$ gcloud beta container clusters update my-cluster --enable-pod-security-policy
Pod (Anti) Affinity & Node (Anti) Affinity

→ By default, when you deploy a Pod, Kubernetes chooses the right Node according to its needs (CPU, memory ...).

But did you know that you can control Pod and Node (anti) affinities?
→ Several ways to handle Node and Pod affinities:
  - NodeSelector
  - Node Affinity / Anti-Affinity
  - Pod Affinity / Anti-Affinity
NodeSelector

→ By default, Nodes already have labels (hostname, os, arch ...), but you can add your own labels to force a Pod to run on specific Nodes.
Node Affinity

"Only run the Pod on Nodes that match the criteria":

- requiredDuringSchedulingIgnoredDuringExecution (hard rule)

- preferredDuringSchedulingIgnoredDuringExecution (soft rule): if it's not possible, the Pod will be scheduled on another Node
Allowed operators: In, Exists, NotIn, DoesNotExist

"Node Anti-Affinity" is achieved with the NotIn / DoesNotExist operators.

If the label on a Node changes at runtime, nothing changes for the Pods already running (IgnoredDuringExecution).
Pod Affinity

→ Allows you to run a Pod on the same Node as another Pod

"We want to be together!" (Pod A & Pod B)

Useful for Pods that need to run on the same machine.
Pod Anti-Affinity

→ Allows you to run the replicas of a Deployment on different Nodes

"You will be able to distribute your Pods on different Nodes. If one Node dies, the app will still be available."
How To?

☆ List all Nodes with their labels
$ kubectl get node --show-labels

☆ Deploy a Pod on a Node that has a specific label

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:          # the Pod will run on a Node with the given label
    node: my-node
☆ Create a Pod with Node Affinity, that will run on a Node that has a specific label ... if possible

apiVersion: v1
kind: Pod
metadata:
  name: my-pod-with-node-affinity
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: app
            operator: In
            values:
            - my-app
  containers:
  - name: my-container
    image: busybox
☆ Create a Pod that will run next to Pods
with the label friend="true"
apiVersion: v1
kind: Pod
metadata:
name: my-pod-with-pod-affinity
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: friend
operator: In
values:
- true
topologyKey: topology.kubernetes.io/zone
containers:
- name: my-container
image: busybox

֍B
÷
"""

same
" " "

No de

À

À

-
t 187
☆ Create a Deployment with 3 replicas.
Each Pod must not run on the same Node.

apiVersion: apps/v1
kind: Deployment
metadata:
name: my-db
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
À
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- my-app
topologyKey: kubernetes.io/hostname
containers:
- image: my-image:1.0
name: my-container

-
-

↓ ŒEÏ ŒE%

188
Node

→ Pods run on a Node

→ A Node is a physical or virtual machine (VM)

189

→ Container runtime:
the component that pulls the container images
from a registry, unpacks the container
and runs the app.

190
[Sketch: the kubelet registers the Node if it doesn't already
exist and reports its conditions: Ready? Out of Disk? ...]

191

[Sketch: a cordoned Node shows the status
"Ready,SchedulingDisabled"]

192

How to?

} Show more information about a Node
(conditions, capacity, ...)

$ kubectl describe node my-node
193
Node Operations

Why?
☐ Debug issues on a Node
☐ Upgrade a Node
☐ Scale down the cluster
☐ Reboot a Node
...

Drain / Cordon / NoSchedule Taint

Drain

→ Evict all Pods from a Node & mark the Node
as unschedulable

$ kubectl drain my-node

→ Evict all Pods, including DaemonSets

$ kubectl drain my-node --ignore-daemonsets

194

→ Evict all Pods, including the ones not managed by a controller

$ kubectl drain my-node --force

Cordon

→ Make the Node unschedulable

$ kubectl cordon my-node

→ Make the Node schedulable again,
undo the cordon/drain

$ kubectl uncordon my-node

Taints

Node taints are key:value pairs associated with an effect.

NoSchedule:
Pods that don't tolerate the taint can't be scheduled on the Node.

195

PreferNoSchedule:
Kubernetes avoids scheduling Pods that don't tolerate the taint.

NoExecute:
Pods are evicted from the Node if they are already
running, else they can't be scheduled.

How to?

} Display taint information for all Nodes

$ kubectl get node \
    -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}:{.spec.taints}'

} Pause a Node, don't accept new workloads on it

$ kubectl taint nodes my-node my-key=my-value:NoSchedule

} Unpause a Node

$ kubectl taint nodes my-node my-key:NoSchedule-

196

} Create a Pod that tolerates the taint
specialkey=specialvalue

...
spec:
  tolerations:
  - key: "specialkey"
    operator: "Equal"
    value: "specialvalue"
    effect: "NoExecute"

} Create a Pod that stays bound 6000 seconds before
being evicted

...
spec:
  tolerations:
  - key: "my-key"
    operator: "Equal"
    value: "my-value"
    effect: "NoExecute"
    tolerationSeconds: 6000    # delay Pod eviction

Good to know: the Node controller can automatically taint a Node,
without any manual action.
Why?
☐ Node is not ready
☐ Node is unreachable
☐ Out of disk
☐ Network unavailable
197
Debugging / Troubleshooting

→ Debugging Kubernetes can be very tricky
when errors occur!

198

① Show more information about your Pods

→ Display all Pods in the namespace "my-ns"

$ kubectl get pod -n my-ns

→ Display information about a Pod

$ kubectl describe pod my-pod -n my-ns

These two easy commands can be helpful in
many situations, for example when a Pod is stuck in
CrashLoopBackOff status.
199
② My Pod is stuck in Pending status

→ The Pod can't be scheduled.

Solutions:

☐ Resource requests/limits are maybe higher than the cluster capacity
☐ Delete Pods / clean unused resources in the cluster
☐ Add Node capacity
☐ Add more Nodes
...

200

③ Sort events

→ Display all events for the namespace my-ns

$ kubectl get events \
    --sort-by=.metadata.creationTimestamp -n my-ns

→ Display only Warning messages

$ kubectl get events --field-selector type=Warning

Good to know: Events are namespaced.
Events are also displayed at the end of the output
when you execute a kubectl describe command.
201
④ My Pod is stuck in Waiting / ImagePullBackOff status

Questions:

☐ Image name, tag & URL are good?
☐ Image exists in the Docker registry?
☐ Can you pull it?
☐ Kubernetes has permissions to pull the image?
☐ An imagePullSecret is configured for a private registry?
202
1.18

→ Run an Ephemeral Container near the Pod
we want to debug

→ In this container you can execute curl, wget, ...
commands inside the Kubernetes environment

Pre-requisites: activate the feature gate
--feature-gates=EphemeralContainers=true

$ kubectl alpha debug -it my-pod --image=busybox \
    --target=my-pod --container=my-debug-container

→ Shares a process namespace
with a container inside the Pod:
the debug container can see all
processes created by my-pod

203
My Pod has been restarted multiple times

} If Pods use too much memory,
the OOM Killer can destroy them.

Solution:
Define memory requests and limits.

204
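A minimal sketch of such requests and limits (the values are only an example):

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image:1.0
    resources:
      requests:
        memory: "128Mi"     # what the scheduler reserves for the container
        cpu: "250m"
      limits:
        memory: "256Mi"     # above this, the container is OOMKilled
        cpu: "500m"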

My Pod can't start because of a missing ConfigMap or Secret

When you link a Pod to a ConfigMap and/or a Secret,
pay attention to create the linked resources first,
and to the names you used for the linked resources.

My Pod is Running but I don't know why it's not working

You can simply watch the Pod logs in order to try to understand:

$ kubectl logs my-pod

Good to know: when a Pod is evicted (or its Node terminated),
its logs are no longer available.
205

My container is restarting

→ A liveness probe is used to tell if a container is healthy,
but if it's misconfigured the container will be
considered as unhealthy.

Possible issues:

☐ Is the probe target accessible?
☐ The application takes a long time to start (see the tuning sketch below)
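A possible tuning for a slow-starting application, assuming an HTTP /healthz
endpoint (the values are only an example); you can also add a startupProbe
(see the 1.20 chapter):

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 30   # wait before the first probe
  periodSeconds: 10
  failureThreshold: 3       # restart only after 3 consecutive failures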

I want to access my Pod without an external
load balancer

→ Mount a tunnel between your Pod and your
computer:

$ kubectl port-forward my-pod 8080

206

→ Or mount a tunnel to a Service directly:

$ kubectl port-forward svc/my-svc 5050:5555

Good to know: if the remote port and the local port are the same,
just specify one.

→ Then you can simply curl on
the local port:

$ curl localhost:8080/my-api

207
kubectl convert

1.21

→ Kubernetes has 3 or 4 releases
every year

→ Several APIs are deprecated
& then removed

→ kubectl convert updates manifests to a specific
API version

208

How to?

} Convert my-ingress with an old API
to v1

$ kubectl convert -f my-ingress.yaml \
    --output-version networking.k8s.io/v1 > my-ingress-updated.yaml

} Convert my-pod to the latest version

$ kubectl convert -f my-pod.yaml
209
Tools

kubectx

« Manage & switch
between kubectl contexts »

https://github.com/ahmetb/kubectx

List all the existing clusters in your KUBECONFIG

$ kubectx

Switch / connect to a cluster

$ kubectx my-cluster

Switch to the previous cluster

$ kubectx -

Equivalent with plain kubectl:
$ kubectl config use-context my-cluster
210
"
Et
habens

a
Manage & switch

between
namespace )
s

https://github.com/ahmetb/kubectx

List all the the cluster


namespace in

$ kubens

Switch to a
namespace
$ kubens my-namespace

'

BÀerÇ $ kubectl config set-context —-current


—-namespace=my-namespace

211
"
Et
Stern

"
Kubectl logs
Lender steroids )
s

https://github.com/wercker/stern

Display logs of
" "

all Pods starting by a name

$ stern my-pod-start

Show Pods logs since 10 seconds

$ stern my-pod -c my-container -s 10s

Show Pods 's container with timestamp


$ stern my-pod-start my-container -t

'

Éternuant$ kubectl logs -f pod-1


$ kubectl logs -f pod-2

212
"
Et
krew
ITË Ê:ËÏË÷ÏËÊ ËÊÏÏË Ë÷:
"

Package manager
for kubectl plugins )
s

https://github.com/kubernetes-sigs/krew

List available plugins ( in the krew public index )


$ kubectl krew search

Install kubectx and habens Tools


$ kubectl krew install ctx
$ kubectl krew install ns

M-fterinstadation.thesetd-aeaa.ba
$ kubectl ctx and $ kubectl ns

Display installed plugins


213
$ kubectl krew list
'

Add
scraly private
'

index

$ kubectl krew index add scraly


https://github.com/scraly/krew-index

List all installed index


$ kubectl krew index list

List all available plugins index


'

scraly
'

in

$ kubectl krew search scraly

Install plugin
" "

season

"

( available scraly private index)


'

in

$ kubectl krew install scraly/season

plugin
"

Upgrade
"

season

$ kubectl krew upgrade season

214
"
Et
le 9s

ˌȥƥËᵉËÆ*ËÊÆ
LE
https://github.com/derailed/k9s

→ UI in CLI for Kubernetes

→ Kgs continu ally Watches for changes

Launch fr95
$ k9s

Run h 9s in a
namespace
$ k9s -n my-namespace

215
"
Et
k 3s

CC

Lightweight Kubernetes

cluster »

https://k3s.io

Install & launchk3skubernel.es cluster


$ curl -sfL https://get.k3s.io | sh -

216
skaffold

« Build, push & deploy
in one CLI »

https://skaffold.dev

Init your project (and create skaffold.yaml)

$ skaffold init

Build, tag, deploy & stream the logs every time
the code of your app changes

$ skaffold dev

Just build & tag an image

$ skaffold build

Deploy an image

$ skaffold deploy

Build, tag an image and output templated manifests

$ skaffold render

217

kustomize

https://github.com/kubernetes-sigs/kustomize

→ Built into kubectl since v1.14

→ Declarative, like Kubernetes
(≠ imperative, like Helm)

Logic: like a cake, with layers:

Base
  mix-in env
  mix-in secrets
  mix-in replicas
  ...

→ The aim is to add layers of modifications on top of a base
in order to add the functionalities we want

218

→ Works like Docker or Git: each layer represents
an intermediate system state

→ Each YAML file stays valid / usable outside
of Kustomize

?

Et
Il
|HowT/
_

① Define a
deployment .
yame file
apiVersion: apps/v1
kind: Deployment
metadata:
name: kustomize-app
labels:
app: kustomize
spec:
replicas: 1
selector:
matchLabels:
app: kustomize
template:
metadata:
labels:
app: kustomize
spec:
containers:
- name: app
image: gcr.io/foo/kustomize:latest
ports:
- containerPort: 8080
name: http
protocol: TCP
219
② Create a
file called Custom -
env .

yame
apiVersion: apps/v1
kind: Deployment
metadata:
name: kustomize-app
labels:
app: kustomize
spec:
replicas: 1
selector:
matchLabels:
app: kustomize
template:
metadata:
labels:
app: kustomize
spec:
containers:
- name: app

    env:                    # values added by Kustomize to our base,
    - name: MESSAGE_BODY    # via the overlay 'custom-env'
      value: "..."
    - name: MESSAGE_FROM
      value: "..."

③ Create a kustonization.name file


apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../../base } base = our deployment
patchesStrategicMerge:
- custom-env.yaml } patchs to
apply

④ Apply
$ kubectl apply -k /src/main/k8s/overlay/prod
220
ÎÎ
| c'ést
'

kustomize i

① Set / change
image tag
$ kustomize edit set image
my-image=my-repo/project/my-image:tag

② Create a secret

$ kustomize edit add secret my-secret


—-type kubernetes.io/dockerconfigjson
--from-literal=password="toto"

Cool, but what's the magic?
In fact, the kustomize edit command adds a
secretGenerator into the kustomization.yaml file:
221
generated huston : ration yaml
.
:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
- ../../base

patchesStrategicMerge:
- custom-env.yaml
- replica.yaml

secretGenerator:
- literals:
- password=toto
name: my-secret
type: kubernetes.io/dockerconfigjson

③ Add a
préfix and a
suffi x in resource 's name

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
- ../../base

namePrefix: my-
nameSuffix: -v1

222
④ Create a kustomization.yaml which creates a ConfigMap,
either from a file or from literals:

configMapGenerator:
- name: my-configmap
  files:
  - application.properties

configMapGenerator:
- name: my-configmap2
  literals:
  - key=value
⑤ Merge several files in one Config Map


configMapGenerator:
- name: my-configmap
behavior: merge
files:
- application.properties
- secret.properties

⑥ Di sable automatic hash siffix added

in
Config Map and Secret generated resources name

generatorOptions:
disableNameSuffixHash: true

o
generator Options change behari or

.
of all Config Map & Secret

generator

223
kubeseal

« Encrypt your Secrets
into SealedSecrets
& store them in Git »

https://github.com/bitnami-labs/sealed-secrets

Create a Secret

$ kubectl create secret generic my-token \
    --from-literal=my_token='12345' --dry-run=client -o yaml \
    -n my-namespace > my-token.yaml

Seal the Secret

$ kubeseal --cert tls.crt --format=yaml < my-token.yaml > \
    mysealedsecret.yaml

224
""
sealed secret
" .

q
secret
Y -

""

÷!
"

← "
ü
q

÷ ,

§
.

secret
my -

225
"
Et
" ""

ËÊ ↓☒ÊçË¥Ë
"
" " ""
Security "

scanner clusters
'

»
for .
ooo

https://github.com/aquasecurity/trivy

→ Vulnérabilités détection :

☐ of 0s
packages
☐ Application dependence ie
( npm , yarn , cargo , pipent composer bundle
,
,
no )

→ Misconfiguration detection

( Kubernetes ,
Docker , Terraform ooo )

→ Secret détection


Simple and Fast scanner


Easy integration for CI

→ A Knbernetes opera
tor 226
Scan an image

$ trivy image python:3.4-alpine

Scan and save a JSON report

$ trivy image golang:1.13-alpine -f json -o results.json

Scan all resources in the default namespace
of a Kubernetes cluster & display
a summary report

$ trivy k8s -n default --report summary

Scan & display only the most severe vulnerabilities

$ trivy k8s -n default --report all \
    --severity MEDIUM,HIGH,CRITICAL
227
velero

« Backup & restore Kubernetes
resources and persistent volumes »

https://velero.io

Create a full backup

$ velero backup create my-backup

Create a backup only for resources
matching a label

$ velero backup create my-light-backup \
    --selector app=my-app

Restore Istio Gateways & VirtualServices
from a backup

$ velero restore create --from-backup my-backup \
    --include-resources \
    gateways.networking.istio.io,virtualservices.networking.istio.io

Schedule a backup

$ velero schedule create daily-backup \
    --schedule="0 10 * * *"

228
Tips:

☐ Create a backup before a cluster upgrade

☐ It is a good practice to backup daily / weekly
your resources in sensitive clusters

☐ Enable the revisions feature on the backup location bucket
229
Popeye

https://github.com/derailed/popeye

→ Scans the cluster and outputs a report (with a score)

→ Customizable scans & reports

→ Several ways to install it
(locally, Docker, in the cluster, ...)

Scan a cluster, only in one namespace

$ popeye --context my-cluster -n my-ns

Scan only a list of specified resources

$ popeye --context my-cluster -n my-ns -s po,deploy,svc

230

Scan with a config file

$ popeye -f spinach.yaml

Scan & save the report (in HTML) in a S3 bucket

$ popeye --s3-bucket my-bucket/folder --out html
231
kyverno

https://kyverno.io

→ Kubernetes native policy management

→ 3 kinds of rules:
◦ validate
◦ mutate
◦ generate

→ Kyverno installs several webhooks:
NAME WEBHOOKS
AGE
validatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-policy-validating-webhook-cfg 1
52s
validatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-resource-validating-webhook-cfg 2
52s

NAME WEBHOOKS AGE


mutatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-policy-mutating-webhook-cfg 1 52s
mutatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-resource-mutating-webhook-cfg 2 52s
mutatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-verify-mutating-webhook-cfg 1 52s

232
[Sketch: the Kyverno admission webhooks intercept requests in the
cluster and apply the policies]

233

How to?

} Create a policy that disallows deploying Pods in
the default namespace

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: disallow-default-namespace
spec:
validationFailureAction: enforce
rules:
- name: validate-namespace
match:
resources:
kinds:
- Pod
validate:
message: "Using \"default\" namespace is not
allowed."
pattern:
metadata:
namespace: "!default"
- name: require-namespace
match:
resources:
kinds:
- Pod
validate:
message: "A namespace is required."
pattern:
metadata:
namespace: "?*"

234
Good to know: set validationFailureAction to "enforce" to block
resource creation or updates,
and to "audit" to only report violations.

} Create a
policy that create a
Config Map in all

name
Spaces excepted herbe system .

,
herbe -

public & kyverno


apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: zk-kafka-address
spec:
rules:
- name: zk-kafka-address
match:
resources:
kinds:
- Namespace
exclude:
resources:
namespaces:
- kube-system
- kube-public
- kyverno
generate:
synchronize: true
kind: ConfigMap
name: zk-kafka-address
# generate the resource in the new namespace
namespace: "{{request.object.metadata.name}}"
data:
kind: ConfigMap
metadata:
labels:
somekey: somevalue
data:
ZK_ADDRESS: "192.168.10.10:2181,192.168.10.11:2181"
KAFKA_ADDRESS:
"192.168.10.13:9092,192.168.10.14:9092" 235
Good to know: set synchronize to true to keep the generated
resource in sync when the source changes.

} Create a policy that adds the label
app=my-awesome-app to Pods,
Services, ConfigMaps & Secrets
in a given namespace

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: add-label
spec:
rules:
- name: add-label
match:
resources:
kinds:
- Pod
- Service
- ConfigMap
- Secret
namespaces:
- team-a
mutate:
patchStrategicMerge:
metadata:
labels:
app: my-awesome-app

} Deploy policy in the cluster

$ kubectl apply -f policy-add-label.yaml

236
} Display existing policies for all namespace
$ kubectl get cpol -A
NAME BACKGROUND ACTION READY
add-label true audit
disallow-default-namespace true enforce true

} Display reports for all namespaces

$ kubectl get policyreport -A

} View all policy violations in the cluster

$ kubectl describe polr -A | grep -i "Result: \+fail" -B10

} Check if policies are validates

$ kyverno validate *.yaml


→ And to display the reports in a graphical way:

https://github.com/Kyverno/policy-reporter
237
Kubernetes 1.18

GENERAL

◦ First release of 2020
◦ Its own logo

INGRESS CLASS

→ The new resource IngressClass replaces the
kubernetes.io/ingress.class annotation

↳ Defines the Ingress Controller
name & configuration parameters

→ Wildcard hosts are now supported: host: *.scraly.com
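A minimal sketch with the 1.18 networking.k8s.io/v1beta1 API
(the controller name and backend Service are only examples):

apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: my-ingress-class
spec:
  controller: example.com/ingress-controller   # which Ingress Controller handles it
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: my-ingress-class           # replaces the old annotation
  rules:
  - host: "*.scraly.com"                       # wildcard host
    http:
      paths:
      - path: /
        backend:
          serviceName: my-svc
          servicePort: 8080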

238
KUBECTL DEBUG - Alpha

→ Aim: run an Ephemeral Container near the
Pod we want to debug

Pre-requisites: activate the feature gate
--feature-gates=EphemeralContainers=true

How to:

$ kubectl alpha debug -it my-pod --image=busybox \
    --target=my-pod --container=my-debug-container

→ Shares a process namespace with a container inside the Pod:
the debug container can see all
processes created by my-pod
239
HPA SCALING BEHAVIOR

→ Aim: configure how fast the HorizontalPodAutoscaler
scales up and scales down

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: my-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
y name: my-deploy
minReplicas: 3

\
maxReplicas: 10
targetCPUUtilizationPercentage: 80
behavior:
scaleUp:
policies:
- type: Percent
value: 90
periodSeconds: 15
scaleDown:
policies:
# scale down 1 Pod every 10 min
- type: Pods
value: 1
periodSeconds: 600

- - -

240
IMMUTABLE SECRET & CONFIGMAP - Alpha

→ Allows to not edit sensitive data by mistake

How? A new immutable: true field

KUBECTL RUN

→ The kubectl run command now creates only a Pod

→ Aim: one command = one usage
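For example (the Pod and image names are only examples), this now creates a
single Pod instead of a Deployment:

$ kubectl run my-nginx --image=nginx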

241
Kubernetes 1.19

GENERAL

◦ Longest delivery cycle, due to COVID
◦ 34 enhancements
IMMUTABLE SECRET & CONFIGMAP - Beta

→ Allows to not edit sensitive data by mistake

→ Protects against accidental updates that can cause
downtime & optimizes performance, because the
Control Plane doesn't have to watch for updates

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-cm
immutable: true
data:
  my-key: my-value
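The same field works for a Secret; a minimal sketch (name and value are only
examples):

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
immutable: true
type: Opaque
data:
  my-key: bXktdmFsdWU=   # base64("my-value")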

242
DEPRECATION WARNINGS - Beta

→ Kubernetes now returns a warning message when you
use a deprecated API:

$ kubectl get ingress -n my-namespace


Warning: extensions/v1beta1 Ingress is deprecated in
v1.14+, unavailable in v1.22+; use
networking.k8s.io/v1 Ingress
No resource found in my-namespace namespace

243
?⃝
SECCOMP - GA

→ Seccomp (Secure Computing Mode) is a security feature
of the Linux kernel, used to restrict the syscalls
a process can make:

◦ Provides sandboxing
◦ Reduces the actions that a process can perform

→ Keeps your Pods secure

apiVersion: v1
kind: Pod
metadata:
name: my-audit-pod
labels:
app: my-audit-pod
spec:
securityContext:
seccompProfile:
type: Localhost
localhostProfile: profiles/audit.json
containers:
- name: test-container
image: hashicorp/http-echo:0.2.3
args:
- "-text=just made some syscalls!"
securityContext:
allowPrivilegeEscalation: false

244
Kubernetes 1.20

GENERAL

◦ 42 enhancements

DOCKER DEPRECATION

→ Docker support in the kubelet is now deprecated

→ But don't panic, your Docker produced
images will continue to work in your cluster

245
EXEC PROBE TIMEOUT

→ Before, exec liveness probes did not respect
timeouts. Now the field timeoutSeconds
is respected:
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: my-container
image: my-image:1.0
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 5

246
STARTUP PROBE - GA

→ Holds off all the other probes until the Pod
finishes its startup:

apiVersion: v1
kind: Pod
metadata:
name: my-pod
labels:
app: my-app
spec:
containers:
- name: my-container
image: my-image:1.0
ports:
- name: liveness-port
containerPort: 8080
livenessProbe:
httpGet:
path: /healthz
port: liveness-port
failureThreshold: 1
periodSeconds: 10
startupProbe:
httpGet:
path: /healthz
port: liveness-port
failureThreshold: 30    # 5 min (30 x 10s) to finish its startup
periodSeconds: 10

247
Kubernetes 1.21

GENERAL

◦ 51 enhancements

PSP DEPRECATION

→ PSP is deprecated & will continue to be
available and fully functional until 1.25.

https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-
present-and-future/
248
?⃝
CRONJOB - GA

→ The new CronJob controller v2 is now
Generally Available (GA).

MANAGED FIELDS

$ kubectl get <resource-type> <your-resource> -o yaml

will no longer display managed fields,
which makes reading and parsing the output easier.

249

DEFAULT CONTAINER ANNOTATION

→ The kubectl.kubernetes.io/default-container annotation
defines the default container targeted by kubectl commands:
apiVersion: v1
kind: Pod
metadata:
name: my-pod
annotations:
kubectl.kubernetes.io/default-container:
my-container-2
spec:
containers:
- name: my-container

,
image: my-image
- name: my-container-2
image: my-image-2

1
command: ["/bin/sh", "-c"]
args:
- while true; do
date >> /html/index.html;
sleep 1;
done

$ kubectl exec my-pod -it —- sh

250
Kubernetes 1.22

GENERAL

◦ The release cadence has changed: 3 releases per year
◦ 53 enhancements
KUBECTL CONVERT

→ kubectl convert helps you to migrate manifests
that use older, deprecated APIs:

$ kubectl convert -f role.yaml --output-version \
    rbac.authorization.k8s.io/v1 > role-updated.yaml

251

WARNING MECHANISM - GA

→ Kubernetes returns warning messages when you
use a deprecated API:

$ kubectl get ingress -n my-namespace


Warning: extensions/v1beta1 Ingress is deprecated in
v1.14+, unavailable in v1.22+; use
networking.k8s.io/v1 Ingress
No resource found in my-namespace namespace

IMMUTABLE LABEL

→ By default, namespaces are not guaranteed
to have any identifying label.

→ Welcome to the new immutable label
kubernetes.io/metadata.name, added
to all namespaces.
↳ Its value is the namespace name.

→ Useful, for example, in a NetworkPolicy namespaceSelector.
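A minimal sketch of why it is useful: selecting traffic coming from one specific
namespace in a NetworkPolicy, without having to label that namespace yourself
(the namespace names are only examples):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-team-a
  namespace: my-namespace
spec:
  podSelector: {}              # all Pods of my-namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: team-a   # the new immutable label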
252
Kubernetes 1.23

GENERAL

◦ Last release of 2021
◦ 47 enhancements
HPA v2 API - GA

→ The autoscaling/v2 API of the HorizontalPodAutoscaler
is now stable, including the scaling behavior configuration:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: my-hpa
spec: - - -

scaleTargetRef:
apiVersion: apps/v1

y H
kind: Deployment
name: my-deploy
minReplicas: 3
maxReplicas: 10
targetCPUUtilizationPercentage: 80
behavior: -

scaleUp:
policies:
- type: Percent
value: 90
periodSeconds: 15

253
KUBECTL DEBUG / EPHEMERAL CONTAINERS - Beta

→ Aim: run an Ephemeral Container near the
Pod we want to debug

$ kubectl debug -it my-pod --image=busybox \
    --target=my-pod --container=my-debug-container

→ Shares a process namespace with a container inside the Pod:
the debug container can see all
processes created by my-pod

254

TTL AFTER FINISHED - GA

→ Welcome to the new field ttlSecondsAfterFinished
in the Job spec, that will delete the Job once it has
finished.
No need to schedule cleanup of Jobs anymore!
apiVersion: batch/v1
kind: Job
metadata:
name: my-job-with-ttl
spec:
ttlSecondsAfterFinished: 100
template:
metadata:
name: my-job-with-ttl
spec:
containers:
- name: busybox
image: busybox

255
POD SECURITY ADMISSION - Beta

→ PodSecurityPolicy will be removed in 1.25.

→ The new Pod Security Admission is now enabled by
default, and built-in, in order to prevent Pods
from ever having dangerous capabilities.

DUAL-STACK IPv4/IPv6 - GA

→ All clusters now support IPv4 and IPv6 at the same
time (dual-stack).

Good to know: to use this feature, Nodes must have IPv4 &
IPv6 network interfaces, you must use a dual-stack capable
network plugin & configure Services to use both IP families.
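A minimal sketch of a dual-stack Service, assuming the cluster and network
plugin are configured for dual-stack (names and ports are only examples):

apiVersion: v1
kind: Service
metadata:
  name: my-svc
spec:
  ipFamilyPolicy: PreferDualStack   # ask for an IPv4 and an IPv6 ClusterIP when possible
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080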

256
Kubernetes 1.24

GENERAL

◦ First release of 2022
◦ 46 enhancements

DOCKERSHIM REMOVAL

→ Docker support in the kubelet is now removed.

→ But don't panic, your Docker produced
images will continue to work in your cluster!

257
GRPC PROBES - Beta

→ Aim: configure startup / liveness / readiness probes
over gRPC:

apiVersion: v1
kind: Pod
metadata:
name: etcd-with-grpc
spec:
containers:
- name: etcd .

image: k8s.gcr.io/etcd:3.5.1-0

Ls
command: [ "/usr/local/bin/etcd", "--data-dir",
"/var/lib/etcd", "--listen-client-urls", "http://
0.0.0.0:2379", "--advertise-client-urls", "http://
127.0.0.1:2379", "--log-level", "debug"]
ports:
- containerPort: 2379
livenessProbe:
grpc:
port: 2379
initialDelaySeconds: 10

LOADBALANCER CLASS - GA

→ New field loadBalancerClass in the Service
spec, that allows to specify the kind
of load balancer you want.
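A minimal sketch (the class name is only an example, it depends on the load
balancer implementation installed in the cluster):

apiVersion: v1
kind: Service
metadata:
  name: my-svc
spec:
  type: LoadBalancer
  loadBalancerClass: example.com/my-lb   # handled by this implementation instead of the default one
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080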

258

OOM EVENTS METRIC

→ Welcome to a new metric in the kubelet:
container_oom_events_total, that allows
you to count OOM (Out of Memory) events
that happen in each container.

259
BETA APIs OFF BY DEFAULT

→ Alpha APIs are disabled by default and can be
enabled or disabled with feature gates.
Now, new Beta APIs are disabled by default too!

CREATE A SA TOKEN

→ ServiceAccount token Secrets are no longer auto-generated;
you can still create one manually:
apiVersion: v1
kind: Secret
metadata:
name: my-sa-secret
annotations:
kubernetes.io/service-account.name: mysa
type: kubernetes.io/service-account-token
---
|
apiVersion: v1
kind: ServiceAccount
metadata:
name: read-only
namespace: default
secrets: my-sa-secret

260
Kubernetes 1.25

GENERAL

◦ 40 enhancements

PSP REMOVAL

→ PSP has been removed because of its
complexity, but don't panic, a replacement
exists: Pod Security Admission.

Migration guide:

https://kubernetes.io/docs/tasks/configure-pod-
container/migrate-from-psp/

261

"ᵉ^"""" .AlphaMhW

[
The fact is that there are a lot of
" "" " " " "" " " " " " ""

to Pod
given a .

Welcome to a
long awaited feature 8

user nan
espaces support in Kerber net es MY

jPre -

requis
ites 8
" ☐
active feature gate -

|[, g. ü
User Namespace Support = true

Now to :

apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
hostUsers: false
containers:
- name: my-container
image: my-image

æËÇÊi 262
POD SECURITY ADMISSION - Stable

→ The PSP replacement is now stable and built-in,
in order to prevent Pods from ever having
dangerous capabilities.

CONTAINER CHECKPOINT - Alpha

→ This feature allows taking
a snapshot of a running container, that
can then be transferred to another Node
or analyzed offline.

Pre-requisites: activate the feature gate
--feature-gates=ContainerCheckpoint=true
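Checkpointing goes through the kubelet API; a sketch of the call, as described
in the 1.25 release announcement (node address, namespace, Pod and container
names are only examples, and authentication against the kubelet is required):

$ curl -X POST "https://localhost:10250/checkpoint/my-namespace/my-pod/my-container"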

263
JOB POD FAILURE POLICY - Alpha

→ Thanks to this feature you can
determine if and when a Job should be retried.

Pre-requisites: activate the feature gate
--feature-gates=JobPodFailurePolicy=true
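A minimal sketch, assuming the alpha podFailurePolicy field of batch/v1 Jobs
(the container name and exit code 42 are only examples):

apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  backoffLimit: 6
  podFailurePolicy:
    rules:
    - action: FailJob            # don't retry when the app exits with this code
      onExitCodes:
        containerName: my-container
        operator: In
        values: [42]
    - action: Ignore             # don't count Pod disruptions (eviction, drain, ...) as retries
      onPodConditions:
      - type: DisruptionTarget
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: my-container
        image: busybox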

KUBECTL DEBUG / EPHEMERAL CONTAINERS - GA

→ Aim: run an Ephemeral Container near the
Pod we want to debug

$ kubectl debug -it my-pod --image=busybox \
    --target=my-pod --container=my-debug-container

→ Shares a process namespace with a container inside the Pod:
the debug container can see all
processes created by my-pod
264
Kubernetes 1.26

GENERAL

◦ 37 enhancements
◦ The power of the community
POD DISRUPTION BUDGET
& HEALTHY PODS - Alpha

→ A PDB only takes into account Running Pods, but
a Pod can be Running and not Ready.
This feature allows you to control the eviction of such
unhealthy Pods.

Pre-requisites: activate the feature gate
--feature-gates=PDBUnhealthyPodEvictionPolicy=true
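A minimal sketch, assuming the alpha unhealthyPodEvictionPolicy field
(names and values are only examples):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app
  unhealthyPodEvictionPolicy: AlwaysAllow   # Running but not Ready Pods can always be evicted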

265
PROVISION FROM VOLUMESNAPSHOT
ACROSS NAMESPACES - Alpha

→ Aim: create a PersistentVolumeClaim
from a VolumeSnapshot located in another
namespace.

[Sketch: a Pod in namespace 1 provisions its volume
from a VolumeSnapshot stored in namespace 2]
266
MIXED PROTOCOL LOADBALANCER SERVICE

→ Aim: create a Service of type
LoadBalancer that has different port
definitions with different protocols (TCP & UDP):
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx

namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
type: LoadBalancer
ports:
- name: tcp
port: 5555
targetPort: 5555
protocol: TCP
- name: udp
port: 5556
targetPort: 5556
protocol: UDP
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx

267
WINDOWS PRIVILEGED CONTAINERS - GA

→ HostProcess containers on Windows (the equivalent
of privileged containers) are now GA: useful to run
privileged daemons directly on the Windows host.

CRI v1alpha2 REMOVAL

→ The kubelet now only speaks CRI v1:
containerd 1.5 and older are not supported anymore
(upgrade to containerd 1.6+).
268
Docker deprecation

1.20

→ Docker support in the kubelet is now deprecated
& will be removed in a future release

But don't panic!

◦ Build: images can still be built locally with Docker,
pushed through CI/CD & pulled by Kubernetes

269

→ Docker produces OCI (Open Container Initiative) images

→ Kubernetes will use containerd as its CRI runtime

Previous architecture (dockershim support):

kubelet → dockershim → DockerD → containerd

Problems:

☆ Docker includes so many components (networking,
volumes, UX enhancements, ...)
→ security risks

270

☆ The kubelet only understands CRI runtimes

☆ Need to maintain the dockershim bridge

New architecture:

kubelet → containerd

→ containerd is one option for the container runtime,
but there is also CRI-O.

Good to know: the container runtime is responsible for pulling
& running container images.

271
Glossary

> CJ: CronJob
> CM: ConfigMap
> CNI: Container Network Interface
> CRI: Container Runtime Interface
> HPA: Horizontal Pod Autoscaler
> NP: Network Policy
> NS: Namespace
> OCI: Open Container Initiative
> PDB: Pod Disruption Budget
> PSP: Pod Security Policy
> PV: Persistent Volume
> PVC: Persistent Volume Claim
> QoS: Quality of Service
> RBAC: Role-Based Access Control
> SA: Service Account
272
Who am I?

Dev Rel (Developer Advocate) & DevOps, 16+ years of experience

Google Developer Expert in Cloud technologies
CNCF Ambassador
Docker Captain
Duchess France / Women in tech association

Technical writer - Speaker - Sketchnoter

Contact me!
Aurélie Vache - @aurelievache

Abstract

Understanding Kubernetes can be difficult or time-consuming.
I created this collection of sketchnotes about Kubernetes
in order to try to explain the technology in a visual way.

Inside:
Kubernetes Components
Resources
Concrete examples
Tips & Tools

You might also like