Kubernetes Zero-to-Hero: Basic Installation Steps
swapoff -a
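Note that swapoff -a only disables swap until the next reboot; to keep it off permanently, the swap entry in /etc/fstab is usually commented out as well, for example:
sudo sed -i '/ swap / s/^/#/' /etc/fstab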
5. exec bash
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
9. Load the modules now, without rebooting:
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
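After writing the file, apply the settings without a reboot by reloading sysctl:
sudo sysctl --system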
15. Add the GPG key, create the Kubernetes apt repository, and update the package index.
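The exact commands used in the original run are not shown; one common way, following the upstream Kubernetes packaging docs for v1.28 (the minor version seen later in the Results output), is:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl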
25. On the master node only: initialize the Kubernetes cluster (control plane).
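The exact flags from the original run are not shown; assuming Calico's default pod CIDR of 192.168.0.0/16 (Calico is installed in step 29), a typical invocation is:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16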
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
--discovery-token-ca-cert-hash sha256:91b7274ff12f4f4179dffa22c8bdbaf8bcd51034412c50afdd266badac7ebf7d
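This flag is the tail of the kubeadm join command printed by kubeadm init; the token portion is not shown here. The generic form, with placeholders for the values from your own cluster, is:
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>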
29. On the master only: install the Calico container network solution, which allows the Kubernetes worker nodes to communicate:
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml -O
31. ls -l
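The surrounding commands are not shown here, but per the Calico v3.26 install docs the Tigera operator manifest is applied first, and the downloaded custom-resources.yaml is then created in the cluster:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
kubectl create -f custom-resources.yaml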
35. kubectl get pods -A
Results:
I0304 14:25:24.481213 2074 version.go:256] remote version is much newer: v1.29.2; falling back to: stable-1.28
I0305 12:38:42.382799 19523 version.go:256] remote version is much newer: v1.29.2; falling back to: stable-1.26
root@k8s:/home/user#
I0304 14:39:36.814369 2706 version.go:256] remote version is much newer: v1.29.2; falling back to: stable-1.28
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] apiserver serving cert is signed for DNS names [k8s kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.163.134]
[certs] etcd/server serving cert is signed for DNS names [k8s localhost] and IPs [192.168.163.134 127.0.0.1 ::1]
[certs] etcd/peer serving cert is signed for DNS names [k8s localhost] and IPs [192.168.163.134 127.0.0.1 ::1]
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.505938 seconds
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[mark-control-plane] Marking the node k8s as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.163.134:6443 --token 15783p.y234zn73ostwsz1l \
    --discovery-token-ca-cert-hash sha256:7d9f3eac51ad5554502ef99165f4a345a9d852f86cc30606d2b71aeb858d9825
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

NAME                 STATUS    MESSAGE
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy
root@k8s:/home/user#
(truncated verbose curl output from a check against 127.0.0.1:10249, the kube-proxy metrics port; the response body is not shown)
pod/command-demo created
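That line is the output of applying a Pod manifest named command-demo. The manifest used in the original run is not shown, but it likely matches the command-demo example from the Kubernetes docs, which can be applied directly as a heredoc:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["printenv"]
    args: ["HOSTNAME", "KUBERNETES_PORT"]
  restartPolicy: OnFailure
EOF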
export KUBECONFIG=/etc/kubernetes/admin.conf
node/node1 labeled
node/node2 labeled
root@k8s:/home/user#
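The node/node1 labeled and node/node2 labeled lines above are the usual output of kubectl label node. The actual label key used in the original run is not shown; a plausible command, assuming the common worker-role convention, is:

kubectl label node node1 node-role.kubernetes.io/worker=worker
kubectl label node node2 node-role.kubernetes.io/worker=worker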