
GIP INTERNSHIP TASK

Deploy a Two-Node Kubernetes Cluster on AWS


Mini Project

Name - Himanshu Patil

Date - 10/07/24

Objective :
This task involves using Terraform to provision the necessary AWS resources for a two-node
Kubernetes cluster (1 control plane and 1 worker node).

Scenario :
Your client requires a Kubernetes cluster deployed on AWS. This task focuses on the initial
infrastructure provisioning using Terraform. The DevOps team will handle the Kubernetes cluster
deployment itself.

Constraints :
- Use Terraform for provisioning the resources.
- Reference the official Kubernetes documentation for hardware recommendations for control
plane and worker nodes.
- Use Ubuntu 20.04 as the operating system for your instances.
- Make sure the necessary ports are open in the security group for the Kubernetes cluster
to work.

Completion Criteria:
- Terraform configuration is created to define two EC2 instances for a Kubernetes cluster.
- The chosen instance types are appropriate based on Kubernetes recommendations and client
requirements.
- The Terraform code is applied, and two EC2 instances are provisioned in your AWS account with
the selected AMI and security group configurations.
Solution :
1 . Make sure you have access to a Linux CLI.

2 . Then install Terraform on it with the following commands.
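
These are HashiCorp's documented apt-based install commands for Ubuntu:

wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform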

3 . Install the AWS CLI on it to use AWS commands, using the commands below.

Then configure the AWS CLI with your access key, secret key, and region name.

Make sure you have created an IAM user in AWS with appropriate access.
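
AWS publishes the v2 installer as a zip; after installing, aws configure prompts for the access key, secret key, region, and output format:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws configure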

4 . Then create a main.tf file defining the resources on AWS, such as the instances, VPC,
security group, etc.

provider "aws" {

region = "us-west-2"

variable "aws_region" {

description = "The AWS region to deploy resources."

default = "us-west-2"

variable "instance_type" {

description = "The instance type for the Kubernetes nodes."

default = "t2.medium"

variable "key_name" {

description = "The name of the SSH key pair."

data "aws_ssm_parameter" "ubuntu_ami" {

name = "/aws/service/canonical/ubuntu/server/20.04/stable/current/amd64/hvm/ebs-gp2/ami-id"
}

resource "aws_vpc" "main" {

cidr_block = "10.0.0.0/16"

resource "aws_internet_gateway" "main" {

vpc_id = aws_vpc.main.id

resource "aws_subnet" "public_subnet" {

vpc_id = aws_vpc.main.id

cidr_block = "10.0.1.0/24"

map_public_ip_on_launch = true

resource "aws_route_table" "public" {

vpc_id = aws_vpc.main.id

route {

cidr_block = "0.0.0.0/0"

gateway_id = aws_internet_gateway.main.id

resource "aws_route_table_association" "public" {

subnet_id = aws_subnet.public_subnet.id

route_table_id = aws_route_table.public.id

resource "aws_security_group" "k8s_cluster" {

vpc_id = aws_vpc.main.id

ingress {

from_port = 22

to_port = 22

protocol = "tcp"

cidr_blocks = ["0.0.0.0/0"]

ingress {

from_port = 6443

to_port = 6443

protocol = "tcp"

cidr_blocks = ["0.0.0.0/0"]
}

ingress {

from_port = 2379

to_port = 2380

protocol = "tcp"

cidr_blocks = ["0.0.0.0/0"]

ingress {

from_port = 8080

to_port = 8080

protocol = "tcp"

cidr_blocks = ["0.0.0.0/0"]

ingress {

from_port = 80

to_port = 80

protocol = "tcp"

cidr_blocks = ["0.0.0.0/0"]

ingress {

from_port = 10250

to_port = 10252

protocol = "tcp"

cidr_blocks = ["0.0.0.0/0"]

ingress {

from_port = 30000

to_port = 32767

protocol = "tcp"

cidr_blocks = ["0.0.0.0/0"]

egress {

from_port = 0

to_port =0

protocol = "-1"

cidr_blocks = ["0.0.0.0/0"]
}

resource "aws_instance" "master" {

ami = data.aws_ssm_parameter.ubuntu_ami.value

instance_type = var.instance_type

subnet_id = aws_subnet.public_subnet.id

key_name = var.key_name

vpc_security_group_ids = [aws_security_group.k8s_cluster.id]

tags = {

Name = "K8s-Master"

resource "aws_instance" "worker" {

ami = data.aws_ssm_parameter.ubuntu_ami.value

instance_type = var.instance_type

subnet_id = aws_subnet.public_subnet.id

key_name = var.key_name

vpc_security_group_ids = [aws_security_group.k8s_cluster.id]

tags = {

Name = "K8s-Worker"

5 . After creating the main.tf file, initialize Terraform with the following command.
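
terraform init downloads the AWS provider and prepares the working directory:

terraform init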

6 . After initializing Terraform, apply the changes with the following command.
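
terraform apply shows the planned changes and asks for confirmation; it also prompts for var.key_name, since that variable has no default:

terraform apply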
7 . After applying the changes, verify that they are live on AWS: check that the 2
instances are in the running state.
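
One way to verify from the CLI (the tag values below match the Name tags set in main.tf):

aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=K8s-Master,K8s-Worker" \
    --query 'Reservations[].Instances[].[Tags[?Key==`Name`]|[0].Value,State.Name,PublicIpAddress]' \
    --output table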

8 . Let's SSH into the instance and go inside it.

Update the instance with "apt update".
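
A sketch of the SSH command, assuming the private key file matches the key pair named in var.key_name (substitute your own path and IP; "ubuntu" is the default user on Ubuntu AMIs):

ssh -i ~/.ssh/<key_name>.pem ubuntu@<master-public-ip>
sudo apt update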


Now create a script named k8s.sh for installing Kubernetes, and add the
following commands to it.

vi k8s.sh

-----------On control plane (Instance 1)------------------------------

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf


overlay
br_netfilter
EOF

sudo modprobe overlay


sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system


curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo systemctl restart docker
sudo systemctl enable docker
sudo docker ps

# Docker's containerd package ships with the CRI plugin disabled
# (disabled_plugins = ["cri"] in /etc/containerd/config.toml); clear that
# list so the kubelet can use containerd, then restart it.
sudo sed -i 's/disabled_plugins = \["cri"\]/disabled_plugins = []/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1

sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml

sudo systemctl restart containerd


sudo systemctl enable containerd
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo kubeadm init

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

9 . Apply a network add-on to make the cluster ready.


kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
10 . Then check that all the pods are in the Running state, and make sure every
system pod is up before continuing.

kubectl get pods -n kube-system

11 . Open the worker node and run the same k8s.sh script there as on the master
node; it is identical except that it stops before the kubeadm init and kubectl steps.
-----------On worker node (Instance 2)------------------------------

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf


overlay
br_netfilter
EOF

sudo modprobe overlay


sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system


curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo systemctl restart docker
sudo systemctl enable docker
sudo docker ps

# Docker's containerd package ships with the CRI plugin disabled
# (disabled_plugins = ["cri"] in /etc/containerd/config.toml); clear that
# list so the kubelet can use containerd, then restart it.
sudo sed -i 's/disabled_plugins = \["cri"\]/disabled_plugins = []/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1

sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml

sudo systemctl restart containerd


sudo systemctl enable containerd
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl

12 . Copy the join command printed by kubeadm init and run it on the worker node (Instance 2).
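
kubeadm init prints the exact join command at the end of its output; if it has scrolled away, regenerate it on the master. The values in angle brackets below are placeholders:

# On the master, print a fresh join command:
sudo kubeadm token create --print-join-command

# On the worker, run the printed command; it has this shape:
sudo kubeadm join <master-private-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>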

13 . Then, on the master, check whether the worker node has joined by running the
following command.
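
Both nodes should be listed, reaching STATUS Ready once the network add-on pods are running on both:

kubectl get nodes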

14 . Finally, your Kubernetes cluster is up as a 1 + 1 cluster (1 control-plane node
+ 1 worker node).
