
Kubernetes

Kubernetes is the crown jewel of container orchestration. The product was developed
by Google, leveraging years of knowledge about running containers in production.
Initially, it was an internal system used to run Google services, but at some point, it became
a public project. Nowadays, it is an open source project maintained by several companies (Red
Hat, Google, and so on) and is used by thousands of companies. At the time of writing,
the demand for Kubernetes engineers has skyrocketed to the point that companies are
willing to hire people without expertise in the field, as long as they have a good attitude
toward learning new technologies.
Kubernetes has become so popular due to, in my opinion, the following factors:
- It solves the most common deployment problems
- It automates microservices' operations
- It provides a common language that connects operations and development through a clean
interface
- Once it is set up, it is very easy to operate
Nowadays, one of the biggest problems for companies that want to shorten the delivery life
cycle is the red tape that has grown around the delivery process. Quarterly releases are no
longer acceptable in a market where a company of five skilled engineers can overtake a
classic bank, simply because they can cut the red tape and streamline a delivery process
that allows them to release multiple times a day.

Cluster Components
The cluster is composed mainly of two types of resources:
- Master: the VM that controls everything in the cluster. It is in charge of
making sure that the desired configuration is achieved if there are enough resources in
the cluster, or of achieving the best possible state when resources are lacking.
- Worker Node: or just Node, the VM in charge of running the workloads (containers).
It receives orders from the master about which containers to run and how to run them.

In general, any cluster will have at least one master node and many worker nodes. The
master node usually does not run any workload containers, although it can run control-plane
containers.
This is the schema of a cluster:
Kubernetes Components
The first problem that you will find once you start playing with Kubernetes is building a
mental map of how and where everything runs in Kubernetes, as well as how everything is
connected. Here, we are going to focus on the building blocks of Kubernetes, since in
class we have only seen a managed cluster such as GKE.

Pods
Pods are the most basic element of the Kubernetes API. A Pod is essentially a set of
containers that work together to provide a service, or part of one. The concept of a Pod
can be misleading: the fact that we can run several containers working
together suggests that we should put the frontend and backend of our application in
a single Pod, since they work together. Even though we can do this, it is a practice that I would
strongly suggest you avoid. By bundling the frontend and the backend together,
we lose much of the flexibility that Kubernetes provides, such as independent
autoscaling, load balancing, or canary deployments.
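A minimal Pod manifest might look like the following sketch. The names and image are hypothetical; the point is that the Pod wraps a single concern (here, a web server) rather than bundling frontend and backend together:

```yaml
# A minimal single-container Pod (names and image are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: web-pod          # hypothetical Pod name
  labels:
    app: web             # label used later by Services and Deployments
spec:
  containers:
  - name: web
    image: nginx:1.25    # any container image would work here
    ports:
    - containerPort: 80
```

You would rarely create bare Pods like this in production; they are normally managed for you by a Replica Set or a Deployment.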

Deployments
Even though the Replica Set is a very powerful concept, there is one aspect of it that we have
not talked about: what happens when we apply a new configuration to a Replica Set in order
to upgrade our application? How does it keep our application alive 100% of the time,
without service interruption? Well, the answer is simple: it
doesn't. If you apply a new configuration to a Replica Set with a new version of the image,
the Replica Set will destroy all the Pods and create new ones without any guaranteed
order or control. To ensure that our application is always up with a guaranteed
minimum number of Pods, we need to use Deployments.
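A Deployment wraps a Replica Set and adds a controlled rollout strategy. The sketch below (names and image are illustrative) uses a rolling update, so Kubernetes replaces Pods gradually instead of destroying them all at once:

```yaml
# A Deployment with a rolling-update strategy (names are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod may be down during an upgrade
      maxSurge: 1         # at most one extra Pod may be created during an upgrade
  selector:
    matchLabels:
      app: web
  template:               # Pod template managed by the underlying Replica Set
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Changing the image version in this manifest and re-applying it triggers a rolling update: new Pods are created and old ones are terminated one at a time, so at least two replicas stay available throughout.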

Services
Up until now, we have been able to deploy containers into Kubernetes and keep them alive by
making use of Pods, Replica Sets, and Deployments, but so far, you have not learned
how to expose applications to the outside world or how to make use of service discovery and
load balancing within Kubernetes.
A Service is essentially a mechanism offered by Kubernetes to connect applications running
inside the cluster to the outside world and to each other. A Service consists of an entry in the
internal Kubernetes DNS plus a load balancer that distributes requests across all
the available replicas of the application.
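A Service manifest along these lines would expose the Pods described above. The names are hypothetical; what matters is that the `selector` matches the Pods' labels, so the Service load-balances across all replicas carrying that label:

```yaml
# A Service exposing Pods labeled app=web (names are illustrative)
apiVersion: v1
kind: Service
metadata:
  name: web-service      # also becomes the DNS name inside the cluster
spec:
  type: LoadBalancer     # ClusterIP would keep it internal to the cluster
  selector:
    app: web             # traffic is routed to Pods with this label
  ports:
  - port: 80             # port the Service listens on
    targetPort: 80       # port the container listens on
```

Inside the cluster, other applications can reach these replicas simply via the DNS name `web-service`, regardless of which Pods are currently backing it.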
