Getting Started With Kubernetes

Kubernetes has grown tremendously over the years and is considered by many to be one of the best orchestration tools available today. That’s why professionals interested in a career in DevOps or cloud computing have been gaining expertise with this wildly popular and effective framework. Behemoths like Google, Airbnb, Spotify, and Pinterest have all been leveraging Kubernetes for years.

Let’s dig into how professionals can work at getting started with Kubernetes.

Why Kubernetes?

Next in the Getting Started with Kubernetes tutorial, we will learn about some of the salient features of Kubernetes:

  1. Kubernetes can run on OpenStack and on public clouds such as Google Cloud Platform, Azure, and AWS, among many other platforms.
  2. Kubernetes’ modularity enables better management by decomposing applications into smaller containerized parts.
  3. Kubernetes enables administrators to create multiple environments on the available infrastructure.
  4. Kubernetes can run any containerized application and is easy to manage on virtual infrastructure.

What is Kubernetes?

Kubernetes is an open-source platform used to deploy and maintain groups of containers in a virtualized environment. In practice, Kubernetes is most commonly used alongside Docker for better control and implementation of containerized applications. A containerized application “bundles” an application together with all of the files, libraries, and packages it requires to run reliably and efficiently on different platforms.

Google initially developed Kubernetes, first as an internal project and then as a successor to Google Borg, and released it in 2014 to make it easier to run applications on the cloud. The Cloud Native Computing Foundation currently maintains Kubernetes.

Features of Kubernetes

  • Automates various manual processes, such as controlling how and where containers are hosted and launched on servers
  • Manages containers and offers security, networking, and storage services
  • Monitors and continuously checks the health of nodes and containers
  • Automatically rolls back changes that go wrong
  • Mounts and adds storage systems to run apps

Kubernetes vs. Docker Swarm

Kubernetes is a container management system: an open-source, portable platform that automates the deployment and management of containers, eliminating many of the manual processes otherwise required to run applications on the cloud. Docker Swarm, Docker’s native clustering and orchestration tool, addresses the same problem with a different set of trade-offs.

The following are a few differences between Kubernetes and Docker Swarm:

Kubernetes | Docker Swarm
---------- | ------------
Developed by Google | Developed by Docker, Inc.
Has a vast open-source community | Has a smaller community
More extensive and customizable | Less extensive and customizable
Requires a heavier setup | Easy to set up
High fault tolerance | Low fault tolerance
Provides strong guarantees to cluster states, at the expense of speed | Facilitates fast container deployment in large clusters
Manual load balancing | Automatic load balancing

Kubernetes Architecture

Before diving any deeper into the Getting Started with Kubernetes tutorial, let’s first look at the hardware and software components of the Kubernetes architecture.

Hardware Components

Nodes

A node is the smallest unit of hardware in Kubernetes. It is a representation of a single machine in the cluster. A node can be a physical machine in a data center or a virtual machine hosted by a cloud provider such as Google Cloud Platform.

Cluster

Kubernetes does not work with individual nodes; it works with the entire cluster. Nodes combine their resources to form a powerful machine known as a cluster. When a node is added or removed, the cluster shifts around the work as necessary.
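
As a quick, hedged illustration, once a cluster is up you can list its nodes and check the control plane with kubectl; the exact node names and addresses will depend entirely on your own cluster:

$ kubectl get nodes

$ kubectl cluster-info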

Persistent Volumes

To store data permanently, Kubernetes uses persistent volumes. Because containers and the local disks of individual nodes are ephemeral, persistent volumes provide storage that is attached to the cluster as a whole and is not tied to any single pod or node.
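
As a rough sketch, the PersistentVolumeClaim below requests 1 Gi of storage that a pod can later mount; the claim name and size are illustrative, and the example assumes the cluster has a default StorageClass able to satisfy the claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

It can be submitted with $ kubectl apply -f demo-pvc.yaml.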

Software Components

Containers

Containers are self-contained environments for executing programs. A program is bundled together with all of its dependencies into a single package, known as a container image, which can then be shared over a network and run anywhere. Multiple programs can be added to a single container, but the best practice is one process per container. Containers typically run on Linux.

Pods

A pod represents a group of one or more application containers bundled up together and is highly scalable. If a pod fails, Kubernetes automatically deploys new replicas of the pod to the cluster. Pods provide two different types of shared resources: networking and storage. Kubernetes manages the pods rather than the containers directly. Pods are the units of replication in Kubernetes.
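
To make this concrete, here is a minimal pod specification in YAML, the kind of file the kubelet consumes; the pod name, label, and nginx image are illustrative choices rather than required values:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80

Applying it with $ kubectl apply -f demo-pod.yaml asks the cluster to schedule and run the pod.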

Deployment

Pods cannot be launched on a cluster directly; instead, they are managed by one more layer of abstraction: the deployment. A deployment’s fundamental purpose is to declare how many replicas of a pod should be running simultaneously. The manual management of pods is eliminated when deployments are used.
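
For illustration, the deployment sketch below asks Kubernetes to keep three replicas of the pod defined in its template running at all times; the names, labels, and nginx image are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - containerPort: 80

Scaling then becomes a single declarative change, for example $ kubectl scale deployment demo-deployment --replicas=5, and Kubernetes converges the cluster to the declared state.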

Ingress

Ingress allows access to Kubernetes services from outside the cluster. You can add an Ingress to the cluster through either an Ingress controller or a load balancer. It can provide load balancing, SSL termination, and name-based virtual hosting.
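
As a hedged sketch, the Ingress below routes HTTP traffic for an assumed hostname to the nginx-http service created in the demo later in this tutorial; it presumes an Ingress controller is already installed in the cluster, and the hostname is a placeholder:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-http
                port:
                  number: 80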

Now that you know about the hardware and software components, let’s go ahead and dive deeper into the Kubernetes architecture itself.

Master

The master node is the most vital component of the Kubernetes architecture.

It is the central controlling unit of Kubernetes and manages workloads and communications across the cluster.

The master node has various components, each with its own process. They are:

  • ETCD
  • Controller Manager 
  • Scheduler
  • API Server

1. ETCD

  • ETCD is a distributed key-value store that holds the cluster’s configuration details and essential values
  • All other components consult this shared state, through the API server, to receive the commands and work they need to perform an action
  • Changes recorded here, such as network rules and port-forwarding settings, are picked up and acted upon by the rest of the cluster

2. Controller Manager

  • The controller manager is responsible for running most of the cluster’s controllers, each of which performs a specific task
  • It is a daemon that runs in a continuous loop and is responsible for collecting and sending information to the API server
  • The key controllers handle nodes and endpoints

3. Scheduler

  • The scheduler is one of the key components of the master node and is responsible for distributing the workload
  • It tracks workload utilization and allocates newly created pods to suitable nodes
  • The scheduler must be aware of the total resources available, as well as the resources already allocated to existing workloads on each node, as the example command below illustrates
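
You can see the information the scheduler works from by describing a node; the output includes the node’s capacity, allocatable resources, and the resources already requested by the pods running on it (the node name below is a placeholder):

$ kubectl describe node <node-name>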

4. API Server

  • Kubernetes uses the API server to perform all operations on the cluster
  • It is a central management entity that receives all REST requests for modifications, serving as a frontend to the cluster
  • It exposes a RESTful interface, which means different tools and libraries can readily communicate with the cluster, as the example below shows
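
Because everything goes through this REST interface, you can talk to the API server directly. One simple way to try it, assuming kubectl is already configured for the cluster, is kubectl proxy, which opens an authenticated local tunnel; the port and resource path below are just examples (run the curl command in a second terminal):

$ kubectl proxy --port=8001

$ curl http://127.0.0.1:8001/api/v1/namespaces/default/pods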

5. Kubectl

  • Kubectl is the command-line tool that controls the Kubernetes cluster manager

Syntax - kubectl [command] [TYPE] [NAME] [flags]
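
A few everyday examples of this syntax are shown below; the resource names are placeholders rather than required values:

$ kubectl get pods

$ kubectl describe pod <pod-name>

$ kubectl apply -f demo-deployment.yaml

$ kubectl delete deployment demo-deployment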

Slave

The slave (worker) node contains the following components:

1. Pod

  • A pod is one or more containers controlled as a single application
  • It encapsulates application containers, storage resources, a unique network ID, and other configurations on how to run the containers

2. Docker

  • One of the basic requirements of nodes is Docker
  • It helps run the applications in an isolated but lightweight operating environment, and it also runs the configured pods
  • It is responsible for pulling down and running containers from Docker images

3. Kubelet

  • Kubelet is responsible for managing pods and their containers 
  • It deals with pod specifications, which are defined in YAML or JSON format
  • It takes the pod specifications and checks whether the pods are running properly or not

4. Kubernetes Proxy

  • It is a proxy service that runs on each node and helps make services available to external hosts
  • Every node in the cluster runs this simple network proxy, and kube-proxy routes requests to the correct container on a node
  • It performs primitive load balancing across the pods that back a service; node-level work such as managing pods, volumes, secrets, new containers, and health checks is handled alongside it by the kubelet

Companies Using Kubernetes

Companies that rely on Kubernetes in production include Google, Airbnb, Spotify, and Pinterest, among many others.

Kubernetes Use Case

Next in the Getting Started with Kubernetes tutorial, we’ll look at The New York Times as a practical use case of Kubernetes.

  • When the publisher moved out of its data centers, its deployments were smaller, and the applications were managed on VMs.
  • The team kept building more tools, but at one point realized it was doing itself a disservice by treating Amazon as just another data center.
  • The development team stepped in and came up with an excellent idea: it proposed moving to Google Cloud Platform and its Kubernetes-as-a-service offering.
  • Using Kubernetes had the following advantages:
      • Faster performance and delivery
      • Deployment time reduced from minutes to seconds
      • Updates deployed independently and only when required
      • A more unified approach to deployment across the engineering staff, and better portability

To conclude, the New York Times has gone from a ticket-based system for requesting resources and scheduling deployments to an automatic system using Kubernetes.

Kubernetes Demo

Next up in the Getting Started with Kubernetes tutorial, let’s quickly dive into a demo.

Steps:

1. Open the terminal on Ubuntu.

2. Install the necessary dependencies by using the following command:

$ sudo apt-get update

$ sudo apt-get install -y apt-transport-https

3. Install Docker Dependency by using the following command:

$ sudo apt install docker.io

Start and enable Docker with the following commands:

$ sudo systemctl start docker

$ sudo systemctl enable docker

4. Install the necessary components for Kubernetes.

First, install the curl command:

$ sudo apt-get install curl

Then download and add the key for the Kubernetes install:

$ sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Change permission by using the following command:

$ sudo chmod 777 /etc/apt/sources.list.d/

Then, add a repository by creating the file /etc/apt/sources.list.d/kubernetes.list and enter the following content:

deb http://apt.kubernetes.io/ kubernetes-xenial main 

Save and close that file.

Install Kubernetes with the following commands:

$ sudo apt-get update

$ sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni

5. Before initializing the master node, we need to turn swap off by using the following command:

$ sudo swapoff -a

6. Initialize the master node using the following command:

$ sudo kubeadm init

You get three commands as output; copy and paste each one and press “Enter”:

$ mkdir -p $HOME/.kube

$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

7. Deploy pods using the following command:

$ sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

$ sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

8. To see all pods deployed, use the following command:

$ sudo kubectl get pods --all-namespaces

9. To deploy an NGINX service (and expose the service on port 80), run the following commands:

$ sudo kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"

$ sudo kubectl expose deployment nginx-app --port=80 --name=nginx-http

10. To see the services listed, use the following command:

$ sudo kubectl get services

You can also run $ sudo docker ps -a to see the containers that are running on the node.

Conclusion

Kubernetes is the most widely used container management system in the world, and there are plenty of career opportunities surrounding the technology. If this Getting Started with Kubernetes tutorial intrigues you and you think you are ready to start or jumpstart an IT career in the exciting field of cloud computing, our DevOps Engineer Master’s Program or the Post Graduate Program in DevOps, offered in collaboration with world-renowned Caltech, might be a new avenue for you to consider.

About the Author

Karin Kelley

Karin has spent more than a decade writing about emerging enterprise and cloud technologies. A passionate and lifelong researcher, learner, and writer, Karin is also a big fan of the outdoors, music, literature, and environmental and social sustainability.
