How Is Kubernetes Used in Industry, and What Use Cases Does It Solve?

Ritik Raj
5 min read · Mar 26, 2021

What is Kubernetes?

Kubernetes (also known as k8s or “Kube”) is an open-source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications.

In other words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters.

Kubernetes clusters can span hosts across on-premises, public, private, or hybrid clouds. For this reason, Kubernetes is an ideal platform for hosting cloud-native applications that require rapid scaling, like real-time data streaming through Apache Kafka.

Kubernetes was originally developed and designed by engineers at Google. Google was one of the early contributors to Linux container technology and has talked publicly about how everything at Google runs in containers. (This is the technology behind Google’s cloud services.)

What can you do with Kubernetes?

The primary advantage of using Kubernetes in your environment, especially if you are optimizing app dev for the cloud, is that it gives you the platform to schedule and run containers on clusters of physical or virtual machines (VMs). With Kubernetes you can:

  • Orchestrate containers across multiple hosts.
  • Make better use of hardware, maximizing the resources available to run your enterprise apps.
  • Control and automate application deployments and updates.
  • Mount and add storage to run stateful apps.
  • Scale containerized applications and their resources on the fly.
  • Declaratively manage services, which guarantees the deployed applications are always running the way you intended them to run.
  • Health-check and self-heal your apps with auto-placement, auto-restart, auto-replication, and autoscaling; several of these are illustrated in the sketch after this list.
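To make these capabilities concrete, here is a minimal sketch of a Deployment manifest, assuming a hypothetical web-app image and /healthz endpoint (neither comes from a real application). It declares three replicas, resource requests the scheduler uses for placement, a liveness probe for self-healing, and a rolling-update strategy:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                      # hypothetical application name
spec:
  replicas: 3                        # keep three identical Pods running
  strategy:
    type: RollingUpdate              # automated, incremental updates
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: example/web-app:1.0 # placeholder image
          resources:
            requests:                # used by the scheduler for placement
              cpu: "250m"
              memory: "256Mi"
            limits:                  # cap what the container may consume
              cpu: "500m"
              memory: "512Mi"
          livenessProbe:             # failed checks trigger an automatic restart
            httpGet:
              path: /healthz         # placeholder health endpoint
              port: 8080
```

Applying this single file gives you orchestration, controlled updates, and self-healing without any per-host scripting.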

Concepts of Kubernetes

Kubernetes defines a set of building blocks (“primitives”), which collectively provide mechanisms that deploy, maintain and scale applications based on CPU, memory or custom metrics. Kubernetes is loosely coupled and extensible to meet different workloads. This extensibility is provided in large part by the Kubernetes API, which is used by internal components as well as extensions and containers that run on Kubernetes. The platform exerts its control over compute and storage resources by defining resources as Objects, which can then be managed as such.
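For example, scaling on CPU is expressed through the same API as everything else. The sketch below, which assumes the hypothetical web-app Deployment from the earlier example, uses a HorizontalPodAutoscaler to grow or shrink the replica count based on observed CPU utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app
spec:
  scaleTargetRef:                  # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web-app                  # hypothetical Deployment name
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods when average CPU passes 70%
```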

Control plane: The collection of processes that control Kubernetes nodes. This is where all task assignments originate.

Nodes: These machines perform the requested tasks assigned by the control plane.

Pod: A group of one or more containers deployed to a single node. All containers in a pod share an IP address, IPC, hostname, and other resources. Pods abstract network and storage from the underlying container. This lets you move containers around the cluster more easily.
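A minimal Pod manifest looks like the sketch below; the names and image are placeholders, not taken from any specific workload:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web              # labels let controllers and Services find this Pod
spec:
  containers:
    - name: web
      image: nginx:1.25   # placeholder image; containers in a Pod share one IP
      ports:
        - containerPort: 80
```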

Replication controller: This controls how many identical copies of a pod should be running somewhere on the cluster.
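A sketch of a ReplicationController that keeps three copies of the Pod above running (note that in current Kubernetes, Deployments and ReplicaSets are the preferred way to do this; the names here are the same hypothetical placeholders):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 3                # always keep three identical Pods running
  selector:
    app: web                 # manage any Pod carrying this label
  template:                  # template used to create replacement Pods
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # placeholder image
```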

Service: This decouples work definitions from the pods. Kubernetes service proxies automatically get service requests to the right pod — no matter where it moves in the cluster or even if it’s been replaced.
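A Service selecting the hypothetical app: web Pods from the sketches above might look like this; clients address the stable Service name rather than any individual Pod:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web          # traffic goes to any Pod with this label, wherever it runs
  ports:
    - port: 80        # port the Service exposes inside the cluster
      targetPort: 80  # port the selected Pods listen on
```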

Kubelet: This service runs on nodes, reads the container manifests and ensures the defined containers are started and running.

kubectl: The command-line tool for configuring Kubernetes clusters and managing their resources (for example, kubectl apply -f deployment.yaml creates or updates the objects in a manifest like the ones above, and kubectl get pods lists running Pods).

What about Docker?

Docker can be used as a container runtime that Kubernetes orchestrates. When Kubernetes schedules a pod to a node, the kubelet on that node will instruct Docker to launch the specified containers.

The kubelet then continuously collects the status of those containers from Docker and aggregates that information in the control plane. Docker pulls container images onto that node and starts and stops the resulting containers.

The difference when using Kubernetes with Docker is that an automated system asks Docker to do those things instead of the admin doing so manually on all nodes for all containers.

CASE STUDY: CAPITAL ONE BANK

Challenge

The team set out to build a provisioning platform for Capital One applications deployed on AWS that use streaming, big-data decisioning, and machine learning. One of these applications handles millions of transactions a day; some deal with critical functions like fraud detection and credit decisions. The key considerations: resilience and speed — as well as full rehydration of the cluster from base AMIs.

Solution

The decision to run Kubernetes “is very strategic for us,” says John Swift, Senior Director of Software Engineering. “We use Kubernetes as a substrate or an operating system if you will. There’s a degree of affinity in our product development.”

Impact

“Kubernetes is a significant productivity multiplier,” says Lead Software Engineer Keith Gasser, adding that to run the platform without Kubernetes would “easily see our costs triple, quadruple what they are now for the amount of pure AWS expense.” Time to market has been improved as well: “Now, a team can come to us and we can have them up and running with a basic decision app in a fortnight, which before would have taken a whole quarter, if not longer.” Deployments increased by several orders of magnitude. Plus, the rehydration/cluster-rebuild process, which took a significant part of a day to do manually, now takes a couple of hours with Kubernetes automation and declarative configuration.

CASE STUDY: PEARSON

Challenge

A global education company serving 75 million learners, Pearson set a goal to more than double that number, to 200 million, by 2025. A key part of this growth is in digital learning experiences, and Pearson was having difficulty scaling and adapting to its growing online audience. It needed an infrastructure platform that could scale quickly and deliver products to market faster.

Solution

“To transform our infrastructure, we had to think beyond simply enabling automated provisioning,” says Chris Jackson, Director for Cloud Platforms & SRE at Pearson. “We realized we had to build a platform that would allow Pearson developers to build, manage and deploy applications in a completely different way.” The team chose Docker container technology and Kubernetes orchestration “because of its flexibility, ease of management and the way it would improve our engineers’ productivity.”

Impact

With the platform, there have been substantial improvements in productivity and speed of delivery. “In some cases, we’ve gone from nine months to provision physical assets in a data centre to just a few minutes to provision and get a new idea in front of a customer,” says John Shirley, Lead Site Reliability Engineer for the Cloud Platform Team. Jackson estimates they’ve achieved 15–20% developer productivity savings. Before, outages were an issue during their busiest time of year, the back-to-school period. Now, there’s high confidence in their ability to meet aggressive customer SLAs.
