
Kubernetes is often described as the “OS of the cloud,” but that abstraction can sometimes hide the complexity of what is actually happening under the hood. To truly understand how Kubernetes orchestrates workloads, it helps to step away from the code for a moment and look at the logistics.

If your application is the cargo, Kubernetes is the global shipping fleet ensuring it gets to the right destination, stays afloat, and scales to meet demand.

In this post, we are going to dive deep into the Kubernetes architecture, using the “Port Authority” model to explain how the Control Plane and Worker Nodes collaborate to maintain the desired state of your infrastructure.

The Harbor

The Port Authority: The Control Plane (The Brain of the Operation)

The Control Plane is the decision-making hub. In a shipping analogy, this is the Port Authority. No cargo moves, no ships dock, and no engines start without an order from here.

The Control Tower: API Server (kube-apiserver)

Everything starts here. The API Server is the front door to the Kubernetes control plane.

Just like a Control Tower, it handles all communication. Whether the command comes from a user (via kubectl), an external automation tool, or a worker node reporting status, it all goes through the API Server.

It is the only component that communicates directly with the database (etcd). It handles authentication, authorization, and admission control (Mutating/Validating Webhooks). It is designed to scale horizontally.
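Authorization at the API Server is commonly expressed through RBAC objects, which are themselves just resources submitted through the same front door. A minimal sketch (the role name and namespace are illustrative):

```yaml
# Grants read-only access to Pods in the "shipping" namespace.
# The API Server evaluates rules like these on every request.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader        # illustrative name
  namespace: shipping     # illustrative namespace
rules:
  - apiGroups: [""]       # "" = the core API group (Pods, Services, ...)
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
```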

The Official Ledger: etcd

Imagine a master manifest that records the exact location and contents of every single shipping container in the entire world. That is etcd.

This is a consistent and highly-available key-value store used as Kubernetes’ backing store for all cluster data. It records the “state” of the cluster. If it isn’t in etcd, it doesn’t exist.

The Harbor Master: The Scheduler (kube-scheduler)

A new shipment arrives. Which ship has deck space? Which ship has the fuel? The Harbor Master checks the fleet’s inventory and assigns the cargo to the best available vessel.

The Scheduler watches for newly created Pods that have no Node assigned. It selects a Node based on resource availability (CPU/RAM), constraints (Affinity/Anti-affinity rules), and “Taints and Tolerations” (ensuring sensitive cargo doesn’t go on insecure ships).
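These constraints live in the Pod spec itself. A minimal sketch, assuming a node has been tainted with `cargo=fragile:NoSchedule` and labeled `disktype=ssd` (both the taint and the label are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fragile-cargo
spec:
  nodeSelector:
    disktype: ssd            # only schedule on nodes labeled disktype=ssd
  tolerations:
    - key: "cargo"           # tolerate the cargo=fragile:NoSchedule taint
      operator: "Equal"
      value: "fragile"
      effect: "NoSchedule"
  containers:
    - name: app
      image: nginx:1.25
```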

The Fleet Manager: Controller Manager (kube-controller-manager)

The Fleet Manager ensures the fleet matches the plan. If a ship sinks, the Manager orders a new one. If a container falls overboard, the Manager orders a replacement.

This is a daemon that embeds the core control loops (Node Controller, Replication Controller, Endpoints Controller). It constantly compares the Current State (what is happening now) with the Desired State (what is in the YAML) and works to reconcile them.
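You can watch this reconciliation in a Deployment's own fields: `spec` holds the Desired State you wrote, while `status` reflects the Current State the controllers have observed. A trimmed sketch of what `kubectl get deployment -o yaml` might return (names and numbers are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cargo-api
spec:
  replicas: 3            # Desired State: what you asked for
  selector:
    matchLabels:
      app: cargo-api
  template:
    metadata:
      labels:
        app: cargo-api
    spec:
      containers:
        - name: app
          image: nginx:1.25
status:                  # Current State: what the controllers observe
  replicas: 3
  readyReplicas: 2       # one replacement ship is still being launched
```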

The Fleet: The Worker Nodes (The Muscle of the Operation)

If the Control Plane is the management, the Nodes are the actual cargo ships doing the heavy lifting. This is where your application lives.

The Cargo Ship: The Node

A Node is a worker machine (virtual or physical). It provides the runtime environment for the containers.

The First Mate: kubelet

The Captain can’t manage every crate. The First Mate is the agent on the deck. They receive orders from the Port Authority and ensure the crew (containers) are working.

The kubelet is an agent that runs on each node. It takes a set of PodSpecs (primarily from the API Server) and ensures that the containers described in those PodSpecs are running and healthy. It communicates with the Container Runtime Interface (CRI) to manage the container lifecycle.
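The kubelet's health checks are declared in the PodSpec as probes. A minimal sketch, assuming the container serves an HTTP health endpoint at `/healthz` (the path is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
    - name: app
      image: nginx:1.25
      livenessProbe:            # kubelet restarts the container if this fails
        httpGet:
          path: /healthz        # illustrative health endpoint
          port: 80
        initialDelaySeconds: 5  # wait before the first check
        periodSeconds: 10       # then check every 10 seconds
```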

The Navigator: kube-proxy

Ships need to communicate. The Navigator manages the radio frequencies and maps, ensuring that when one container asks for “Database Service,” the message is routed to the correct ship.

kube-proxy maintains network rules on each node. These rules allow network communication to your Pods from network sessions inside or outside of your cluster. It essentially manages the iptables or IPVS rules to handle Service discovery and Load Balancing.
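The rules kube-proxy programs are driven by Service objects. A minimal sketch of a ClusterIP Service routing traffic to Pods labeled `app: database` (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: database-service   # other Pods reach it by this DNS name
spec:
  type: ClusterIP
  selector:
    app: database          # traffic goes to Pods carrying this label
  ports:
    - port: 5432           # port the Service exposes
      targetPort: 5432     # port the container listens on
```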

The Shipping Container: The Pod

The Pod is the smallest unit. It holds your actual cargo (the application container). Just like a shipping container might hold a few related boxes, a Pod can hold one or more tightly coupled containers.

A Pod represents a single instance of a running process in your cluster. It encapsulates an application’s container (or multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run.
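A multi-container Pod is still one manifest; the containers share the Pod's network IP and can share volumes. A minimal sketch with an app container and a helper sidecar (images and names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cargo-pod
spec:
  containers:
    - name: app                 # the main cargo
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: log-shipper         # tightly coupled helper in the same Pod
      image: busybox:1.36
      command: ["sh", "-c", "tail -f /dev/null"]  # placeholder workload
```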

The Workflow: From Order to Delivery

  1. The Order: You submit a manifest (YAML) to the Control Tower (API Server).
  2. The Record: The API Server writes this desired state to the Ledger (etcd).
  3. The Plan: The Fleet Manager (Controller) notices a new Deployment is required and creates a “ReplicaSet.”
  4. The Assignment: The Harbor Master (Scheduler) sees pending Pods and assigns them to a specific Cargo Ship (Node) based on resources.
  5. The Dispatch: The First Mate (kubelet) on that Node sees the assignment, pulls the image, and starts the container.
  6. The Route: The Navigator (kube-proxy) updates the networking rules so traffic can find the new application.
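The whole journey starts from a single manifest. A minimal Deployment sketch, with comments mapping each field to the steps above (names are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment               # step 1: submitted to the API Server,
metadata:                      # step 2: recorded in etcd
  name: cargo-api
spec:
  replicas: 2                  # step 3: the controller creates a ReplicaSet
  selector:
    matchLabels:
      app: cargo-api
  template:                    # steps 4-5: each Pod is scheduled to a Node,
    metadata:                  # where the kubelet pulls the image and runs it
      labels:
        app: cargo-api         # step 6: kube-proxy routes Service traffic
    spec:                      # to Pods matching this label
      containers:
        - name: app
          image: nginx:1.25
```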

Conclusion

Kubernetes is complex, but it isn’t magic. It is a rigorous system of logic, state management, and reconciliation. By viewing it as a logistics fleet, we can better understand how to architect resilient systems that run like a well-managed navy.

Amgad Magdy