
Kubernetes Architecture (Simplified)

  • Writer: sumitnagle
  • Jun 21
  • 6 min read

Updated: Jun 22

Before we discuss the ✨architecture✨ of Kubernetes, we need to cover a few basics first.


First, what is a container? A container is a lightweight, standalone, executable unit which encapsulates everything an application needs (application code, runtime, libraries, and system tools), making it environment-independent. This means we can build once and run anywhere, i.e. the application runs consistently across different environments, whether it's our local machine, a testing server, or a production environment, eliminating the "it works on my machine"😅 problem.

Now we have a container where we will run our application, but the question comes: how?? For that, you must have heard of Docker. Docker provides us a container runtime (https://www.upwind.io/glossary/container-runtimes-explained), which you can think of as an OS-level process which spawns these containers.


But now the question comes: what about the management of my containers? How will I ensure my containers are up and running? For this purpose, we have Kubernetes.


Kubernetes is a container orchestration platform for automating the deployment, scaling, and management of containerised applications. It manages the deployment of containers across multiple machines (clusters of hosts), auto-scaling, self-healing (restarting crashed containers), load balancing, and rolling updates, as well as networking, service discovery, and configuration management.

Ufff, a lot of things, but for now think of it like this: in a production environment, we need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container should take its place. Kubernetes provides us with a framework to run distributed systems resiliently. It takes care of scaling and failover for our application, provides deployment patterns, and more.

Orchestration simply means the automated management of something (the something here being our containerised applications); this includes deployment, scaling, networking, and lifecycle management, across a cluster of machines.

Now let's jump to the actual thing, i.e. the ✨Architecture✨.

Yeahhhh Buddy!

At the core of Kubernetes lies the concept of a cluster, which is a group of machines (physical or virtual), called nodes, that work together to run and manage our containerised applications.


This cluster is broadly made up of two main categories of nodes,

  • One or more worker nodes (data plane), which run the actual application workloads.

    Each worker node hosts one or more pods, the smallest deployable units in Kubernetes, which encapsulate our containerised applications.

  • A master node (control plane), which manages the cluster and these worker nodes.


Kubernetes follows a master-worker architecture: there is a control plane (master), which acts as a central management interface, and multiple worker nodes (which communicate with the control plane), where the applications are actually running.

For simplicity, think of it like this: we have multiple machines working together to provide the Kubernetes runtime; some are workers which run the applications, and some (usually one) act like a manager, which manages all these machines.


Now the question comes, how! For that, these control plane and worker nodes have some components running within them, which are responsible for providing the Kubernetes runtime. In fact, when we set up a Kubernetes cluster, we essentially set up all the components (typically system processes or containerised processes) required to facilitate orchestration, both on the control plane and on the worker nodes. So let's discuss what these components are!

Yeah! I know, you wanna rush, so let's jump on it.

Control Plane

The control plane consists of multiple services which manage the overall state of the cluster, orchestrating containerised applications to ensure they run efficiently and reliably.

Control plane components can run on any node in the cluster. However, for simplicity, setup scripts typically start all control plane components on the same node and do not run application containers on that machine; those run on the other worker nodes instead.
  • kube-apiserver is the core component that exposes the Kubernetes API (it acts as the entry point for all commands and REST requests, similar to the docker client when comparing with Docker), manages API requests, authenticates users, and authorises actions.


    The API server is the front face of the control plane. It acts as the communication hub, linking the control plane with worker nodes. So all the interaction happening between worker nodes and the control plane is done by communicating with the kube-apiserver.

As we can see, a lot is going on with the kube-apiserver, and for that reason, the kube-apiserver is designed to scale horizontally, i.e., it scales by deploying more instances. We can run several instances of kube-apiserver and balance traffic between those instances.
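
To make this concrete, here's a minimal sketch of talking to the kube-apiserver programmatically, the same way kubectl does, using Go and the official client-go library. It assumes a kubeconfig at the default ~/.kube/config path, and the "default" namespace is just for illustration:

    package main

    import (
        "context"
        "fmt"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        // Every interaction with the cluster (kubectl, dashboards, controllers)
        // ultimately goes through the kube-apiserver. Build a client from the
        // default kubeconfig, then list pods in the "default" namespace.
        kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // Under the hood this is a REST GET to /api/v1/namespaces/default/pods.
        pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Println(p.Name, p.Status.Phase)
        }
    }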

  • etcd: for the management of the whole system, we need to store a whole lot of configuration, state and metadata, and for that Kubernetes uses etcd, a consistent and highly-available key-value store. It stores all configuration data, cluster state, metadata, and information about Kubernetes objects securely.
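
Just to illustrate where the cluster state lives, here's a sketch using the etcd v3 Go client to list the keys Kubernetes keeps under its /registry/ prefix. Note that in a real cluster only the kube-apiserver talks to etcd, and the endpoint below is an assumption:

    package main

    import (
        "context"
        "fmt"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        // Normally only the kube-apiserver talks to etcd; this direct read is
        // purely to show where cluster state lives. The endpoint is assumed.
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"127.0.0.1:2379"},
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        // Kubernetes stores its objects under the /registry/ prefix,
        // e.g. /registry/pods/<namespace>/<pod-name>. Keys only, since the
        // values are stored in a binary (protobuf) encoding.
        resp, err := cli.Get(context.TODO(), "/registry/pods/",
            clientv3.WithPrefix(), clientv3.WithKeysOnly())
        if err != nil {
            panic(err)
        }
        for _, kv := range resp.Kvs {
            fmt.Println(string(kv.Key))
        }
    }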



  • kube-scheduler is responsible for scheduling pods onto the appropriate worker nodes; it evaluates resource requirements, resource availability, and affinity rules.

    Let me make it clear: at the end, our goal is running our container, and somebody has to decide which worker node each container (pod) should land on. That decision is the job of the kube-scheduler!! (Actually starting and stopping the containers happens on the worker node itself, via the kubelet and the container runtime, which we will meet shortly.)
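
Here's a toy filter-and-score loop in Go that captures the idea. All the types and the scoring rule are invented for illustration; the real kube-scheduler is far more sophisticated:

    package main

    import "fmt"

    // Node and Pod are simplified stand-ins for the real Kubernetes objects.
    type Node struct {
        Name    string
        FreeCPU int // millicores available
        FreeMem int // MiB available
    }

    type Pod struct {
        Name   string
        ReqCPU int
        ReqMem int
    }

    // schedule mimics the scheduler's two phases: filter out nodes that
    // can't fit the pod, then score the remaining nodes and pick the best.
    func schedule(pod Pod, nodes []Node) (string, bool) {
        best, bestScore := "", -1
        for _, n := range nodes {
            if n.FreeCPU < pod.ReqCPU || n.FreeMem < pod.ReqMem { // filter
                continue
            }
            score := n.FreeCPU + n.FreeMem // naive "most free resources" score
            if score > bestScore {
                best, bestScore = n.Name, score
            }
        }
        return best, bestScore >= 0
    }

    func main() {
        nodes := []Node{{"worker-1", 500, 1024}, {"worker-2", 2000, 4096}}
        if node, ok := schedule(Pod{"web", 1000, 512}, nodes); ok {
            fmt.Println("bind pod to", node) // worker-2: worker-1 fails the CPU filter
        }
    }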



  • kube-controller-manager maintains the desired state (like ensuring a certain number of pods is running). It involves multiple controllers; logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.

    There are many different types of controllers, some of them are,

    • node controller, responsible for noticing and responding when nodes go down.

    • job controller, watches for Job objects that represent one-off tasks, then creates pods to run those tasks to completion.

    • ServiceAccount controller, which creates default ServiceAccounts for new namespaces.

    In cloud environments, the cloud-controller-manager integrates cloud-specific APIs with the cluster i.e. embeds cloud-specific control logic. It manages resources such as load balancers, instances, and storage components provided by cloud service providers, enabling seamless interaction between Kubernetes and the cloud environment.
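
The common shape behind all these controllers is a reconcile loop: observe the actual state, compare it with the desired state, and act to close the gap. Here's a deliberately toy sketch of that loop in Go (real controllers watch the kube-apiserver for events rather than polling a counter):

    package main

    import (
        "fmt"
        "time"
    )

    // A controller endlessly reconciles actual state toward desired state.
    // Everything below is a toy model; real controllers talk to the API server.
    func main() {
        desired := 3 // e.g. a ReplicaSet asking for 3 pods
        actual := 1

        for i := 0; i < 5; i++ {
            switch {
            case actual < desired:
                actual++ // create a missing pod
                fmt.Println("created pod, running:", actual)
            case actual > desired:
                actual-- // delete an extra pod
                fmt.Println("deleted pod, running:", actual)
            default:
                fmt.Println("in sync, nothing to do")
            }
            time.Sleep(100 * time.Millisecond) // real loops are event-driven, not polling
        }
    }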


Worker Nodes

While the control plane components manage the cluster state and administration, the worker nodes are where the actual workloads run. For that to happen, Kubernetes deploys some components on each worker node. These components maintain our containers (inside pods) and provide the Kubernetes runtime environment.


  • kubelet is an agent that communicates with the kube-apiserver to ensure containers are working as expected, managing the lifecycle of the containers on its node.
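
As a rough mental model (not the real kubelet code), the kubelet's sync loop looks something like this in Go: fetch the pods assigned to this node from the kube-apiserver, compare with what the container runtime is actually running, and fix the difference:

    package main

    import "fmt"

    // Toy model of the kubelet's sync loop: compare pods assigned to this
    // node (reported by the kube-apiserver) with containers actually running
    // (reported by the container runtime), and close the gap.
    func main() {
        assigned := []string{"web-1", "web-2", "cache-0"} // from the API server
        running := map[string]bool{"web-1": true}         // from the runtime

        for _, pod := range assigned {
            if !running[pod] {
                fmt.Println("starting container(s) for pod", pod) // via the container runtime
                running[pod] = true
            }
        }
    }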


  • kube-proxy implements network rules to route traffic to the appropriate pods. It enables service discovery and load balancing within the cluster by managing Kubernetes Service configurations and ensuring efficient network communication.


    You don't need to know what a Service is (as of now!); for now, think of it like this: kube-proxy handles network traffic to/from the pods by managing networking rules.
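
As a toy picture of what kube-proxy's rules achieve, one stable Service address fanning out across pod backends, here's a sketch in Go with invented IPs and a naive round-robin:

    package main

    import "fmt"

    // A toy model of kube-proxy's effect: one stable Service address fanning
    // out to whichever pod IPs currently back it. All addresses are invented.
    func main() {
        serviceBackends := map[string][]string{
            "10.96.0.10:80": {"10.244.1.5:8080", "10.244.2.7:8080", "10.244.3.2:8080"},
        }

        next := 0
        pick := func(service string) string {
            backends := serviceBackends[service]
            b := backends[next%len(backends)] // simple round-robin
            next++
            return b
        }

        for i := 0; i < 4; i++ {
            fmt.Println("10.96.0.10:80 ->", pick("10.96.0.10:80"))
        }
    }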


  • container runtime is an essential component responsible for running the containers. It pulls container images from registries, starts and stops containers, and oversees their lifecycle. The container runtime ensures that application workloads run reliably.

    Kubernetes supports container runtimes such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface).

The container runtime is the one running containers, but their management is done by the kubelet via the container runtime. That is, within a worker node, Kubernetes makes use of a container runtime (such as containerd) to actually run our containerised application, and it does so via the kubelet, which in turn interacts with the container runtime.

A container runtime is the low-level software responsible for running containers. It's the actual component that pulls the images, sets up container file systems, isolates processes, and executes containers. When we use containerd (a high-level container runtime), it prepares the image, config, rootfs and so on, then it internally calls (invokes as a child process) runc, a CLI tool that takes a runtime spec (config.json) and a root filesystem. runc uses Linux syscalls like clone(), setns(), pivot_root(), and so on, to start the container as an isolated process.
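
To see what "isolated process" means at the syscall level, here's the classic containers-from-scratch sketch in Go. It is Linux-only, needs root, and is a bare illustration of the clone-with-new-namespaces idea that runc builds on, not what runc actually ships:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "syscall"
    )

    // Linux-only, run as root: go run main.go run
    // A bare-bones illustration of the syscall-level trick runc builds on:
    // re-exec ourselves into fresh namespaces and drop into a shell there.
    func main() {
        if len(os.Args) < 2 {
            fmt.Println("usage: go run main.go run")
            return
        }
        switch os.Args[1] {
        case "run":
            cmd := exec.Command("/proc/self/exe", "child")
            cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
            // New UTS (hostname), PID, and mount namespaces for the child.
            cmd.SysProcAttr = &syscall.SysProcAttr{
                Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
            }
            must(cmd.Run())
        case "child":
            // Inside the new namespaces: our own hostname, and we are PID 1.
            must(syscall.Sethostname([]byte("container")))
            fmt.Println("pid inside namespace:", os.Getpid()) // prints 1
            sh := exec.Command("/bin/sh")
            sh.Stdin, sh.Stdout, sh.Stderr = os.Stdin, os.Stdout, os.Stderr
            must(sh.Run())
        }
    }

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }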

Finally, we have seen a lot of stuff going on, and here's the simple part, i.e. the actual visualisation of how these things interact,

Kubernetes Architecture

Tired! You should be, and I hope it was worth the wait! If not, jump back in, and at the end, give me a like if you are happy, happy, happy!!

