K8s Cluster Architecture

Kubernetes clusters are built around three core component types that work together to orchestrate your workloads. The Control Plane acts as the cluster’s “brain,” making global decisions and exposing the API; Worker Nodes are the machines that actually run your containers; and Pods group those containers into deployable units.

  1. Components
    • Control Plane (formerly “master”): The brain of the cluster
    • Worker Nodes: Machines that run your actual applications
    • Pods: Groups of containers that are deployed together
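To make the Pod-as-deployable-unit idea concrete, here is a minimal sketch of a Pod manifest grouping an app container with a log-tailing sidecar that shares a volume (all names and image tags are placeholders, not from the source):

```yaml
# Hypothetical Pod: two containers deployed together, sharing a log volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar        # placeholder name
spec:
  volumes:
    - name: logs
      emptyDir: {}              # scratch volume shared by both containers
  containers:
    - name: web
      image: nginx:1.27         # example image
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-shipper
      image: busybox:1.36       # example sidecar image
      command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
```

Both containers are scheduled onto the same node and share the Pod's network namespace and volumes — that co-location is what makes a Pod the unit of deployment.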

Every cluster needs at least a minimal footprint: one Worker Node plus a Control Plane (which can be co-located with the worker or run on separate machines). For a production-grade, highly available cluster, you’ll want multiple nodes of each type.

  1. Minimum Requirements
    • At least one worker node
    • Control plane (can run on single or multiple machines)
    • Production environments typically use multiple nodes for redundancy
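One way to express that minimum footprint is a kubeadm cluster configuration. This is a hedged sketch assuming the kubeadm `v1beta3` config API; the version, endpoint, and Pod CIDR are placeholders:

```yaml
# Hypothetical kubeadm ClusterConfiguration for a small cluster.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.0                         # placeholder version
controlPlaneEndpoint: "cp.example.internal:6443"   # stable endpoint; lets you add control-plane nodes later for HA
networking:
  podSubnet: "10.244.0.0/16"                       # example Pod CIDR for your network plugin
```

Setting a stable `controlPlaneEndpoint` up front is what keeps the door open to growing from a single control-plane machine to a redundant one without re-bootstrapping.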

The Control Plane itself comprises five key services. The kube-apiserver is your cluster’s “front door,” while etcd stores all cluster state in a consistent, highly available key-value store. The kube-scheduler decides where Pods go, the kube-controller-manager runs a suite of controllers (node health, Jobs, ServiceAccounts, etc.), and the optional cloud-controller-manager bridges Kubernetes with cloud-provider APIs.

Control Plane Components

  1. kube-apiserver

    • The “front door” of Kubernetes
    • Exposes the Kubernetes API for all cluster communications
    • Key Term: Horizontal Scaling – can run multiple instances
  2. etcd

    • The cluster’s “database”
    • Definition: A consistent, highly-available key-value store
    • Stores all cluster data
    • Best Practice: Regular backups are crucial
  3. kube-scheduler

    • The cluster’s “placement manager”
    • Watches for new Pods without assigned nodes
    • Considers resource needs, constraints, data locality, affinity/anti-affinity
  4. kube-controller-manager

    • The cluster’s “oversight system”
    • Runs multiple controllers in one process:
      • Node Controller
      • Job Controller
      • EndpointSlice Controller
      • ServiceAccount Controller
  5. cloud-controller-manager

    • The “cloud integration layer”
    • Definition: Connects to cloud provider APIs
    • Manages load balancers, node lifecycle, routing in cloud environments
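To make the scheduler’s inputs concrete, here is a sketch of a Pod that declares resource requests and a node-affinity constraint — the kinds of signals the kube-scheduler weighs when picking a node. The image, CPU/memory figures, and zone label value are hypothetical:

```yaml
# Hypothetical Pod: resource requests plus a hard node-affinity rule.
apiVersion: v1
kind: Pod
metadata:
  name: placement-demo
spec:
  containers:
    - name: app
      image: nginx:1.27          # example image
      resources:
        requests:
          cpu: "500m"            # scheduler filters out nodes lacking this CPU
          memory: "256Mi"        # ...and this memory
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values: ["zone-a"]   # hypothetical zone label value
```

`requiredDuringScheduling` rules are hard filters; `preferredDuringScheduling` variants instead score candidate nodes, which is how data-locality preferences are usually expressed.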

Every node runs a set of local components that maintain Pods and networking. The kubelet ensures the containers described in PodSpecs are running and healthy; kube-proxy (optional) programs networking rules that implement Services; and a container runtime (containerd, CRI-O, etc.) actually launches the containers.

Node Components

  1. kubelet

    • Ensures containers described by PodSpecs are running and healthy
  2. kube-proxy (optional)

    • Implements Service networking via iptables or IPVS rules
    • Can be skipped if your CNI plugin offers equivalent proxying
  3. Container runtime

    • Manages the execution and lifecycle of containers
    • Examples: containerd, CRI-O, or any CRI implementation
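To illustrate what “running and healthy” means to the kubelet, here is a sketch of a Pod with a liveness probe; if the probe fails repeatedly, the kubelet restarts the container. The image, path, and timings are placeholders:

```yaml
# Hypothetical Pod: the kubelet polls the probe and restarts on failure.
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
    - name: app
      image: nginx:1.27          # example image
      livenessProbe:
        httpGet:
          path: /                # hypothetical health endpoint
          port: 80
        initialDelaySeconds: 5   # grace period before the first check
        periodSeconds: 10        # check interval
```

Note this loop is entirely node-local: the kubelet, not the control plane, performs the probes and restarts, which is why nodes keep healing Pods even if the API server is briefly unreachable.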

Addons extend cluster functionality via Kubernetes resources in the kube-system namespace. Cluster DNS ensures service names resolve inside your Pods; the Dashboard UI offers a web console; monitoring, logging, and network plugins fill out observability and connectivity needs.

Addons

  1. DNS

    • Cluster DNS server (typically CoreDNS) for in-cluster service name resolution
  2. Web UI (Dashboard)

    • Web-based management and troubleshooting interface
  3. Container resource monitoring

    • Time-series metrics and UI for container metrics
  4. Cluster-level Logging

    • Central log store with search and browsing
  5. Network plugins

    • CNI implementations for Pod networking and IP allocation
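As an example of the DNS addon at work, a Service like the sketch below becomes resolvable inside Pods as `my-svc.default.svc.cluster.local` (the name, namespace, selector, and ports are placeholders):

```yaml
# Hypothetical Service; cluster DNS maps its name to a stable virtual IP.
apiVersion: v1
kind: Service
metadata:
  name: my-svc            # resolves in-cluster as my-svc.default.svc.cluster.local
  namespace: default
spec:
  selector:
    app: web              # hypothetical Pod label to route to
  ports:
    - port: 80            # Service port
      targetPort: 8080    # container port on the selected Pods
```

Pods in the same namespace can reach it simply as `my-svc`; the fully qualified name is only needed across namespaces.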

Kubernetes architecture can vary—from traditional systemd services on VMs, to Static Pods managed by kubelet, to self-hosted control planes running as Pods, to fully managed cloud offerings. Workloads may share nodes with control plane components in dev clusters or live on dedicated hardware in production; tools like kubeadm, kops, and Kubespray each have their own deployment flavor.

Architecture Variations

  • Traditional deployment: Control plane as systemd services on VMs
  • Static Pods: Control plane components as kubelet-managed Pods
  • Self-hosted: Control plane runs within the cluster as Deployments/StatefulSets
  • Managed services: Cloud provider handles control plane for you
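To illustrate the Static Pods variation above: a static Pod is a plain manifest the kubelet picks up from its manifest directory (commonly /etc/kubernetes/manifests on kubeadm-built clusters) and runs directly, bypassing the scheduler. A minimal hypothetical sketch:

```yaml
# Hypothetical static Pod; saved on a node as, e.g.,
# /etc/kubernetes/manifests/static-demo.yaml — the local kubelet
# starts it without any scheduler involvement.
apiVersion: v1
kind: Pod
metadata:
  name: static-demo
spec:
  containers:
    - name: app
      image: nginx:1.27   # placeholder image
```

This is exactly how kubeadm bootstraps the control plane itself: kube-apiserver, etcd, the scheduler, and the controller manager each run as a static Pod before the cluster is fully up.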