Kubernetes Object Hierarchy: A Simple Mental Model

Kubernetes has a lot of objects. Don't think of them as a flat list. That's the wrong mental model. The right one is this: Kubernetes is a declarative control system. You describe the state you want, and a set of objects — each with a narrow job — cooperate to make reality match. Once you see how those objects fit together, the rest of Kubernetes is mostly details.

Note: this post covers Kubernetes objects — the resources you define in YAML. The control plane components that make these objects work (API server, scheduler, controller manager, etcd) are a separate layer and not shown here.

The Visual Hierarchy

The first thing to get right is that Kubernetes partitions a cluster two different ways — physically (Nodes) and logically (Namespaces). These are orthogonal, not nested. A Namespace spans many Nodes; a Node hosts Pods from many Namespaces. Pods are where the two views meet.

[              Cluster              ]
       /                       \
[   Nodes   ]             [ Namespaces ]
 (physical)                  (logical)
       \                       /
        \                     /
         ▼                   ▼
         ┌─────────────────┐
         │      Pods       │  ← scheduled onto a Node, scoped to a Namespace
         └─────────────────┘

Inside each Namespace:

  [ Ingress ] ──► [ Service ] ──► [ Workload Controller ]
                                   (Deployment / StatefulSet /
                                    DaemonSet / Job)
                                        │
                                        └── [ ReplicaSet ]
                                                 │
                                                 └── [ Pod ]
                                                      ├── Container(s)
                                                      └── Volume(s)

  [ ConfigMap / Secret ] ───────► injected into Pods
  [ PVC ] ──► [ PV ] ───────────► mounted into Pods as persistent storage
  

1. The Infrastructure Layer

Cluster. The outer boundary — one control plane plus a pool of worker machines. Every other object lives inside it.

Node. A single worker machine, physical or virtual, running the kubelet agent. Nodes provide the raw CPU, memory, and disk that Pods get scheduled onto; they don't care which Namespace a Pod belongs to.

2. The Logical Layer

Namespace. A virtual partition for grouping and isolating resources — think environments (dev, staging, prod) or teams. Namespaces give you scoped names, per-team resource quotas, and RBAC boundaries, without needing separate clusters.
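A Namespace is one of the smallest manifests in Kubernetes; the name below is illustrative:

```yaml
# Minimal Namespace manifest; "staging" is an example name.
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```

Every namespaced object then carries `metadata.namespace: staging` (or is created with `kubectl -n staging`), and quotas and RBAC rules can be attached to that boundary.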

3. Networking

Ingress. The cluster's front door for HTTP/HTTPS. It routes external traffic to internal Services based on hostname or path, typically handling TLS termination along the way.
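A minimal Ingress sketch, with illustrative hostnames and Service names; the TLS Secret is assumed to exist already:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress            # illustrative name
spec:
  tls:
    - hosts: [app.example.com]
      secretName: app-tls      # Secret holding the TLS cert (assumed to exist)
  rules:
    - host: app.example.com    # route by hostname...
      http:
        paths:
          - path: /            # ...and by path
            pathType: Prefix
            backend:
              service:
                name: web      # an internal Service (illustrative name)
                port:
                  number: 80
```

Note that an Ingress object is only routing rules; an ingress controller (nginx, Traefik, a cloud load balancer) must be running in the cluster to act on them.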

Service. A stable IP and DNS name for a group of Pods. Pods are ephemeral — they get replaced, rescheduled, and change IPs — so the Service is the fixed address that Ingress and other Pods talk to.
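The Service's job is visible in its spec: a label selector picks the Pods, and a stable port fronts them. Names and ports here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web           # matches any Pod carrying this label
  ports:
    - port: 80         # the stable port clients use
      targetPort: 8080 # the containerPort inside the selected Pods
```

The selector is the whole mechanism: Pods come and go, but anything matching `app: web` is automatically added to or removed from the Service's endpoints.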

4. Workload Controllers

You almost never create Pods directly. Instead, you pick the controller that matches how your app should behave, and it manages Pods for you.

Deployment. The default for stateless apps. Handles rolling updates, rollbacks, and scaling by managing a ReplicaSet underneath.
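A minimal Deployment sketch; the image and labels are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # the ReplicaSet underneath keeps 3 Pods running
  selector:
    matchLabels:
      app: web
  template:                  # the Pod template the ReplicaSet stamps out
    metadata:
      labels:
        app: web             # must match the selector above
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.4.2  # illustrative image
          ports:
            - containerPort: 8080
```

Changing the template (say, bumping the image tag) triggers a rolling update: the Deployment creates a new ReplicaSet and gradually shifts Pods from old to new.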

StatefulSet. For apps that need stable identity and durable storage — databases, queues, anything where pod-0 and pod-1 aren't interchangeable. Each Pod gets a predictable name and its own PVC.
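The "stable identity plus durable storage" pairing shows up directly in the spec: a `serviceName` for predictable DNS names and `volumeClaimTemplates` for per-Pod PVCs. Names and sizes here are illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db            # headless Service giving each Pod a stable DNS name
  replicas: 2                # creates db-0 and db-1, in order
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # one PVC per Pod: data-db-0, data-db-1
    - metadata:
        name: data
      spec:
        accessModes: [ReadWriteOnce]
        resources:
          requests:
            storage: 20Gi
```

If db-1 is rescheduled to another Node, it comes back as db-1 and reattaches data-db-1, which is exactly what a database replica needs.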

DaemonSet. Runs one Pod per Node (or a chosen subset). Use it for node-level agents: log shippers, metrics collectors, CNI plugins.
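A DaemonSet sketch for a log shipper; the agent image is illustrative. The `hostPath` volume is what makes "one Pod per Node" the right shape: each copy reads its own Node's logs:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-shipper
spec:
  selector:
    matchLabels:
      app: log-shipper
  template:
    metadata:
      labels:
        app: log-shipper
    spec:
      tolerations:           # also run on control-plane nodes, if they are tainted
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule
      containers:
        - name: shipper
          image: fluent/fluent-bit:3.0   # illustrative agent image
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log   # the Node's own log directory
```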

Job / CronJob. A Job runs a Pod to completion for one-off work like migrations or batch processing. A CronJob is a Job on a schedule — the Kubernetes equivalent of cron.
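A CronJob wraps a Job template in a schedule; image and arguments below are illustrative:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"      # standard cron syntax: 02:00 every day
  jobTemplate:
    spec:
      backoffLimit: 3        # retry a failed Pod up to 3 times
      template:
        spec:
          restartPolicy: Never   # run to completion; don't restart in place
          containers:
            - name: report
              image: registry.example.com/report:latest  # illustrative
              args: ["generate", "--date=yesterday"]     # illustrative
```

A one-off Job is just the inner `jobTemplate.spec` as its own `kind: Job` object, without the schedule.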

5. The Execution Layer

ReplicaSet. Keeps exactly N copies of a Pod running and replaces any that fail. You rarely manage these directly; a Deployment creates and updates them on your behalf.

Pod. The smallest deployable unit — one or more containers that share a network namespace (same IP) and a set of volumes. A Pod is the thing that actually gets scheduled onto a Node.
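A bare Pod manifest, useful for debugging even though real workloads go through a controller; the name and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-debug
spec:
  containers:
    - name: web
      image: nginx:1.27
      ports:
        - containerPort: 80
```

A Pod created this way has no controller watching it: if it dies or its Node goes away, nothing replaces it, which is exactly why the controllers in the previous section exist.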

Container. The running instance of your application image. Most Pods have one main container, sometimes with sidecars for logging, proxying, or similar helper tasks.

Volume. A directory available to the containers in a Pod, defined at the Pod level. It outlives individual container restarts and can be backed by anything from a temporary disk to a cloud volume.
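Because volumes are defined at the Pod level, two containers can share one. This sketch uses an `emptyDir` (temporary disk, lives as long as the Pod) to pass data from a main container to a sidecar; names and commands are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  volumes:
    - name: shared
      emptyDir: {}           # temporary disk; survives container restarts, not Pod deletion
  containers:
    - name: app              # writes into the shared directory
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date >> /data/out.log; sleep 5; done"]
      volumeMounts:
        - name: shared
          mountPath: /data
    - name: tailer           # sidecar reading what the app writes
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /data/out.log"]
      volumeMounts:
        - name: shared
          mountPath: /data
```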

6. Configuration and Storage

These objects don't run code — they supply Pods with the data and storage they need.

ConfigMap. Non-sensitive configuration — env vars, config files, feature flags — kept outside the container image so the same image runs anywhere.

Secret. Same shape as a ConfigMap, but for sensitive values: API keys, passwords, TLS certs. Stored with tighter access controls and optionally encrypted at rest.
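The two objects really are the same shape, which a side-by-side sketch makes obvious; names and values below are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-config
data:
  LOG_LEVEL: info
---
apiVersion: v1
kind: Secret
metadata:
  name: web-secrets
stringData:                  # plain values; the API server stores them base64-encoded
  API_KEY: not-a-real-key
---
# Inside a Pod spec, both inject the same way, as environment variables:
#   containers:
#     - name: web
#       envFrom:
#         - configMapRef: { name: web-config }
#         - secretRef:    { name: web-secrets }
```

Both can also be mounted as files via a volume, which is the usual route for config files and TLS certs rather than env vars.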

PersistentVolumeClaim (PVC) and PersistentVolume (PV). A PV is a real piece of storage in the cluster (a cloud disk, an NFS share). A PVC is a Pod's request for storage — "I need 20Gi of SSD" — which Kubernetes binds to a matching PV. This indirection lets developers ask for storage without knowing the underlying infrastructure.
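The "I need 20Gi of SSD" request is literally what a PVC looks like; the storage class name is illustrative and maps to whatever provisioner the cluster offers:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: [ReadWriteOnce]   # mountable by one Node at a time
  storageClassName: fast-ssd     # illustrative class; the admin defines what backs it
  resources:
    requests:
      storage: 20Gi
# A Pod then mounts the claim, never the PV directly:
#   volumes:
#     - name: data
#       persistentVolumeClaim:
#         claimName: db-data
```

The developer's manifest never mentions EBS, NFS, or any vendor; only the StorageClass knows that, which is the point of the indirection.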

How It All Fits Together

The payoff of this hierarchy is separation of concerns, and it shows up once you trace a request end to end: traffic enters through the Ingress, lands on a Service, and is load-balanced to a Pod that a Deployment (via its ReplicaSet) keeps alive; inside the Pod, the container reads its config from a ConfigMap or Secret and writes to a Volume backed by a PVC.

Your application itself is just a container. Everything around it — how it scales, how traffic reaches it, what config it reads, what storage it mounts — is handled by separate objects. Swap those objects and the same image runs anywhere, unchanged.

One axis this post deliberately skips is identity and access — ServiceAccounts, RBAC, and how cluster identity maps to cloud IAM. That would require a post of its own.