Run your first three Kubernetes objects — Pod, Deployment, Service — on a local cluster, then understand why each one exists and how they fit together.
By the end of this post you'll have a Kubernetes cluster running on your laptop, three core objects deployed to it (Pod, Deployment, Service), and a working mental model for what each one does. About 30 minutes start to finish.
You'll need Docker installed, plus kubectl and a local cluster tool (we'll use kind). Skip ahead to install steps if you don't have them yet.
Kubernetes is a system for running container workloads across a fleet of machines. Three core objects show up first when you start using it: the Pod (the smallest deployable unit, wrapping one or more containers), the Deployment (which keeps a desired number of identical Pods running), and the Service (a stable network endpoint in front of those Pods).
Almost everything else in Kubernetes (Ingresses, ConfigMaps, StatefulSets, autoscalers, RBAC) is an optimization on top of these three. Master these and you can read most production manifests.
kubectl is the command-line tool that talks to the cluster. kind runs Kubernetes inside Docker so you can have a real cluster on your laptop in under a minute.
On Mac:
brew install kubectl kind
On Linux, follow the official docs for both. On Windows, use WSL2 or Docker Desktop's bundled Kubernetes.
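On Linux, one common approach is downloading the release binaries directly. A sketch (the kind version below is an example; check the release pages for the current one):

```shell
# kubectl: download the latest stable release binary and install it.
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl

# kind: download a release binary (pin whatever version is current).
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.23.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
```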
Verify:
kubectl version --client
kind version
You should see version output for both.
kind create cluster --name k8s-101
kind boots a cluster inside Docker. Takes about 30 seconds. When it finishes:
kubectl get nodes
You should see one node listed with status Ready. That's the whole cluster — one fake "machine" that's really just a Docker container.
Save this as pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
    - name: app
      image: nginx:1.27-alpine
      ports:
        - containerPort: 80
Apply it:
kubectl apply -f pod.yaml
kubectl get pods
You should see hello 1/1 Running 0 30s. The Pod is now running an nginx container.
Test it from inside the cluster:
kubectl exec hello -- curl -s http://localhost:80 | head -5
You should see nginx's default welcome page HTML.
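You can also reach the Pod from your own machine without exec'ing into it. `kubectl port-forward` tunnels a local port to the Pod; local port 8080 here is an arbitrary choice:

```shell
# Forward local port 8080 to port 80 inside the hello Pod (runs in the background).
kubectl port-forward pod/hello 8080:80 &
sleep 2

# The Pod now answers on localhost.
curl -s http://localhost:8080 | head -5

# Stop the tunnel when done.
kill %1
```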
So that's a Pod. One container, running. But Pods don't restart themselves if they die, and you can't easily run multiple copies. That's what Deployments are for.
kubectl delete pod hello
Save this as deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: app
          image: nginx:1.27-alpine
          ports:
            - containerPort: 80
Apply it:
kubectl apply -f deployment.yaml
kubectl get pods -l app=hello
You should see three Pods, each with 1/1 Running. The Deployment manages them.
Try killing one:
kubectl delete pod $(kubectl get pods -l app=hello -o name | head -1)
kubectl get pods -l app=hello
You'll see the count drop briefly, then the Deployment spins up a replacement. That's the self-healing behavior — you say "I want 3", Kubernetes makes sure there are always 3.
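Changing the replica count is the same declarative move. A sketch using `kubectl scale`:

```shell
# Ask for 5 replicas instead of 3; the Deployment creates two more Pods.
kubectl scale deployment hello --replicas=5
kubectl get pods -l app=hello

# Scale back down; Kubernetes terminates the extras.
kubectl scale deployment hello --replicas=3
```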
The three Pods each have their own IP, but those IPs are ephemeral — they change when Pods restart. A Service gives you one stable endpoint.
Save this as service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 80
Apply it:
kubectl apply -f service.yaml
kubectl get service hello
You'll see a ClusterIP like 10.96.123.45. That's the stable internal address. Anything in the cluster can hit that IP and reach one of your nginx Pods (load-balanced).
Test it:
kubectl run curl-test --image=curlimages/curl:latest -i --rm --restart=Never -- \
curl -s http://hello.default.svc.cluster.local
You should see nginx HTML returned. Cluster DNS resolved the Service name to its ClusterIP, and the Service picked one of your three Pods and forwarded the request.
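To see exactly which Pod IPs sit behind the Service, list its endpoints. Each address should correspond to one of the three Pods:

```shell
# The Service's selector resolves to these Pod IPs.
kubectl get endpoints hello

# Compare against the Pods' own IPs — they should match.
kubectl get pods -l app=hello -o wide
```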
kind delete cluster --name k8s-101
That deletes the whole cluster. No leftover state on your machine.
Forgetting the label selector. A Service uses selector to decide which Pods receive traffic. If your Service's selector doesn't match the Pod labels, traffic goes nowhere — silently. Always check kubectl get endpoints <service> to confirm Pods are linked.
Editing a Pod created by a Deployment. If you change a Deployment-managed Pod directly, the Deployment will overwrite your change on the next reconcile. Edit the Deployment, not the Pod.
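The right way to change a managed Pod, for example its image, is through the Deployment. A sketch using `kubectl set image` (the `app=` part names the container from the manifest; the newer tag is hypothetical):

```shell
# Update the container image on the Deployment, not on a Pod.
kubectl set image deployment/hello app=nginx:1.27.1-alpine  # hypothetical tag

# Watch the rollout replace Pods one by one.
kubectl rollout status deployment/hello

# If something breaks, roll back to the previous revision.
kubectl rollout undo deployment/hello
```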
Confusing Pod and Container. A Pod can have multiple containers (sidecars), but most Pods have one. The Pod is the scheduling unit; the container is what actually runs.
Using kubectl apply without seeing the diff. Run kubectl diff -f <file> before applying — it shows exactly what will change. Saves you from accidental destructive updates.
You've used the three core objects. The next levels build on them: Ingress for routing external traffic, ConfigMaps and Secrets for configuration, StatefulSets for workloads that need stable identity.
Pods, Deployments, Services. Three objects, one mental model. Most production manifests are these three plus a handful of namespaces, configmaps, and secrets. Once they click, the rest is incremental.