Node Components
Each worker node runs kubelet (pod lifecycle manager), kube-proxy (network rules), and a container runtime (containerd/CRI-O).
kubelet = the on-site foreman at each construction site (node), taking orders from HQ (API server) and managing the workers (containers). kube-proxy = the guy who paints road signs — doesn't direct traffic himself, just sets up the signs so cars (packets) know where to go.
kubelet: registers the node, watches the API server for pods assigned to this node, starts/stops/monitors containers via the CRI, and reports pod status back. kube-proxy: maintains iptables (or IPVS) rules for Service VIPs — the kernel uses these rules to route traffic to the correct pod endpoints. Container runtime: containerd (or CRI-O) — implements the CRI to actually manage container lifecycle. Node conditions: Ready, MemoryPressure, DiskPressure, PIDPressure.
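The Service-VIP-to-endpoint mapping kube-proxy maintains can be modeled as a simple lookup table. A minimal sketch — the names (`service_table`, `pick_endpoint`) and addresses are illustrative, not real kube-proxy code:

```python
import random

# Hypothetical model of the rules kube-proxy programs into the kernel:
# a Service VIP:port maps to the set of ready pod endpoints.
service_table = {
    ("10.96.0.10", 80): ["172.17.0.4:8080", "172.17.0.5:8080", "172.17.0.6:8080"],
}

def pick_endpoint(vip, port, rng=random):
    """Pick one backend pod for a new connection to a Service VIP,
    roughly how iptables 'statistic' rules load-balance."""
    endpoints = service_table[(vip, port)]
    return rng.choice(endpoints)

# Whatever gets picked is a real pod address — never kube-proxy itself.
assert pick_endpoint("10.96.0.10", 80) in service_table[("10.96.0.10", 80)]
```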
kubelet takes PodSpecs from the API server (or from static pod manifests in /etc/kubernetes/manifests — this is how kubeadm bootstraps the control plane components). kubelet talks to containerd over the CRI gRPC API. kube-proxy in IPVS mode scales better than iptables mode for large clusters (thousands of Services) — IPVS uses hash tables (O(1) lookup) vs iptables' O(n) chain traversal. eBPF-based alternatives (Cilium) replace kube-proxy entirely — no iptables, lower latency. Node allocatable = node capacity minus system-reserved minus kube-reserved minus the hard eviction threshold — this is the actual schedulable capacity.
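The allocatable arithmetic is worth working once. A sketch with made-up reservation numbers (the 100Mi default is the memory.available hard eviction threshold; the function name is illustrative):

```python
def allocatable(capacity_mib, system_reserved_mib, kube_reserved_mib,
                eviction_threshold_mib=100):
    """Node allocatable = capacity - system-reserved - kube-reserved
    - hard eviction threshold (memory default: 100Mi)."""
    return capacity_mib - system_reserved_mib - kube_reserved_mib - eviction_threshold_mib

# Example: an 8 GiB node reserving 512Mi for the OS and 512Mi for
# kubelet/runtime leaves ~6.9 GiB the scheduler can actually place pods on.
print(allocatable(8192, 512, 512))  # → 7068
```

This is why `kubectl describe node` shows Capacity and Allocatable as two different numbers — the scheduler only ever sees the latter.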
Every node has three things: kubelet (talks to API server, manages pod lifecycle, reports health), kube-proxy (maintains iptables/IPVS rules for Service routing), and a container runtime (containerd). kubelet is the most critical — it's the local agent that actually starts and stops containers based on what the API server says. kube-proxy translates Service IPs into pod IPs. For large clusters, replace kube-proxy with Cilium (eBPF) for better scalability.
kube-proxy doesn't proxy traffic — it just sets up iptables rules that the kernel uses. The word 'proxy' is misleading. Traffic goes directly from client to pod — not through kube-proxy. kube-proxy just programs the routing.
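The control-path/data-path split above can be sketched as two separate functions: kube-proxy is a loop that writes rules, and the kernel applies them per packet with no kube-proxy involvement. Everything here (`nat_rules`, `kube_proxy_sync`, `kernel_dnat`) is an illustrative model, not a real API:

```python
# Illustrative model of the kube-proxy/kernel split. Not real APIs.

nat_rules = {}  # the "iptables" table the kernel consults per packet

def kube_proxy_sync(services):
    """Control path: kube-proxy watches Services/EndpointSlices and
    (re)writes NAT rules. It never touches actual traffic."""
    nat_rules.clear()
    nat_rules.update(services)

def kernel_dnat(packet):
    """Data path: the kernel rewrites the destination VIP to a pod IP
    using the precomputed rules — traffic goes client -> pod directly."""
    dst = packet["dst"]
    if dst in nat_rules:
        packet = dict(packet, dst=nat_rules[dst][0])  # first ready endpoint
    return packet

kube_proxy_sync({"10.96.0.10:80": ["172.17.0.4:8080"]})
pkt = kernel_dnat({"src": "192.168.1.5:51000", "dst": "10.96.0.10:80"})
assert pkt["dst"] == "172.17.0.4:8080"  # DNAT happened without kube-proxy on the path
```

If kube-proxy crashes, existing rules keep routing traffic — only rule updates stop. That's the clearest evidence it isn't on the data path.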