Affinity, Taints & Tolerations
Taints repel pods from nodes. Tolerations let pods be scheduled on tainted nodes. Affinity/anti-affinity express preferences or requirements for node or pod co-location.
Taints = 'WET PAINT' signs on park benches (nodes). Only pods with tolerations (rain jackets) can sit there. Affinity = 'I want to sit NEAR the playground.' Anti-affinity = 'Keep me AWAY from that noisy kid.' topologySpreadConstraints = 'Spread kids evenly across all playgrounds.'
Taints: kubectl taint nodes node1 key=value:NoSchedule|PreferNoSchedule|NoExecute. Tolerations in the pod spec let a pod be scheduled onto matching tainted nodes — they permit placement, they don't force it. nodeSelector: simplest way to constrain pods to nodes with specific labels. nodeAffinity: richer expressions (In, NotIn, Exists, Gt, Lt) with requiredDuringScheduling (hard) vs preferredDuringScheduling (soft). podAffinity/podAntiAffinity: co-locate or spread pods relative to other pods.
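As a sketch of how these pieces combine (names like gpu-pod and the disktype=ssd label are hypothetical), a pod that tolerates a gpu=true:NoSchedule taint and requires an SSD-labeled node might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod                  # hypothetical pod name
spec:
  tolerations:
  - key: "gpu"                   # matches: kubectl taint nodes node1 gpu=true:NoSchedule
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard requirement
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype        # hypothetical node label
            operator: In
            values: ["ssd"]
  containers:
  - name: app
    image: nginx
```

Note the toleration only removes the taint barrier; the nodeAffinity is what actually steers the pod to specific nodes.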
NoSchedule: don't schedule new pods without a matching toleration (existing pods stay). PreferNoSchedule: avoid scheduling there if possible. NoExecute: also evict already-running pods that lack a matching toleration. The node controller automatically applies taints such as node.kubernetes.io/memory-pressure, node.kubernetes.io/disk-pressure, and node.kubernetes.io/not-ready based on node conditions. topologySpreadConstraints (preferred over podAntiAffinity for spreading): spread pods evenly across zones/nodes within a maxSkew. This is critical for HA — podAntiAffinity with requiredDuringScheduling blocks placement when no suitable node exists, potentially leaving replicas unscheduled, while topologySpreadConstraints with whenUnsatisfiable: DoNotSchedule vs ScheduleAnyway lets you choose between hard and best-effort spreading.
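A minimal topologySpreadConstraints sketch for a Deployment's pod template (the app: web label is a hypothetical example):

```yaml
topologySpreadConstraints:
- maxSkew: 1                          # max pod-count difference allowed between topology domains
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: ScheduleAnyway   # soft: best-effort spread, never blocks scheduling
  labelSelector:
    matchLabels:
      app: web                        # hypothetical app label to count pods against
```

Switching whenUnsatisfiable to DoNotSchedule turns this into a hard constraint, comparable to required podAntiAffinity but with tunable skew.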
Taints + tolerations are for restricting which pods CAN run on a node — dedicated GPU nodes, Windows nodes, spot instances. Affinity is for expressing preferences — keep frontend pods away from database pods (anti-affinity), or co-locate cache with app (affinity). For spreading pods across availability zones for HA, use topologySpreadConstraints — it's more ergonomic than podAntiAffinity and handles scaling gracefully. Control plane nodes are tainted node-role.kubernetes.io/control-plane:NoSchedule by default.
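For example, system DaemonSets that must run on control plane nodes carry a toleration for that default taint, along the lines of:

```yaml
tolerations:
- key: node-role.kubernetes.io/control-plane   # default control plane taint key
  operator: Exists                             # tolerate any value for this key
  effect: NoSchedule
```

The same pattern (taint the nodes, add tolerations to the privileged workloads) is how dedicated GPU or spot-instance pools are fenced off.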
podAntiAffinity with requiredDuringSchedulingIgnoredDuringExecution can make a Deployment unschedulable. If you require each pod on a different node but have more replicas than nodes, the scheduler can't place all pods. Use preferredDuringScheduling or topologySpreadConstraints with ScheduleAnyway to handle this gracefully.
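A soft anti-affinity sketch that avoids this failure mode (the app: web label is hypothetical): the scheduler tries to place replicas on different nodes but still schedules them when it can't.

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:   # soft: a preference, never a blocker
    - weight: 100                  # 1-100; higher weight = stronger preference
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: web               # hypothetical label of the pods to spread away from
        topologyKey: kubernetes.io/hostname   # "different node" granularity
```

With more replicas than nodes, extra replicas simply co-locate instead of staying Pending.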