Messaging (high)

Pub/Sub vs Message Queue

Message queues deliver each message to one consumer (point-to-point). Pub/sub systems deliver each message to all subscribers of a topic (broadcast). Both decouple producers from consumers.
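The two delivery models can be contrasted in a few lines of code. This is an illustrative in-memory sketch (the `Queue` and `Topic` classes are hypothetical, not a real library): the queue round-robins each message to exactly one competing consumer, while the topic fans every message out to all subscribers.

```python
from collections import deque

class Queue:
    """Point-to-point: each message is delivered to exactly one consumer."""
    def __init__(self):
        self.messages = deque()
        self.consumers = []

    def publish(self, msg):
        self.messages.append(msg)

    def dispatch(self):
        # Round-robin over competing consumers: one delivery per message.
        i = 0
        while self.messages:
            msg = self.messages.popleft()
            self.consumers[i % len(self.consumers)](msg)
            i += 1

class Topic:
    """Pub/sub: each message is delivered to every subscriber."""
    def __init__(self):
        self.subscribers = []

    def publish(self, msg):
        for subscriber in self.subscribers:
            subscriber(msg)

# Queue: four messages split across two workers, each processed once.
q_seen = {"worker_a": [], "worker_b": []}
q = Queue()
q.consumers = [q_seen["worker_a"].append, q_seen["worker_b"].append]
for n in range(4):
    q.publish(n)
q.dispatch()
# q_seen == {"worker_a": [0, 2], "worker_b": [1, 3]}

# Topic: both subscribers see all four messages.
t_seen = {"sub_a": [], "sub_b": []}
t = Topic()
t.subscribers = [t_seen["sub_a"].append, t_seen["sub_b"].append]
for n in range(4):
    t.publish(n)
# t_seen == {"sub_a": [0, 1, 2, 3], "sub_b": [0, 1, 2, 3]}
```

Either way the producer never knows who consumes, which is the decoupling both models share.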

Memory anchor

Message queue = a to-do list on the fridge (one person crosses it off). Pub/sub = a radio broadcast (everyone with a radio hears it). Kafka = a recorded podcast -- broadcast AND replayable.

Expected depth

Message queues (SQS, RabbitMQ) are used for task distribution: one job should be processed by exactly one worker. Pub/sub (SNS, Kafka, Google Pub/Sub) broadcasts events to multiple independent consumers — an order-placed event might be consumed by inventory, billing, and notification services simultaneously. Kafka blurs the line: it's a pub/sub log where each consumer group gets a full copy of the stream, but within a group, partitions are assigned to individual consumers (queue semantics within the group).

Deep — senior internals

The fundamental difference is consumer model. Queues have competing consumers (N workers share a queue, each message processed once). Pub/sub has independent subscriber groups, each getting all messages. Kafka achieves both: topic partitions are the unit of parallelism. Within a consumer group, each partition goes to one consumer (queue semantics, ordered processing per partition). Multiple independent consumer groups each read all partitions independently (pub/sub semantics). This makes Kafka ideal for event streaming where the same event stream drives multiple downstream systems. For ordered processing within a logical unit (all events for user_id=123 processed in order), Kafka's partition key ensures all events for that key go to the same partition.
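The key-to-partition routing and per-group assignment above can be sketched in a few lines. This is a simplified stand-in, not a Kafka client: real Kafka hashes the key bytes with murmur2, so `zlib.crc32` here is only an illustrative deterministic hash, and the modulo-based group assignment is a toy version of a real assignor.

```python
import zlib

NUM_PARTITIONS = 6

def partition_for(key: str) -> int:
    # All events with the same key map to the same partition,
    # which is what preserves per-key ordering.
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

# Every event for user_123 lands on one partition.
user_partitions = {partition_for("user_123") for _ in range(5)}
# len(user_partitions) == 1

def assign(partitions, consumers):
    # Within a consumer group, each partition goes to exactly one consumer
    # (queue semantics inside the group).
    return {c: [p for p in partitions if p % len(consumers) == i]
            for i, c in enumerate(consumers)}

# Two independent groups each cover all six partitions (pub/sub semantics
# across groups); within each group the partitions are divided up.
group_a = assign(range(NUM_PARTITIONS), ["a1", "a2", "a3"])
group_b = assign(range(NUM_PARTITIONS), ["b1"])
# group_a == {"a1": [0, 3], "a2": [1, 4], "a3": [2, 5]}
# group_b == {"b1": [0, 1, 2, 3, 4, 5]}
```

Note that ordering is guaranteed only within a partition: events for different keys on different partitions may interleave arbitrarily.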

🎤 Interview-ready answer

For work distribution (process this job once), I use a queue (SQS). For event broadcasting where multiple systems need the same event, I use pub/sub (Kafka topics with multiple consumer groups). Kafka is my default for event-driven architectures at scale because it combines both: each consumer group gets a full ordered stream, and within a group, processing is parallelized across partitions. The durable log also enables replay — new consumers can process historical events from offset 0.

Common trap

Assuming pub/sub means fire-and-forget with no durability. Kafka retains messages for a configurable retention period (days or weeks); even after every current consumer has read a message, it remains available for replay. Producers simply append to the log; each consumer group manages its own offset position.
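The retention-plus-offsets behavior can be made concrete with a minimal sketch (the `Log` class and its methods are hypothetical, modeling a durable log, not a real client): records stay in the log after consumption, and each group tracks its own read position, so a group that joins late replays from offset 0.

```python
class Log:
    """Toy durable log: append-only records plus per-group offsets."""
    def __init__(self):
        self.records = []   # retained after consumption (subject to retention)
        self.offsets = {}   # group name -> next offset to read

    def append(self, record):
        self.records.append(record)

    def poll(self, group, max_records=10):
        start = self.offsets.get(group, 0)  # unknown groups start at offset 0
        batch = self.records[start:start + max_records]
        self.offsets[group] = start + len(batch)
        return batch

log = Log()
for event in ["order_created", "order_paid", "order_shipped"]:
    log.append(event)

billing = log.poll("billing")
# billing == ["order_created", "order_paid", "order_shipped"]

# A consumer group added later still sees the full history:
analytics = log.poll("analytics")
# analytics == ["order_created", "order_paid", "order_shipped"]

# Records are still in the log after every group has read them.
# len(log.records) == 3
```

Real Kafka additionally expires records past the retention window and commits group offsets to the broker, but the decoupling is the same: consumption never deletes data.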

Related concepts