Messaging (critical)

Delivery Guarantees

At-most-once: messages may be lost but never duplicated. At-least-once: messages are never lost but may be delivered multiple times. Exactly-once: every message is delivered precisely once.
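The difference between the first two guarantees comes down to what the producer does when an acknowledgment never arrives. A minimal sketch, with a toy `Broker` class (illustrative, not a real client) whose first ack gets "lost" in transit:

```python
class Broker:
    """Toy broker: stores every message it receives, but loses the first ack."""
    def __init__(self):
        self.received = []
        self.fail_acks = 1  # the first ack gets "lost" in transit

    def send(self, msg):
        self.received.append(msg)           # broker stored the message...
        if self.fail_acks:
            self.fail_acks -= 1
            raise TimeoutError("ack lost")  # ...but the producer never hears back

def at_most_once(broker, msg):
    try:
        broker.send(msg)  # fire-and-forget: no retry, so a real loss stays lost
    except TimeoutError:
        pass

def at_least_once(broker, msg):
    while True:           # retry until acked: the message may be stored twice
        try:
            broker.send(msg)
            return
        except TimeoutError:
            continue

b = Broker()
at_least_once(b, "m1")
print(b.received)  # ['m1', 'm1']: duplicated because the first ack was lost
```

The duplicate in the output is exactly why at-least-once delivery forces idempotent consumers: the producer cannot distinguish "message lost" from "ack lost", so it must retry, and retries can duplicate.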

Memory anchor

At-most-once = shouting across a canyon (maybe they hear you, maybe not). At-least-once = sending a certified letter with return receipt (they WILL get it, possibly twice). Exactly-once = teleportation (theoretically perfect, practically very expensive).

Expected depth

At-most-once: fire-and-forget, producer doesn't retry. Suitable for metrics, logs where occasional loss is acceptable. At-least-once: producer retries on failure; consumer must be idempotent to handle duplicates. This is the practical standard for most systems. Exactly-once: requires coordination between producer, broker, and consumer — expensive and often an illusion at the system level even if the broker guarantees it (a consumer may crash after processing but before acknowledging).
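The idempotent-consumer idea above can be sketched in a few lines. This assumes each message carries a unique `msg_id`; a plain Python set stands in for the Redis set or DB table a real system would use, where the dedup check and the side effect must also be atomic (e.g. in the same DB transaction):

```python
processed_ids: set[str] = set()
effects: list[str] = []  # stands in for the real side effect (DB write, charge, ...)

def handle(message: dict) -> None:
    msg_id = message["msg_id"]
    if msg_id in processed_ids:
        return  # duplicate from an at-least-once redelivery: skip it
    effects.append(message["payload"])  # the side effect we must not repeat
    processed_ids.add(msg_id)

# An at-least-once broker may deliver the same message twice:
handle({"msg_id": "m1", "payload": "charge $10"})
handle({"msg_id": "m1", "payload": "charge $10"})  # redelivery, deduplicated
print(effects)  # ['charge $10']: the charge ran exactly once
```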

Deep — senior internals

Kafka exactly-once semantics (EOS) combines producer idempotence (each message carries a sequence number; the broker deduplicates retries within a producer session) with the transactional producer (atomic writes across partitions plus consumer offset commits in a single transaction). This achieves exactly-once within the Kafka ecosystem. Truly end-to-end exactly-once, however, also requires idempotent consumers: if a consumer processes a message and crashes before committing the offset, it will reprocess that message on restart. The usual solution is the transactional outbox pattern: write to the business table and an outbox table in one ACID transaction, and let a relay process poll the outbox and publish to Kafka. Combined with consumers that record their results idempotently, this yields effectively exactly-once end-to-end semantics on top of at-least-once Kafka delivery.
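The outbox pattern can be sketched with sqlite3 standing in for the application database. Table and column names here are illustrative, and `relay_poll` is a stand-in for the relay process that would publish each row to Kafka and delete it on success:

```python
import json
import sqlite3
import uuid

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, status TEXT)")
db.execute("CREATE TABLE outbox (event_id TEXT PRIMARY KEY, payload TEXT)")

def place_order(order_id: str) -> None:
    with db:  # one ACID transaction: both rows commit, or neither does
        db.execute("INSERT INTO orders VALUES (?, ?)", (order_id, "PLACED"))
        db.execute(
            "INSERT INTO outbox VALUES (?, ?)",
            (str(uuid.uuid4()),
             json.dumps({"type": "OrderPlaced", "order_id": order_id})),
        )

def relay_poll() -> list[dict]:
    """Stand-in for the relay: drain outbox rows that would go to Kafka."""
    with db:
        rows = db.execute("SELECT payload FROM outbox").fetchall()
        db.execute("DELETE FROM outbox")
    return [json.loads(p) for (p,) in rows]

place_order("o-42")
events = relay_poll()
print(events)  # [{'type': 'OrderPlaced', 'order_id': 'o-42'}]
```

The key property: the `OrderPlaced` event exists in the outbox if and only if the `orders` row committed, so the relay can never publish an event for a state change that was rolled back.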

🎤 Interview-ready answer

I design for at-least-once delivery with idempotent consumers — it's simpler and more reliable than exactly-once. Every consumer checks whether it has already processed a message (using the message ID as a deduplication key in a DB or Redis set) before processing. For payment or financial operations, I use the transactional outbox pattern: write the state change and the outbound event atomically in the DB, then a relay publishes to Kafka. This guarantees the event is published if and only if the DB transaction commits.

Common trap

Trusting Kafka's exactly-once producer setting alone to give end-to-end exactly-once. Kafka EOS covers the broker layer; your consumer can still process a message, crash, and reprocess it on restart. Idempotent consumers are always required.