Idempotent event consumer with a processed-events table

At-least-once delivery is the default for most queues and streams (brokers retry after timeouts, and consumer-group rebalances redeliver in-flight messages), so consumers must be idempotent. My go-to pattern is a processed_events table keyed by event_id with a unique constraint. When a message arrives, the consumer tries to insert its event_id; if the row already exists, the message is a duplicate and can be safely skipped. The database's uniqueness constraint is the real guarantee, and it holds across consumer instances and restarts. The rest of the handler can then assume "this event is new" and apply its state changes.

In production I also store a processed_at timestamp and sometimes an event_type for debugging. This approach is simple, observable, and resilient to retries and rebalances, especially when the marker insert and the domain update share one transaction, so a partial failure rolls back both and duplicates can't slip through. A sketch of that combination follows.
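Here is a minimal sketch of the handler, assuming PostgreSQL and psycopg2; the `accounts` update and the event fields (`id`, `type`, `amount`, `account_id`) are hypothetical placeholders for whatever your domain write looks like:

```python
import psycopg2

# Illustrative schema, run once at deploy time. The primary key on event_id
# is the dedup guarantee; event_type and processed_at are there for debugging.
DDL = """
CREATE TABLE IF NOT EXISTS processed_events (
    event_id     TEXT PRIMARY KEY,
    event_type   TEXT,
    processed_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
"""

def handle_event(conn, event):
    """Apply one event idempotently. Returns False if it was a duplicate."""
    with conn:  # one transaction: marker insert + domain update commit together
        with conn.cursor() as cur:
            # ON CONFLICT DO NOTHING avoids an aborted transaction on replays;
            # rowcount == 0 means the event_id was already present.
            cur.execute(
                "INSERT INTO processed_events (event_id, event_type) "
                "VALUES (%s, %s) ON CONFLICT (event_id) DO NOTHING",
                (event["id"], event["type"]),
            )
            if cur.rowcount == 0:
                return False  # duplicate delivery: skip, but still ack upstream
            # The domain write lives in the same transaction, so a crash here
            # rolls back the marker too and the retry starts from scratch.
            cur.execute(
                "UPDATE accounts SET balance = balance + %s WHERE id = %s",
                (event["amount"], event["account_id"]),
            )
    return True
```

I reach for `ON CONFLICT DO NOTHING` plus a `rowcount` check rather than catching a unique-violation exception because it keeps the transaction usable; either way, the unique index is what actually enforces single application of the event. Note that the consumer should still ack a duplicate message upstream, otherwise the broker will just redeliver it.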