What Is Event-Driven Architecture?
Event-Driven Architecture (EDA) is an integration paradigm in which services communicate by producing and consuming events — discrete records that describe something that has happened in the system. Unlike traditional request/response or polling models, EDA allows producers and consumers to operate independently, without direct knowledge of each other.
In the context of Enterprise Application Integration (EAI), this shift has significant consequences. Systems that once relied on point-to-point connections or centralised ESBs can now participate in a loosely coupled event mesh, reacting to business facts as they occur rather than waiting for synchronous calls.
Core EDA Patterns
1. Event Notification
The simplest form: a system publishes an event to signal that something happened (OrderPlaced, InvoiceApproved). Consumers react as needed. The producer does not know — or care — who is listening. This pattern is the foundation on which the richer patterns below build.
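As a minimal sketch, the decoupling can be shown with an in-memory publish/subscribe bus — the `Broker` class, event names, and handler are illustrative, not a production broker:

```python
from collections import defaultdict
from typing import Callable

class Broker:
    """Minimal in-memory event bus: producers publish, consumers subscribe."""
    def __init__(self):
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        # The producer never learns who (if anyone) is listening.
        for handler in self._subscribers[event_type]:
            handler(payload)

broker = Broker()
notifications = []
broker.subscribe("OrderPlaced", lambda e: notifications.append(f"notify warehouse: {e['order_id']}"))
broker.publish("OrderPlaced", {"order_id": "ORD-42"})
```

Note that adding a second subscriber requires no change to the producer — the defining property of event notification.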
2. Event-Carried State Transfer
Events carry enough data for consumers to act without querying the source. An OrderShipped event might include the full order, recipient address, and carrier reference. Consumers become self-sufficient, reducing inter-service chattiness at the cost of larger message payloads.
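A sketch of what such a payload might look like — the field names and the notification function are assumptions for illustration; the point is that the consumer never calls back into the order service:

```python
# An OrderShipped event carrying everything a downstream consumer needs.
order_shipped = {
    "type": "OrderShipped",
    "order_id": "ORD-42",
    "lines": [{"sku": "SKU-1", "qty": 2}],
    "recipient": {"name": "A. Customer", "address": "1 High Street"},
    "carrier_reference": "TRK-99887",
}

def build_shipping_notification(event: dict) -> str:
    # Self-sufficient: every value comes from the event payload itself,
    # at the cost of a larger message on the wire.
    return (f"Order {event['order_id']} shipped to {event['recipient']['name']} "
            f"via carrier ref {event['carrier_reference']}")
```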
3. Event Sourcing
Rather than persisting current state, the system persists a log of all events. The current state of any entity is derived by replaying its event history. Event sourcing provides a full audit trail, temporal querying, and the ability to rebuild projections — highly valuable in regulated industries and complex BPM scenarios.
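The replay mechanic can be sketched as a fold over the event log — the account events and field names below are invented for illustration:

```python
from functools import reduce

# Append-only event log for one account entity.
events = [
    {"type": "AccountOpened", "balance": 0},
    {"type": "FundsDeposited", "amount": 100},
    {"type": "FundsWithdrawn", "amount": 30},
]

def apply(state: dict, event: dict) -> dict:
    """Fold one event into the current state; replaying the full log rebuilds it."""
    if event["type"] == "AccountOpened":
        return {"balance": event["balance"]}
    if event["type"] == "FundsDeposited":
        return {"balance": state["balance"] + event["amount"]}
    if event["type"] == "FundsWithdrawn":
        return {"balance": state["balance"] - event["amount"]}
    return state  # unknown event types are ignored, which eases schema evolution

current = reduce(apply, events, {})
# current == {"balance": 70}
```

Replaying a prefix of the log yields the state at any past point in time, which is what makes temporal querying and projection rebuilds possible.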
4. CQRS (Command Query Responsibility Segregation)
Often paired with event sourcing, CQRS separates the write model (commands that mutate state) from the read model (queries optimised for consumption). In integration scenarios, this allows you to maintain specialised read stores — for example, a reporting database updated by events from a transactional system.
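The split can be sketched as follows — the command handler, projection, and in-memory read store are illustrative stand-ins for a transactional system feeding a reporting database:

```python
# Write side appends events; a projection folds them into a denormalised read view.
event_log: list[dict] = []
read_store: dict[str, dict] = {}   # e.g. a reporting view keyed by order id

def handle_place_order(order_id: str, total: float) -> None:
    # Command (write model): validate, then record the fact.
    # It never touches the read store directly.
    event = {"type": "OrderPlaced", "order_id": order_id, "total": total}
    event_log.append(event)
    project(event)

def project(event: dict) -> None:
    # Query (read model): keep a query-optimised view up to date from events.
    if event["type"] == "OrderPlaced":
        read_store[event["order_id"]] = {"total": event["total"], "status": "placed"}

handle_place_order("ORD-42", 99.50)
```

In a real deployment the projection would typically consume events asynchronously from the broker, so the read store is eventually, not immediately, consistent.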
Practical Considerations
Adopting EDA in an enterprise context introduces challenges that must be addressed deliberately:
- Message ordering — Partitioned brokers (Kafka, Azure Event Hubs) preserve order within a partition. Design your partition key strategy around the entity whose events must be ordered.
- Idempotency — Consumers must handle duplicate delivery gracefully. Use idempotency keys or natural event identifiers to detect and discard replays.
- Dead-letter queues — Messages that cannot be processed should be captured for inspection and replay, not silently discarded.
- Schema evolution — Events are a public API. Use a schema registry (Confluent Schema Registry, Azure Schema Registry) to manage backward-compatible changes.
- Observability — Distributed event flows require distributed tracing. Propagate correlation IDs through event headers.
EDA in the KONDEVS Integration Practice
At KONDEVS, we help enterprises transition from monolithic integration topologies to event-driven meshes. Whether you are migrating a legacy ESB to a modern broker, instrumenting SAP with outbound events, or designing a greenfield microservices platform, the pattern fundamentals remain constant: produce facts, not commands; design for eventual consistency; and treat the event log as a first-class architectural artefact.
The result is an integration backbone that scales horizontally, survives partial failures gracefully, and evolves with your business without the brittle coupling that characterises older integration styles.