Defense information systems have a requirement that commercial applications rarely face: every state change in the system must be permanently recorded, tamper-evident, and reproducible. Post-operation analysis (after-action review), legal accountability for engagements, intelligence value assessments of collected data — all require an authoritative, immutable record of what the system knew and what decisions were made at every point in time. Event sourcing is the architectural pattern that provides this, and understanding how to implement it correctly in defense contexts is the subject of this article.
Event Sourcing vs CRUD: The Core Difference
In a traditional CRUD (Create, Read, Update, Delete) system, state is stored as the current value of each entity. When a track position is updated, the previous position is overwritten. When a SITREP is revised, the previous version may or may not be retained, depending on how the application is designed. The system state at any given historical moment is not intrinsically recoverable from the database; it is recoverable only if explicit versioning was designed in.
Event sourcing inverts this model. State is never stored directly. Instead, every change to system state is stored as an event — an immutable, append-only record. The event store contains: TrackPositionUpdated at time T1 to coordinates X1; TrackPositionUpdated at time T2 to coordinates X2; TrackIdentityAttributed at time T3 to unit designation Y. The current state of a track is always computable by replaying the event stream from the beginning. The event store itself is an append-only log — no event is ever modified or deleted.
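The replay mechanism can be sketched in a few lines of Python. The event names are taken from the paragraph above; the field layouts and the fold logic are illustrative assumptions, not a fixed schema:

```python
from dataclasses import dataclass

# Event types from the track example above; field layout is assumed.
@dataclass(frozen=True)  # frozen: events are immutable once recorded
class TrackPositionUpdated:
    track_id: str
    lat: float
    lon: float
    occurred_at: str

@dataclass(frozen=True)
class TrackIdentityAttributed:
    track_id: str
    unit_designation: str
    occurred_at: str

def replay(events):
    """Fold the append-only event stream into current track state."""
    state = {}
    for event in events:
        if isinstance(event, TrackPositionUpdated):
            state["position"] = (event.lat, event.lon)
        elif isinstance(event, TrackIdentityAttributed):
            state["identity"] = event.unit_designation
        state["as_of"] = event.occurred_at
    return state

# The stream described in the text: positions at T1 and T2, identity at T3.
stream = [
    TrackPositionUpdated("track-12345", 34.1, 45.2, "T1"),
    TrackPositionUpdated("track-12345", 34.2, 45.3, "T2"),
    TrackIdentityAttributed("track-12345", "unit-Y", "T3"),
]
current = replay(stream)
```

Nothing in the store is ever overwritten; `current` is always a derived value, recomputable from the stream.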
The operational consequence is that every version of every entity is always available. "What did the system know about this track at 14:23:07?" is answerable by replaying events up to that timestamp. This capability is operationally essential for defense systems and essentially free to implement if the architecture is designed for it from the start.
Defense Use Cases for Event Sourcing
SITREP history: Situation reports are revised frequently as new intelligence arrives. In a CRUD system, the current SITREP reflects the latest assessment but the history of how that assessment evolved is lost. In an event-sourced system, every revision is an event — SITREPCreated, SITREPRevised, SITREPApproved, SITREPDisseminated — and the full history is always queryable. After an operation, intelligence analysts can reconstruct exactly what was known and when, which is essential for battle damage assessment and intelligence process improvement.
Track update log: The kinematic history of every track in the system — every position update, every identity attribution, every confidence change — is the raw material for pattern-of-life analysis, route reconstruction, and post-operation analysis. Event sourcing makes this history intrinsic to the architecture rather than an add-on. A TrackUpdated event contains the full track state at update time, the source of the update, the analyst or algorithm responsible, and the previous state being superseded.
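One way to shape such an event record, with the metadata fields the paragraph above calls for (all field names here are illustrative assumptions):

```python
from dataclasses import dataclass
from typing import Optional
import uuid

@dataclass(frozen=True)
class TrackUpdated:
    """One entry in the track update log. Field names are assumed."""
    event_id: str                   # unique identifier for citation/dedup
    track_id: str
    state: dict                     # full track state at update time
    previous_state: Optional[dict]  # state being superseded (None if first)
    source: str                     # sensor or feed that produced the update
    recorded_by: str                # analyst or algorithm responsible
    occurred_at: str                # when the observation occurred

update = TrackUpdated(
    event_id=str(uuid.uuid4()),
    track_id="track-12345",
    state={"lat": 34.2, "lon": 45.3, "confidence": 0.9},
    previous_state={"lat": 34.1, "lon": 45.2, "confidence": 0.8},
    source="radar-feed-A",           # hypothetical source name
    recorded_by="algo/correlator-v2",  # hypothetical algorithm id
    occurred_at="2024-01-01T14:23:07Z",
)
```

Carrying the superseded state in the event makes each update self-describing for later pattern-of-life or route-reconstruction queries.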
Command decision replay for AAR (After-Action Review): An engagement involved a decision to fire at time T based on the system state at time T. The AAR needs to reconstruct the system state at time T: what tracks were visible, what threat assessments were current, what rules of engagement applied, what orders were active. Event sourcing enables this by replaying the event stream to time T and materializing the system state at that point.
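Replay-to-time-T is a truncated fold: process events in order and stop at the first event after T. A minimal sketch, using dict-shaped events for brevity (a real system would replay typed events per aggregate):

```python
def state_at(events, t):
    """Materialize system state as of time t: replay only events
    with occurred_at <= t, in timestamp order."""
    state = {}
    for e in sorted(events, key=lambda e: e["occurred_at"]):
        if e["occurred_at"] > t:
            break  # everything after t is excluded from the AAR view
        state.update(e["data"])
    return state

events = [
    {"occurred_at": "14:20:00", "data": {"position": "X1"}},
    {"occurred_at": "14:23:00", "data": {"position": "X2",
                                         "threat": "assessed-hostile"}},
    {"occurred_at": "14:30:00", "data": {"position": "X3"}},
]
view = state_at(events, "14:23:07")  # what the system knew at 14:23:07
```

The later `14:30:00` update is invisible in `view`, exactly as it was invisible to the decision-maker at time T.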
Event Store Technologies
EventStoreDB is a purpose-built event store database, designed specifically for event-sourced architectures. It provides native stream partitioning (events are organized by aggregate identifier, so all events for track ID 12345 are in stream "track-12345"), native subscription feeds for building projections, and built-in support for optimistic concurrency control. EventStoreDB is a reasonable first choice for new event-sourced defense systems if the operational constraints permit a dedicated database process.
Apache Kafka as event log: Kafka's append-only, partitioned log architecture maps closely to event sourcing requirements. A common layout uses one topic per aggregate type, with each record keyed by aggregate identifier so that all events for a given aggregate land in the same partition; Kafka guarantees ordering only within a partition, by offset, not across a topic. The consumer group mechanism enables multiple projections to consume the same event stream at independent offsets. Kafka's distributed design provides fault tolerance that EventStoreDB requires additional configuration to match. For defense systems already using Kafka for ingestion pipelines, using Kafka as the event store avoids introducing a second specialized database.
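Per-aggregate ordering in Kafka follows from keying every record with the aggregate identifier: a fixed key always hashes to the same partition. A toy sketch of that property, where `crc32` stands in for Kafka's actual default partitioner (murmur2 in the Java client) purely for illustration:

```python
import zlib

NUM_PARTITIONS = 6  # assumed topic partition count

def partition_for(aggregate_id: str, num_partitions: int = NUM_PARTITIONS) -> int:
    # Hash the record key to a partition. Kafka's real partitioner uses
    # murmur2; crc32 is a deterministic stand-in for this sketch.
    return zlib.crc32(aggregate_id.encode()) % num_partitions

# Every event for track-12345 maps to one partition, so Kafka preserves
# their relative order; ordering across partitions is not guaranteed.
partitions = {partition_for("track-12345") for _ in range(100)}
```

This is why the aggregate identifier, not the event type, must be the record key: keying by anything coarser interleaves one aggregate's events across partitions and loses their ordering.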
PostgreSQL with JSONB append tables: For systems where introducing a dedicated event store is not feasible, PostgreSQL with an append-only table and JSONB event payloads is a practical alternative. The table schema: event_id (UUID), aggregate_type (VARCHAR), aggregate_id (UUID), event_type (VARCHAR), event_data (JSONB), occurred_at (TIMESTAMPTZ), recorded_at (TIMESTAMPTZ), recorded_by (VARCHAR). A trigger or application-level constraint prevents UPDATE and DELETE operations on the table. This provides event sourcing semantics without a dedicated technology.
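The append-only constraint can be enforced in the database itself. A sketch of the schema above, using in-memory SQLite as a stand-in for PostgreSQL (in PostgreSQL the equivalent would be a `BEFORE UPDATE OR DELETE` trigger raising an exception, or revoking those privileges; the JSONB column becomes TEXT here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (
    event_id       TEXT PRIMARY KEY,
    aggregate_type TEXT NOT NULL,
    aggregate_id   TEXT NOT NULL,
    event_type     TEXT NOT NULL,
    event_data     TEXT NOT NULL,   -- JSONB in PostgreSQL
    occurred_at    TEXT NOT NULL,
    recorded_at    TEXT NOT NULL,
    recorded_by    TEXT NOT NULL
);
-- Reject UPDATE and DELETE at the database level: append-only semantics.
CREATE TRIGGER events_no_update BEFORE UPDATE ON events
BEGIN SELECT RAISE(ABORT, 'events table is append-only'); END;
CREATE TRIGGER events_no_delete BEFORE DELETE ON events
BEGIN SELECT RAISE(ABORT, 'events table is append-only'); END;
""")

conn.execute(
    "INSERT INTO events VALUES (?,?,?,?,?,?,?,?)",
    ("e1", "Track", "track-12345", "TrackPositionUpdated",
     '{"lat": 34.1, "lon": 45.2}', "T1", "T1", "correlator-v2"),
)
```

Inserts succeed; any attempt to modify or delete a recorded event fails with a constraint error, so immutability does not depend on application-code discipline alone.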
Rebuilding State: Projections, Snapshots, and Replay Performance
Rebuilding the current state of an entity by replaying its full event history from the beginning is computationally expensive for entities with long histories. A track that has received 50,000 position updates over 30 days requires replaying 50,000 events to compute its current state. This is the performance challenge of event sourcing.
The standard solution is snapshotting: periodically materializing the current state of an entity and storing it alongside the event stream. When rebuilding state, the system loads the most recent snapshot and replays only events after the snapshot timestamp. A track with 50,000 total events but a snapshot taken after 49,900 events requires replaying only 100 events. Snapshot frequency is a tunable parameter: more frequent snapshots improve read latency at the cost of more storage.
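The snapshot-plus-tail rebuild can be sketched as follows, using the 50,000-event track from the text (sequence-numbered dict events are an illustrative simplification):

```python
def rebuild(events, snapshot=None):
    """Rebuild current state from the latest snapshot plus the event
    tail recorded after it. Returns (state, events_replayed)."""
    if snapshot is None:
        state, last_seq = {}, -1          # no snapshot: full replay
    else:
        state, last_seq = dict(snapshot["state"]), snapshot["seq"]
    replayed = 0
    for seq, data in events:
        if seq <= last_seq:
            continue                      # already folded into the snapshot
        state.update(data)
        replayed += 1
    return state, replayed

# 50,000-event stream; snapshot taken after event 49,900 (seq 49,899),
# so only the 100-event tail needs replaying.
events = [(i, {"position": i}) for i in range(50_000)]
snapshot = {"seq": 49_899, "state": {"position": 49_899}}
state, replayed = rebuild(events, snapshot)
```

The snapshot is purely a cache: deleting it loses nothing, since the same state is always recomputable from the full stream.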
Projections are read-optimized views derived from the event stream. A "current track positions" projection materializes the latest position of every track by consuming TrackPositionUpdated events. A "track history by unit" projection materializes all position history for each unit. Projections can be rebuilt from scratch by replaying the full event stream, which is how you recover from a corrupted projection database without losing data.
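A "current track positions" projection reduces to a single pass over the stream, keeping the latest position per track. A minimal sketch, again with dict-shaped events as a simplifying assumption:

```python
def project_current_positions(stream):
    """'Current track positions' projection: latest position per track.
    Rebuildable from scratch at any time by replaying the full stream."""
    positions = {}
    for event in stream:
        if event["type"] == "TrackPositionUpdated":
            positions[event["track_id"]] = event["position"]
    return positions

stream = [
    {"type": "TrackPositionUpdated", "track_id": "t1", "position": (1, 1)},
    {"type": "TrackPositionUpdated", "track_id": "t2", "position": (5, 5)},
    {"type": "TrackPositionUpdated", "track_id": "t1", "position": (2, 2)},
]
positions = project_current_positions(stream)
```

Because the projection is derived, a corrupted projection database is repaired by dropping it and re-running this function over the event store; no data is lost.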
Legal and Compliance Dimensions
The legal requirements for defense data retention vary significantly by national jurisdiction and operation type. At minimum, most defense programs require: data integrity evidence (the stored record has not been modified since creation), chain of custody records (who accessed and handled each data item), and retention period compliance (data is retained for the required period and securely deleted after). Event sourcing provides data integrity evidence intrinsically — the append-only store is structurally resistant to modification. Chain of custody is provided by the event metadata (recorded_by, processing_system_id fields). Retention compliance requires explicit policy enforcement at the infrastructure level.
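The structural resistance of the append-only store can be supplemented with cryptographic evidence. One common technique (an addition here, not prescribed by the text) is hash-chaining: each event's hash covers the previous event's hash, so any modification breaks every subsequent link:

```python
import hashlib
import json

def chain_hash(prev_hash: str, event: dict) -> str:
    """Hash this event together with the previous link in the chain."""
    payload = prev_hash + json.dumps(event, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(events):
    h, hashes = "0" * 64, []   # fixed genesis value
    for e in events:
        h = chain_hash(h, e)
        hashes.append(h)
    return hashes

def verify(events, hashes):
    """Recompute the chain; any tampered event breaks the comparison."""
    h = "0" * 64
    for e, expected in zip(events, hashes):
        h = chain_hash(h, e)
        if h != expected:
            return False
    return True

events = [{"seq": 0, "type": "SITREPCreated"},
          {"seq": 1, "type": "SITREPRevised"}]
hashes = build_chain(events)
```

Storing the chain head in a separate, access-controlled location turns "the store is append-only" from an operational claim into independently checkable integrity evidence.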
Key insight: Event sourcing in defense systems is not primarily a software architecture choice — it is an operational and legal compliance decision. The append-only audit trail it produces is required for post-operation analysis, legal accountability, and intelligence process validation. Systems that lack it are structurally unable to satisfy these requirements, regardless of how well-designed they are in other respects.