Military data fusion is the computational process of combining intelligence from multiple, heterogeneous sources into a coherent, consistent, and accurate representation of the operational environment. When it works, a commander sees a single track labeled "T-80 tank, confidence 87%, last updated 14 seconds ago." When it doesn't, they see three conflicting tracks, each from a different sensor, each with a different position — and no way to know which one is correct.

Getting fusion right is one of the most technically demanding problems in defense software. The inputs are noisy, delayed, and often contradictory. The output has to be trusted enough to act on.

The JDL Model: A Framework for Fusion Levels

The Data Fusion Information Group (DFIG) model — commonly called the JDL model after its origins at the Joint Directors of Laboratories — defines fusion as a series of processing levels, each building on the previous one.

Level 0 — Sub-object data association and estimation. Raw sensor signals are processed and cleaned. Pixel-level imagery is pre-processed; acoustic data is digitized; RF signals are demodulated. The output is a stream of observations, not yet correlated with any object.

Level 1 — Object refinement. Individual observations are combined to produce tracks. Multiple radar returns from the same physical object are associated and fused into a single kinematic track. This level handles the core track fusion problem: given five radar hits over 30 seconds, estimate the object's position, velocity, and heading, with an associated uncertainty ellipse. Algorithms here include Kalman filtering, multiple hypothesis tracking (MHT), and joint probabilistic data association (JPDA).
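
The core of Level 1 is a recursive state estimator. Below is a minimal sketch of one predict/update cycle of a linear Kalman filter, assuming a constant-velocity motion model and a position-only sensor; all matrix values are illustrative defaults, not tuned parameters.

```python
import numpy as np

# Minimal 2D constant-velocity Kalman filter step. Real trackers wrap this
# core with maneuver models, gating, and track lifecycle management.

def kf_predict(x, P, F, Q):
    """Propagate the state estimate x and covariance P forward one step."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Fuse a new position measurement z into the track estimate."""
    y = z - H @ x                      # innovation (measurement residual)
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 1.0
x = np.array([0.0, 0.0, 10.0, 5.0])    # state: [px, py, vx, vy]
P = np.eye(4) * 100.0                  # initial uncertainty
F = np.array([[1, 0, dt, 0],           # constant-velocity motion model
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
Q = np.eye(4) * 0.1                    # process noise
H = np.array([[1, 0, 0, 0],            # sensor reports position only
              [0, 1, 0, 0]], dtype=float)
R = np.eye(2) * 25.0                   # measurement noise (~5 m sigma)

x, P = kf_predict(x, P, F, Q)
x, P = kf_update(x, P, np.array([10.4, 5.2]), H, R)
```

Each radar hit runs one predict/update cycle; the covariance P is what the COP ultimately renders as the track's uncertainty ellipse.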

Level 2 — Situation refinement. Individual tracks are placed in context. This level answers "what does this formation mean?" — recognizing that the three tanks moving in a wedge pattern with artillery behind them constitute a breach attempt, not a patrol. Level 2 fusion requires correlating tracks with doctrine, order of battle databases, and historical patterns.
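
A toy sketch of how a Level 2 engine might match a track cluster against a doctrinal template follows; the thresholds, platform labels, and the rule itself are invented for illustration.

```python
from dataclasses import dataclass

# Toy Level 2 rule: test a cluster of fused tracks against a doctrinal
# template. All thresholds and labels are invented for illustration.

@dataclass
class Track:
    platform: str       # e.g. "tank", "artillery"
    heading_deg: float
    speed_kmh: float

def looks_like_breach_attempt(cluster: list[Track]) -> bool:
    tanks = [t for t in cluster if t.platform == "tank"]
    arty  = [t for t in cluster if t.platform == "artillery"]
    if len(tanks) < 3 or not arty:
        return False
    # Armor advancing on a common axis (headings within 15 degrees)...
    headings = [t.heading_deg for t in tanks]
    aligned = max(headings) - min(headings) < 15.0
    # ...at assault speed, with artillery in support.
    moving = all(t.speed_kmh > 10.0 for t in tanks)
    return aligned and moving
```

A real Level 2 engine evaluates many such templates against order of battle, terrain, and historical pattern data rather than a single hard-coded rule.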

Level 3 — Threat refinement. The current situation is projected forward: if this formation continues on its present course and speed, what will it threaten in 20 minutes? This level produces threat assessments, not just track data.
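
A minimal dead-reckoning sketch of that projection, assuming straight-line motion at constant speed; real systems use terrain- and doctrine-aware movement models, and the function names and thresholds here are illustrative.

```python
import math

# Level 3 sketch: extrapolate a track forward and test whether its projected
# path comes within range of a protected asset over a 20-minute horizon.

def project(pos_xy, heading_deg, speed_mps, dt_s):
    """Project a position dt_s seconds ahead along the current heading
    (heading measured clockwise from north, x east, y north)."""
    h = math.radians(heading_deg)
    return (pos_xy[0] + speed_mps * dt_s * math.sin(h),
            pos_xy[1] + speed_mps * dt_s * math.cos(h))

def threatens(track_pos, heading_deg, speed_mps, asset_pos, radius_m,
              horizon_s=1200, step_s=60):
    """Does the projected path pass within radius_m of the asset?"""
    for t in range(0, horizon_s + 1, step_s):
        p = project(track_pos, heading_deg, speed_mps, t)
        if math.dist(p, asset_pos) <= radius_m:
            return True
    return False
```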

Data Sources and Their Software Challenges

SIGINT feeds arrive as structured intercepts or raw RF captures. They carry timing uncertainty (intercept time vs transmission time can differ) and positional ambiguity when geolocation data is absent. SIGINT inputs often need format normalization from proprietary collection system outputs before they can enter the fusion pipeline.

IMINT products are the output of imagery exploitation — either automated (computer vision detections from UAV feeds) or manual (imagery analyst annotations). The challenge is timestamp accuracy: an image acquired at 09:47 showing a vehicle at coordinates X is only useful if the fusion engine knows it was acquired at 09:47, not processed and submitted at 11:15.
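
A sketch of that distinction, with assumed field names; the only non-negotiable point is that the fusion engine keys on the sensor observation time, never the workflow submission time.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative IMINT detection record. Field names are assumptions; the
# fusion engine must key on acquired_at (when the sensor saw the object),
# never submitted_at (when the exploitation product reached the pipeline).

@dataclass
class ImintDetection:
    object_type: str
    lat: float
    lon: float
    acquired_at: datetime     # sensor observation time; use this for fusion
    submitted_at: datetime    # workflow time; useful for latency metrics only

det = ImintDetection(
    object_type="vehicle",
    lat=48.45, lon=35.05,
    acquired_at=datetime(2024, 3, 1, 9, 47, tzinfo=timezone.utc),
    submitted_at=datetime(2024, 3, 1, 11, 15, tzinfo=timezone.utc),
)
# Fusing at submitted_at would place a 09:47 observation 88 minutes late,
# corrupting the kinematic track it associates with.
```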

HUMINT reports are structured intelligence reports from human sources. They are typically low-frequency, high-confidence, and carry significant positional uncertainty. They are rarely directly fusible with kinematic track data but are essential for building the order-of-battle context that Level 2 fusion requires.

EW sensor feeds provide electronic emissions data — radar parameter sets, communications frequencies, waveform signatures. When correlated with track data, they enable platform identification: the track moving at 60 km/h whose emissions match the signature associated with a BMP-2 becomes a high-confidence BMP-2 identification.
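
A toy emitter-library lookup illustrates the correlation step; the emitter label, frequency band, and platform mapping below are all invented, and real libraries match on measured parameter sets (frequency, PRI, pulse width, waveform) with tolerance windows.

```python
# Toy emitter-to-platform lookup for EW correlation. All values invented.

EMITTER_LIBRARY = [
    # (emitter label, frequency band in MHz, associated platform)
    ("vhf_set_type_a", (30.0, 88.0), "BMP-2"),
]

def identify_platform(freq_mhz: float) -> str | None:
    """Return a candidate platform for an observed emission, if any."""
    for name, (lo, hi), platform in EMITTER_LIBRARY:
        if lo <= freq_mhz <= hi:
            return platform
    return None
```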

UAV video feeds produce a continuous stream of positional and visual data. The software challenge is extracting structured tracks from video — which requires real-time computer vision inference — and correlating those tracks with existing data, accounting for the fact that the same vehicle observed by a UAV and detected by a radar may generate two separate tracks in the fusion engine.

Normalization Challenges: The Hidden Complexity

Before any fusion algorithm runs, all incoming data must be normalized. This is unglamorous work that consumes a disproportionate share of development time in real fusion systems.

Coordinate system normalization: sensors report positions in WGS84, MGRS, local grids, or formats that differ in their vertical datum. All must be transformed to a canonical representation before correlation is possible. A 10-meter error introduced by a coordinate transformation is operationally significant.
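
One reasonable canonical representation is Earth-centered Earth-fixed (ECEF) Cartesian coordinates. A sketch using the pyproj library (an assumption; any well-tested geodesy library works), converting geodetic WGS84 to ECEF via standard EPSG codes:

```python
from pyproj import Transformer  # assumes the pyproj package is available

# Normalize geodetic WGS84 (lon/lat/alt) to ECEF (EPSG:4978) as a canonical
# Cartesian frame for correlation. The key design rule: every sensor input
# is converted exactly once, at ingestion, through one well-tested path.

to_ecef = Transformer.from_crs("EPSG:4326", "EPSG:4978", always_xy=True)

def normalize_position(lon_deg: float, lat_deg: float, alt_m: float):
    """Convert a geodetic position to canonical ECEF meters."""
    return to_ecef.transform(lon_deg, lat_deg, alt_m)

x, y, z = normalize_position(35.05, 48.45, 120.0)
```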

Timestamp normalization: different sensors use GPS time, UTC, local time, or sequence numbers. The fusion engine needs authoritative timestamps in a single reference frame. A GPS-synchronized timestamp is the standard, but not all legacy sensors support it.
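
GPS time illustrates why this is subtle: it is not adjusted for leap seconds, so it currently runs 18 seconds ahead of UTC. A conversion sketch follows; the hard-coded leap-second offset is a simplification, and a production system reads it from a maintained table.

```python
from datetime import datetime, timedelta, timezone

# Convert GPS (week, time-of-week) to UTC. GPS time is not adjusted for
# leap seconds, so it runs ahead of UTC; the 18 s offset is valid since
# 2017-01-01 and must come from a maintained table in production.

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
GPS_UTC_LEAP_SECONDS = 18

def gps_to_utc(week: int, tow_seconds: float) -> datetime:
    """Map a GPS week/time-of-week timestamp into the UTC reference frame."""
    return (GPS_EPOCH
            + timedelta(weeks=week, seconds=tow_seconds)
            - timedelta(seconds=GPS_UTC_LEAP_SECONDS))
```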

Classification and caveat handling: fusion data crosses classification boundaries. A track built from SIGINT at one classification level and radar data at a lower level has a composite classification. The fusion engine must propagate classification correctly and enforce need-to-know at query time, not at ingestion time.
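
A minimal sketch of composite classification as max-of-levels plus union-of-caveats; the level names are generic placeholders, and real systems also handle compartments, releasability, and downgrade rules.

```python
from enum import IntEnum

# Composite classification: the fused track inherits the most restrictive
# contributing level and the union of all caveats. Placeholder levels only.

class Level(IntEnum):
    UNCLASSIFIED = 0
    CONFIDENTIAL = 1
    SECRET = 2
    TOP_SECRET = 3

def composite_classification(sources):
    """sources: iterable of (Level, caveat_set) per contributing input."""
    levels, caveats = zip(*sources)
    return max(levels), set().union(*caveats)

level, caveats = composite_classification([
    (Level.SECRET, {"NOFORN"}),        # SIGINT contribution
    (Level.CONFIDENTIAL, set()),       # radar contribution
])
# -> (Level.SECRET, {"NOFORN"})
```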

Correlation and Deconfliction

The core technical problem in Level 1 fusion is deciding whether two observations, from two different sensors, represent the same physical object. This is the data association problem. The standard approach is a gating function (eliminate candidates outside a maximum statistical distance threshold) followed by a scoring step (e.g., global nearest neighbor, JPDA, or MHT) that assigns each surviving candidate pairing a correspondence probability.
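
A sketch of gate-then-score association using a Mahalanobis distance gate and greedy nearest-neighbor assignment, the simplest possible scorer; MHT and JPDA replace the final step with multi-hypothesis reasoning. The gate value and data shapes are illustrative.

```python
import numpy as np

# Gate-then-score association: reject pairings whose Mahalanobis distance
# exceeds a chi-square gate, then greedily assign each observation to its
# nearest surviving track. Greedy assignment can double-book a track; real
# systems enforce one-to-one assignment (e.g., the Hungarian algorithm).

CHI2_GATE_2D = 9.21  # 99% gate for a 2-DOF chi-square distribution

def mahalanobis2(z, track_pos, S):
    """Squared statistical distance between observation z and a track."""
    d = z - track_pos
    return float(d @ np.linalg.inv(S) @ d)

def associate(observations, tracks, S):
    """Return {obs_index: track_index} for pairings that pass the gate."""
    assignments = {}
    for i, z in enumerate(observations):
        scored = [(mahalanobis2(z, t, S), j) for j, t in enumerate(tracks)]
        scored = [(d, j) for d, j in scored if d <= CHI2_GATE_2D]
        if scored:
            assignments[i] = min(scored)[1]   # nearest surviving track wins
    return assignments
```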

Deconfliction — resolving the case where two existing tracks are actually the same object — is harder. It requires detecting persistent track duplicates, merging their histories, and reconciling attribute conflicts. Poor deconfliction leads to "ghost" tracks: objects that appear on the common operational picture (COP) but do not exist, or objects that exist but appear twice.
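
A heuristic sketch of duplicate detection: accumulate evidence that two live tracks are moving in lockstep, and flag a merge once that evidence persists. The thresholds and reset policy are assumptions.

```python
# Deconfliction heuristic: flag two live tracks as duplicates when they
# stay within a small separation for N consecutive update cycles. The
# distance threshold, cycle count, and reset rule are illustrative.

DUPLICATE_DISTANCE_M = 50.0
CONFIRMING_CYCLES = 5

def update_duplicate_score(scores, a_id, b_id, separation_m):
    """Return True once two tracks have moved in lockstep long enough
    to justify merging; scores maps track-id pairs to evidence counts."""
    key = (min(a_id, b_id), max(a_id, b_id))
    if separation_m <= DUPLICATE_DISTANCE_M:
        scores[key] = scores.get(key, 0) + 1
    else:
        scores.pop(key, None)   # one miss resets the evidence
    return scores.get(key, 0) >= CONFIRMING_CYCLES
```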

Key insight: In most operational fusion systems, the biggest source of error is not the fusion algorithm itself — it is timestamp inaccuracy and coordinate normalization errors introduced at the data ingestion layer. Fix the plumbing before tuning the algorithms.

How Fusion Feeds the COP

The fusion engine produces the authoritative track store that the COP renders. The COP layer queries the track store via API, subscribes to update events over WebSocket, and renders changes incrementally. The quality of the COP is entirely dependent on the quality of the fusion layer beneath it.

A well-designed fusion-to-COP pipeline publishes track update events (new track, track updated, track dropped) as a stream. The COP subscribes and applies deltas — not full-state snapshots — to maintain a responsive display even when the track database contains tens of thousands of objects. Latency from sensor observation to COP display should be measurable in single-digit seconds for tactical systems.
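
A sketch of the delta-application logic on the COP side; the event shapes ("track_new", "track_updated", "track_dropped") and field names are assumptions about what such a stream might carry.

```python
# Apply one fusion-engine event to the COP's local track state. The display
# is maintained by applying small deltas, never by reloading full snapshots.

def apply_track_event(track_store: dict, event: dict) -> None:
    """Mutate the local track store from a single update event."""
    kind, track_id = event["type"], event["track_id"]
    if kind == "track_new":
        track_store[track_id] = event["track"]
    elif kind == "track_updated":
        track_store[track_id].update(event["changed_fields"])  # partial delta
    elif kind == "track_dropped":
        track_store.pop(track_id, None)

# A WebSocket consumer loop would call apply_track_event for each message
# and re-render only the affected symbols, keeping the display responsive
# even with tens of thousands of tracks in the store.
```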