When defense software teams talk about data fusion architecture, they almost invariably reference the JDL model — whether they use the name or not. The JDL (Joint Directors of Laboratories) model, originally developed in 1985 and substantially revised in the 1990s and again in 2004, provides the canonical decomposition of data fusion into a hierarchy of processing levels. Understanding what each level actually requires in software terms is essential for designing fusion systems that work in practice.
This article walks through each level with concrete implementation detail — not just the theoretical definitions that appear in the academic literature, but the specific software components, algorithms, and data structures that implement each level in operational defense systems.
Origin and Structure of the JDL Model
The Joint Directors of Laboratories Data Fusion Subpanel published the original model in 1985 as a framework for thinking about the fusion problem in defense intelligence systems. The initial model defined four levels (1 through 4). The 1998 revision by Steinberg, Bowman, and White added Level 0 (sub-object assessment), and Blasch and Plano's subsequent extension added Level 5 (user refinement), producing the six-level model (0 through 5) that better captures the full processing pipeline from raw signals to actionable intelligence.
The model is not a software architecture specification — it is a conceptual taxonomy. Different implementations place level boundaries in different places and may not implement all levels. What it provides is a shared vocabulary for discussing where in the processing chain a particular component operates and what its inputs and outputs are.
Level 0: Sub-Object Data Assessment
Level 0 addresses the preprocessing of raw sensor data before any object-level processing begins. The inputs are raw physical measurements — radar returns, acoustic samples, infrared detector arrays, digitized RF spectra. The outputs are structured observations that describe detections: a pixel cluster in an IR image, a pulse in a radar return, an energy spike in a frequency band.
In software terms, Level 0 comprises signal processing and feature extraction routines. For radar, this includes pulse compression, Doppler processing (to extract range-rate), CFAR (constant false alarm rate) detection thresholding, and extraction of detection parameters: range, azimuth, elevation, Doppler velocity, and RCS estimate. For imagery, it includes object detection inference (typically a deep learning model), producing bounding boxes with class labels and confidence scores. For RF signals, it includes channelization, energy detection, and modulation parameter extraction.
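To make the CFAR step concrete, here is a minimal cell-averaging CFAR (CA-CFAR) sketch over a one-dimensional power profile. This is a toy under simplifying assumptions: operational radars typically run two-dimensional windows over range-Doppler maps and use variants such as OS-CFAR near clutter edges.

```python
import numpy as np

def ca_cfar(power, num_train=8, num_guard=2, scale=5.0):
    """Cell-averaging CFAR over a 1-D power profile (e.g. one range sweep).

    For each cell under test (CUT), the noise level is estimated from
    num_train training cells on each side, skipping num_guard guard cells
    adjacent to the CUT. A detection is declared when the CUT power
    exceeds scale times the local noise estimate.
    """
    n = len(power)
    half = num_guard + num_train
    detections = []
    for cut in range(half, n - half):
        # Leading and lagging training windows, excluding guard cells.
        lead = power[cut - half : cut - num_guard]
        lag = power[cut + num_guard + 1 : cut + half + 1]
        noise = np.mean(np.concatenate([lead, lag]))
        if power[cut] > scale * noise:
            detections.append(cut)
    return detections
```

The `scale` factor sets the false-alarm rate: because the threshold adapts to the local noise estimate rather than being fixed, the false-alarm rate stays roughly constant as the noise floor varies across the sweep.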
A critical output of Level 0 is uncertainty quantification. Every detection must carry not just a measurement but a measurement uncertainty: the 1-sigma uncertainty in range, the 1-sigma uncertainty in azimuth. These uncertainties propagate through Level 1 fusion algorithms and are essential for correct track quality estimation. A Level 0 processor that produces detections without associated uncertainties will produce a Level 1 system whose track quality estimates are meaningless.
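A minimal sketch of what such an uncertainty-carrying detection record might look like; the field names are illustrative, not a standard interface:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class RadarDetection:
    """One Level 0 output record: measurements paired with 1-sigma uncertainties."""
    range_m: float
    azimuth_rad: float
    doppler_mps: float
    sigma_range_m: float      # 1-sigma range uncertainty
    sigma_azimuth_rad: float  # 1-sigma azimuth uncertainty
    sigma_doppler_mps: float  # 1-sigma Doppler uncertainty

    def measurement_covariance(self):
        """Diagonal measurement covariance R consumed by the Level 1 filter update."""
        return np.diag([self.sigma_range_m ** 2,
                        self.sigma_azimuth_rad ** 2,
                        self.sigma_doppler_mps ** 2])
```

The point of the design is that the uncertainty fields are not optional: a Level 1 Kalman filter needs the measurement covariance R to weight each detection correctly against the predicted track state.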
Level 0 is the most computationally intensive level. For a single wideband radar, CFAR detection may process hundreds of thousands of range-azimuth cells per beam dwell. For a real-time video feed, object detection inference runs on every frame at 30 fps. This level typically runs on dedicated DSP hardware or GPU-accelerated processing nodes, not on general-purpose server CPUs.
Level 1: Object Refinement
Level 1 is the track fusion level — the most mathematically demanding and the most thoroughly studied in the academic literature. Its input is the stream of detections from Level 0 (and from multiple sensors). Its output is a set of tracks: state estimates representing physical objects, each with a position, velocity, heading, and associated covariance matrix.
The core Level 1 problem has two components: data association and state estimation.
Data association is the problem of deciding which detection, from which sensor, corresponds to which existing track — or whether it represents a new object. The naive approach (assign each detection to the nearest existing track) fails under clutter, sensor noise, and crossing tracks. Standard algorithms include:
Nearest-neighbor (NN): simple but fails under high clutter or close track separation. Adequate for sparse, low-noise environments.
Joint Probabilistic Data Association (JPDA): computes association probabilities across all detections and tracks jointly, handling clutter and ambiguity by maintaining soft associations. Better than NN under moderate clutter. Computationally expensive as track count grows.
Multiple Hypothesis Tracking (MHT): maintains multiple hypotheses about which detections correspond to which tracks, pruning low-probability hypotheses over time. Best performance in complex scenarios; highest computational cost. Used in air defense and air traffic management systems.
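The simplest of the three can be sketched directly. The following is a toy greedy nearest-neighbor associator with a chi-square gate on the Mahalanobis distance; JPDA and MHT replace the hard assignment below with soft association probabilities or multi-hypothesis bookkeeping, respectively.

```python
import numpy as np

def associate_nn(tracks, detections, gate=9.21):
    """Greedy gated nearest-neighbor data association.

    tracks: list of (predicted_measurement, innovation_covariance S).
    detections: list of measurement vectors.
    gate=9.21 is the 99% chi-square threshold for 2 degrees of freedom.
    Returns {track_index: detection_index}; unassigned detections are
    left over as candidates for track initiation.
    """
    pairs = []
    for ti, (z_pred, S) in enumerate(tracks):
        S_inv = np.linalg.inv(S)
        for di, z in enumerate(detections):
            nu = z - z_pred                 # innovation
            d2 = float(nu @ S_inv @ nu)     # squared Mahalanobis distance
            if d2 <= gate:
                pairs.append((d2, ti, di))
    pairs.sort()                            # closest pairs first
    assigned, used_t, used_d = {}, set(), set()
    for d2, ti, di in pairs:
        if ti not in used_t and di not in used_d:
            assigned[ti] = di
            used_t.add(ti)
            used_d.add(di)
    return assigned
```

Note that the gate is applied in Mahalanobis (statistical) distance, not Euclidean distance: a detection far from the predicted position may still gate if the track's innovation covariance is large, which is exactly the behavior that makes the Level 0 uncertainties indispensable.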
State estimation is the problem of updating the track state estimate given a new associated detection. The standard algorithm is the Kalman filter and its nonlinear extensions. The Kalman filter provides the optimal linear minimum mean square error estimate under Gaussian noise. For target motion that is nonlinear (e.g., coordinated turns), the Extended Kalman Filter (EKF) or Unscented Kalman Filter (UKF) is used. Around the filter, the tracker also manages track initiation (creating a new track when a cluster of detections suggests a new object) and track termination (dropping a track when it has received no updates for a configurable period).
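The predict/update cycle at the heart of the filter fits in a few lines. This is the standard linear Kalman filter, shown here with a one-dimensional constant-velocity motion model as the usage example; the model matrices are illustrative.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Predict step: propagate state x and covariance P through motion model F."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Update step: fold measurement z (covariance R) into the state estimate."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ (z - H @ x)          # correct state toward measurement
    P_new = (np.eye(len(x)) - K @ H) @ P # shrink covariance accordingly
    return x_new, P_new

# Usage: 1-D constant-velocity model, state = [position, velocity], dt = 1.
F = np.array([[1.0, 1.0], [0.0, 1.0]])  # position += velocity each step
Q = 0.01 * np.eye(2)                    # process noise
H = np.array([[1.0, 0.0]])              # sensor measures position only
R = np.array([[0.5]])                   # measurement noise variance
```

Note how the update step consumes exactly the measurement covariance R that Level 0 must supply: an overconfident R makes the filter chase noise, while an underconfident R makes it sluggish.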
Multi-sensor Level 1 fusion merges tracks from independent sensors — a radar track and an EO/IR track that likely represent the same object are associated and merged into a single fused track with better state estimate quality than either sensor alone could provide.
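One widely used approach to the merge step, when the cross-covariance between the two sensor tracks is unknown (common, since both trackers observe the same target dynamics), is covariance intersection. A minimal sketch:

```python
import numpy as np

def fuse_tracks_ci(x1, P1, x2, P2, omega=0.5):
    """Covariance intersection fusion of two track estimates.

    Unlike a naive covariance-weighted average, CI remains consistent
    (never overconfident) even when the correlation between the two
    estimates is unknown. omega in (0, 1) weights the two information
    contributions; a real system would optimize omega per fusion event,
    e.g. to minimize the trace of the fused covariance.
    """
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(omega * I1 + (1 - omega) * I2)
    x = P @ (omega * I1 @ x1 + (1 - omega) * I2 @ x2)
    return x, P
```

With equal weights and equal covariances, the fused state lands halfway between the two inputs, but the fused covariance does not shrink below the inputs' covariance, which is the conservatism that makes CI safe under unknown correlation.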
Level 2: Situation Refinement
Level 2 places individual tracks in operational context. Its input is the Level 1 track picture — a set of tracked objects with kinematic state estimates. Its output is a situation picture: tracks with attributed identities, classified intent, and understood relationships.
Level 2 includes several sub-processes:
Platform identification: correlating a track's kinematic parameters and associated sensor signatures against a database of known platforms. A track whose velocity profile, maneuver characteristics, and associated radar emission match a BMP-3 profile receives a BMP-3 identity attribution. This requires a platform parameter database (PPD) and an attribution algorithm that handles partial matches and conflicting evidence.
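A deliberately naive attribution scorer illustrates the partial-match requirement. All names and the crisp min/max ranges here are illustrative; a real PPD stores parameter distributions and the scorer combines evidence probabilistically.

```python
def attribute_platform(track_features, library):
    """Score each library entry by the fraction of its observed parameter
    ranges the track falls inside.

    track_features: {feature: observed value}, e.g. {"max_speed_mps": 18.0}.
    library: {platform: {feature: (lo, hi)}} -- a toy platform parameter
    database (PPD). Features the track has not yet exhibited are skipped,
    so a partial observation still yields a usable score.
    Returns (best_platform, score) with score = matched / tested.
    """
    best, best_score = None, -1.0
    for platform, ranges in library.items():
        tested = matched = 0
        for feat, (lo, hi) in ranges.items():
            if feat not in track_features:
                continue  # unobserved: counts neither for nor against
            tested += 1
            if lo <= track_features[feat] <= hi:
                matched += 1
        score = matched / tested if tested else 0.0
        if score > best_score:
            best, best_score = platform, score
    return best, best_score
```

Even this toy exposes the hard part of the real problem: how to weight conflicting evidence (one feature matches, another does not) and how confident an attribution should be when only a fraction of the library's parameters have been observed.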
Relationship analysis: identifying tactical relationships between tracks. Two tracks maintaining consistent spacing and heading over time are likely part of the same formation. A group of tracks converging on a point at the same time and speed suggests a deliberate tactical maneuver.
Pattern of life analysis: detecting deviations from baseline behavior for known entities. A vehicle that normally parks at coordinates X every night but is absent tonight is an anomaly of potential intelligence value. This requires temporal baseline modeling, which is computationally expensive but essential for priority targeting intelligence.
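The baseline-modeling idea can be reduced to a toy per-hour presence frequency model. This is a sketch only; operational pattern-of-life systems model far richer temporal and spatial features than presence-by-hour.

```python
from collections import defaultdict

class PresenceBaseline:
    """Toy pattern-of-life baseline for one entity: presence frequency per hour."""

    def __init__(self):
        self.seen = defaultdict(int)   # hour -> observations where present
        self.total = defaultdict(int)  # hour -> total observations

    def observe(self, hour, present):
        """Record one observation for the given hour of day."""
        self.total[hour] += 1
        self.seen[hour] += int(present)

    def is_anomalous(self, hour, present, threshold=0.9):
        """Flag when the observed state contradicts a strong (> threshold) baseline."""
        if self.total[hour] == 0:
            return False               # no baseline established yet
        p = self.seen[hour] / self.total[hour]
        if present:
            return p < 1 - threshold   # present where it almost never is
        return p > threshold           # absent where it almost always is
```

The `threshold` parameter encodes the tradeoff the text describes: a lower threshold surfaces more candidate anomalies at the cost of more false positives for the analyst to triage.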
Level 2 requires access to contextual databases that are not part of the sensor processing pipeline: order of battle databases, equipment parameter libraries, terrain analysis products, historical behavior records. The software architecture must provide efficient query access to these knowledge bases from the real-time processing pipeline without introducing unacceptable latency.
Level 3: Impact / Threat Refinement
Level 3 projects the current situation forward in time to assess threats. Its input is the Level 2 situation picture. Its output is threat assessments: predictions of future enemy actions and their potential impact on friendly operations.
In software terms, Level 3 includes course-of-action prediction algorithms. Given a formation of armored vehicles at a known position, moving at a known velocity toward friendly lines, what is the probability that it will breach the defensive line at sector A versus B within the next 30 minutes? This requires route analysis (computing likely axes of advance through terrain), capability analysis (what can this formation do given its composition), and intent modeling.
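A heavily simplified sketch of the sector-scoring idea, ignoring terrain and capability analysis entirely and weighting candidate breach points only by how well the formation's current heading aligns with the bearing to each. All names and the scoring function are illustrative assumptions, not an operational method.

```python
import math

def sector_breach_scores(pos, vel, sectors):
    """Toy course-of-action scorer over candidate breach points.

    pos, vel: (x, y) position and velocity of the formation.
    sectors: {name: (x, y)} candidate breach points.
    Scores each sector by heading alignment alone and normalizes so the
    scores sum to 1; a real implementation would also weight terrain
    trafficability, formation capability, and doctrinal templates.
    """
    heading = math.atan2(vel[1], vel[0])
    raw = {}
    for name, (sx, sy) in sectors.items():
        bearing = math.atan2(sy - pos[1], sx - pos[0])
        # Angular misalignment folded into [0, pi].
        misalign = abs(math.remainder(bearing - heading, 2 * math.pi))
        raw[name] = math.exp(-misalign)  # aligned axes of advance score higher
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()}
```

Even a toy like this makes the presentation problem concrete: the output is a normalized score, not a calibrated probability, and displaying it as "85% likely to breach sector A" would invite exactly the anchoring failure discussed below.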
Level 3 is the least algorithmically well-defined level. Commercial implementations often use rule-based expert systems, Bayesian networks, or more recently, machine learning models trained on historical engagement data. The output requires careful presentation — threat assessments with artificially high confidence can anchor analyst thinking and cause them to discount contradicting evidence.
Levels 4 and 5: Process and User Refinement
Level 4 (Process Refinement) is the meta-level that monitors the fusion process itself and adapts collection to improve fusion quality. If Level 1 track quality is degraded because a radar sensor is operating at reduced range, Level 4 should request repositioning of a UAV sensor to compensate. In software, this is implemented as a sensor management module that receives fusion quality metrics and outputs sensor tasking requests.
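A skeletal version of that feedback loop, mapping fusion quality metrics to tasking requests. The interface here (quality scores in [0, 1], per-sensor capacity counts, request dictionaries) is an illustrative assumption, not an operational tasking protocol.

```python
def sensor_tasking(track_qualities, sensors, quality_floor=0.6):
    """Toy Level 4 loop: emit one tasking request per degraded track.

    track_qualities: {track_id: quality score in [0, 1]} from Level 1.
    sensors: {sensor_id: remaining tasking capacity} candidates to cue.
    Worst tracks are served first; each request is routed to the sensor
    with the most spare capacity.
    """
    requests = []
    capacity = dict(sensors)
    for track_id, q in sorted(track_qualities.items(), key=lambda kv: kv[1]):
        if q >= quality_floor:
            continue                       # track quality is acceptable
        best = max(capacity, key=capacity.get)
        if capacity[best] <= 0:
            break                          # no tasking capacity left anywhere
        capacity[best] -= 1
        requests.append({"track": track_id,
                         "sensor": best,
                         "reason": "low_quality"})
    return requests
```

The essential architectural point is the direction of data flow: quality metrics flow up from Level 1, and tasking requests flow back down to the sensors, closing the loop that the lower levels alone cannot close.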
Level 5 (User Refinement), the most recent addition to the model, recognizes that human analysts interact with the fusion system and that their queries and attention can improve or degrade fusion quality. A user who focuses attention on a particular area of the battlespace implicitly provides information about which tracks and events are important — information that should feed back into Level 4 sensor management priorities.
Key insight: In practice, most operational defense fusion systems fully implement Levels 0–2, partially implement Level 3, and implement Levels 4–5 only in research or high-end programs. Designing a system to the full JDL model is a reasonable architectural goal, but delivery teams should clearly scope which levels are in-scope for each program increment.
The JDL model's greatest value is not as a design blueprint but as a communication tool. When a fusion system produces poor situational awareness, the model helps diagnose where the failure lies: is it a Level 0 calibration problem producing biased detections? A Level 1 data association problem producing ghost tracks? A Level 2 attribution error mislabeling friendly vehicles as hostile? Each level has distinct failure modes, and the model provides a shared vocabulary for discussing them.