Offline maps are not optional in tactical applications. Operators in contested or remote environments cannot rely on tile servers. The map package must be on the device, queryable without internet access, and small enough to fit on the available storage. Two tile packaging formats dominate tactical use: MBTiles, the established SQLite-based standard, and PMTiles, a newer single-file format designed for cloud-optimized random access that also works well for embedded deployment.
Choosing between them — and understanding how to generate, deliver, and update offline map packages — is a practical engineering decision with significant operational consequences. An operator who runs out of storage because the map package was unnecessarily large, or who cannot see a particular area because the zoom levels packaged were wrong, has been failed by the logistics of offline map delivery.
MBTiles Format: SQLite Container Structure
MBTiles is an open specification maintained by Mapbox that packages map tiles — raster or vector — in a SQLite database. The database schema is minimal: two required tables and one optional.
The tiles table stores the actual tile data: zoom_level INTEGER, tile_column INTEGER, tile_row INTEGER, and tile_data BLOB. The tile coordinate system uses the TMS (Tile Map Service) convention, where the Y-axis is inverted relative to the more common XYZ convention used by web map services — tile_row = (2^zoom - 1) - y. This inversion is a common source of bugs when integrating MBTiles with map libraries that use XYZ coordinates.
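The TMS flip can be wrapped in a small helper so the rest of the application thinks purely in XYZ coordinates. A minimal sketch (the function names `xyz_to_tms_row` and `read_tile` are illustrative, not part of any library):

```python
import sqlite3

def xyz_to_tms_row(zoom: int, y: int) -> int:
    """Convert an XYZ (web map) row to the TMS row stored in MBTiles.

    The transform is its own inverse, so it also converts TMS back to XYZ.
    """
    return (2 ** zoom - 1) - y

def read_tile(db: sqlite3.Connection, zoom: int, x: int, y: int):
    """Look up one tile by XYZ coordinates, flipping Y to TMS for the query."""
    row = db.execute(
        "SELECT tile_data FROM tiles "
        "WHERE zoom_level=? AND tile_column=? AND tile_row=?",
        (zoom, x, xyz_to_tms_row(zoom, y)),
    ).fetchone()
    return row[0] if row else None
```

Doing the flip in exactly one place is the simplest defense against the inversion bugs described above.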
The metadata table stores key-value pairs describing the tileset: name (display name), type (overlay or baselayer), version, description, format (png, jpg, or pbf for vector tiles), bounds (bounding box in WGS84), center (default view), and minzoom/maxzoom. For vector tile MBTiles, one additional metadata key is required: json, whose value is a TileJSON-style document containing a vector_layers array that defines each layer's name and attribute fields.
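Reading the metadata table into a dictionary, and unpacking the vector layer schema from the json key, can be sketched as follows (the helper name `read_metadata` is this article's, not a library API):

```python
import json
import sqlite3

def read_metadata(db: sqlite3.Connection) -> dict:
    """Load the MBTiles metadata table into a plain dict.

    For vector tilesets (format == 'pbf'), the 'json' key holds a JSON
    document whose 'vector_layers' array describes each layer's fields;
    this sketch surfaces it as a parsed Python list for convenience.
    """
    meta = dict(db.execute("SELECT name, value FROM metadata"))
    if meta.get("format") == "pbf" and "json" in meta:
        meta["vector_layers"] = json.loads(meta["json"]).get("vector_layers", [])
    return meta
```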
Performance characteristics of MBTiles depend heavily on the access pattern. Sequential tile reads — the bursts of adjacent-tile loads produced by panning and zooming — are fast because SQLite's B-tree indexes are efficient for range queries. The compound index on (zoom_level, tile_column, tile_row) is the critical performance index — without it, individual tile lookups degrade to full table scans as the database grows. A 10GB MBTiles file covering a country at zoom levels 0–16 with this index performs adequately on modern mobile devices; without it, the same file may take 200–400ms per tile query.
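Whether a given MBTiles file actually has the compound index can be verified with SQLite's query planner rather than by benchmarking. A sketch, assuming the conventional index name `tile_index`:

```python
import sqlite3

# Schema with the compound index the spec relies on (idempotent DDL).
TILES_SCHEMA = """
CREATE TABLE IF NOT EXISTS tiles (
    zoom_level INTEGER, tile_column INTEGER, tile_row INTEGER, tile_data BLOB);
CREATE UNIQUE INDEX IF NOT EXISTS tile_index
    ON tiles (zoom_level, tile_column, tile_row);
"""

def lookup_uses_index(db: sqlite3.Connection) -> bool:
    """Ask the query planner whether a single-tile lookup is an index
    search rather than a full table scan."""
    plan = db.execute(
        "EXPLAIN QUERY PLAN SELECT tile_data FROM tiles "
        "WHERE zoom_level=? AND tile_column=? AND tile_row=?",
        (0, 0, 0),
    ).fetchall()
    # The last column of each plan row is the human-readable detail string.
    return any("tile_index" in row[-1] for row in plan)
```

Running this check when a package is ingested catches un-indexed files before they reach an operator's device.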
PMTiles: Single-File Random-Access Format
PMTiles was developed by Protomaps as a cloud-optimized alternative to MBTiles, designed to serve tiles directly from object storage (S3, GCS, Azure Blob) using HTTP range requests — without a tile server. A single PMTiles file contains all tiles plus an internal index that allows any tile to be located with at most two HTTP range requests regardless of the file's total size.
For tactical embedded deployment, the cloud-optimization properties of PMTiles translate into different advantages: no SQLite overhead, no SQLite lock contention when multiple processes access the file simultaneously, and a simpler read implementation since the format is designed around sequential byte-range reads rather than SQL queries. The PMTiles specification is fully open and implementations exist for Android, iOS, and JavaScript.
The PMTiles internal structure consists of a fixed 127-byte header, a root directory (the top-level index), and leaf directories (sub-indexes for large files). Each directory entry maps a tile coordinate (encoded as a Hilbert curve index for spatial locality) to a byte offset and length within the file. The Hilbert curve encoding ensures that spatially adjacent tiles are stored close together in the file, improving read-ahead cache performance when an operator pans continuously across the map.
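The Hilbert addressing scheme can be made concrete: a PMTiles tile ID is the count of all tiles at lower zoom levels plus the Hilbert curve index of (x, y) within the tile's own zoom level. A sketch of that computation (using the standard Hilbert xy-to-distance algorithm; consult the PMTiles specification for the authoritative definition):

```python
def zxy_to_tileid(z: int, x: int, y: int) -> int:
    """PMTiles-style tile ID: tiles above zoom z, plus the Hilbert
    index of (x, y) on the 2^z x 2^z grid at zoom z."""
    # Total tiles in zoom levels 0..z-1 is (4^z - 1) / 3.
    base = (4 ** z - 1) // 3
    # Standard Hilbert curve xy -> d conversion.
    d = 0
    s = (1 << z) >> 1
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:              # rotate the quadrant
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s >>= 1
    return base + d
```

Consecutive IDs correspond to spatially adjacent tiles, which is exactly the locality property that makes read-ahead caching effective.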
The trade-off: PMTiles is immutable by design. Adding or replacing individual tiles requires rewriting the file. For tactical applications where the map package is replaced as a whole unit — download a new file, replace the old — this is not a problem. For applications that need to patch individual tiles, MBTiles is more appropriate.
Tile Generation: tippecanoe, MapTiler, GDAL
tippecanoe is the standard tool for generating vector tiles from GeoJSON, FlatGeobuf, or GeoPackage source data. Its tile simplification algorithm is specifically designed to produce useful tiles at every zoom level: at low zoom levels, complex geometries are simplified and small features are dropped; at high zoom levels, full detail is preserved. For tactical applications, the key tippecanoe parameters are --maximum-zoom (typically 16 for infantry use, 12–14 for vehicle operations), --minimum-zoom (0 for area overview, 6–8 to avoid needlessly large files), and --coalesce-fraction-as-needed (merge small polygons at low zoom levels to prevent tile size overflow).
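A typical invocation for an infantry-profile package can be assembled programmatically; a sketch (the flag names follow tippecanoe's documented options, while the zoom defaults are this article's recommendation, not tool defaults):

```python
def tippecanoe_cmd(src: str, out: str,
                   minzoom: int = 6, maxzoom: int = 16) -> list:
    """Build an argv list for a tippecanoe run producing an MBTiles file."""
    return [
        "tippecanoe",
        "--output", out,
        "--minimum-zoom", str(minzoom),
        "--maximum-zoom", str(maxzoom),
        "--coalesce-fraction-as-needed",  # merge features to respect tile size limits
        "--force",                        # overwrite an existing output file
        src,
    ]
```

The list can be passed directly to subprocess.run on a build server, which keeps the zoom profile in one auditable place.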
MapTiler Engine (a commercial tool that grew out of the gdal2tiles lineage) handles raster tile generation: ortho imagery, satellite raster, DTED terrain. The key parameters are the output tile format (PNG for imagery with transparency, JPEG for pure raster at 75–85 quality, WebP for smaller file size), the coordinate reference system of the source (MapTiler handles reprojection from any GDAL-supported CRS to the standard Web Mercator EPSG:3857), and the zoom range.
GDAL's gdal2tiles.py script provides similar raster tile generation capabilities at no cost. For large source rasters (country-scale orthoimagery at 0.5m resolution), the generation time is significant — 8–16 hours for a complete country tileset on a single workstation. Parallelizing across multiple cores with the --processes N flag reduces this proportionally.
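A parallelized gdal2tiles run can be scripted the same way; a sketch (flag names follow the gdal2tiles documentation, values are illustrative):

```python
def gdal2tiles_cmd(src: str, out_dir: str,
                   zoom: str = "6-16", workers: int = 8) -> list:
    """Build an argv list for a parallel gdal2tiles.py raster tiling run."""
    return [
        "gdal2tiles.py",
        "--profile=mercator",      # reproject to Web Mercator tiles
        "--zoom=" + zoom,          # min-max zoom range
        "--processes=" + str(workers),  # parallelize across CPU cores
        "--webviewer=none",        # skip the generated HTML viewers
        src,
        out_dir,
    ]
```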
Partial Update Strategies
Full map package replacement is operationally expensive. A 15GB country-scale MBTiles file cannot be re-downloaded every time a small area is updated. Partial update strategies address this with delta packages — MBTiles files containing only the modified tiles.
Generating a delta package requires comparing the current version of each tile (identified by a hash of its tile_data) against the previous version. Tiles where the hash has changed are included in the delta; unchanged tiles are omitted. The merge operation on the device uses SQLite's INSERT OR REPLACE INTO tiles syntax, which updates existing tiles by their primary key and inserts new ones in a single operation.
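Both sides of this workflow — building the delta on the server and merging it on the device — fit in a few lines of SQLite. A sketch, assuming the standard tiles schema (helper names are this article's own):

```python
import hashlib
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS tiles (
    zoom_level INTEGER, tile_column INTEGER, tile_row INTEGER, tile_data BLOB);
CREATE UNIQUE INDEX IF NOT EXISTS tile_index
    ON tiles (zoom_level, tile_column, tile_row);
"""
SELECT_ALL = "SELECT zoom_level, tile_column, tile_row, tile_data FROM tiles"

def tile_hashes(db):
    """Digest every tile payload, keyed by tile coordinate."""
    return {(z, x, y): hashlib.sha256(d).digest()
            for z, x, y, d in db.execute(SELECT_ALL)}

def build_delta(old, new, delta):
    """Copy into `delta` only the tiles that are new or changed in `new`."""
    delta.executescript(SCHEMA)
    before = tile_hashes(old)
    count = 0
    for z, x, y, data in new.execute(SELECT_ALL):
        if before.get((z, x, y)) != hashlib.sha256(data).digest():
            delta.execute("INSERT INTO tiles VALUES (?,?,?,?)", (z, x, y, data))
            count += 1
    delta.commit()
    return count

def apply_delta(device, delta):
    """Merge a delta package; the unique index makes OR REPLACE update in place."""
    device.executescript(SCHEMA)
    for row in delta.execute(SELECT_ALL):
        device.execute("INSERT OR REPLACE INTO tiles VALUES (?,?,?,?)", row)
    device.commit()
```

Note that a delta of this form can only add or replace tiles; expressing deletions would require an additional tombstone table or a full package release.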
For vector tiles, a more sophisticated diffing approach is possible: compute the difference between feature geometries and attributes at the GeoJSON level before tile generation, then generate tiles only for the bounding boxes of changed features. This produces smaller delta packages when updates are geographically sparse, but requires the server to maintain the previous version of all source GeoJSON for comparison.
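The feature-level diff step can be sketched as follows, assuming each feature carries a stable "id" key for matching (that key, and the helper names, are assumptions of this sketch, not part of the GeoJSON standard):

```python
import json

def _bbox(geom):
    """Bounding box of a GeoJSON geometry via recursive coordinate walk."""
    coords = []
    def walk(c):
        if isinstance(c[0], (int, float)):  # reached a single [lon, lat] pair
            coords.append(c)
        else:
            for sub in c:
                walk(sub)
    walk(geom["coordinates"])
    xs = [c[0] for c in coords]
    ys = [c[1] for c in coords]
    return (min(xs), min(ys), max(xs), max(ys))

def changed_bboxes(old_features, new_features):
    """Bounding boxes of features whose geometry or attributes changed,
    matched across versions by their 'id' key."""
    old = {f["id"]: f for f in old_features}
    boxes = []
    for f in new_features:
        prev = old.get(f["id"])
        if prev is None or (json.dumps(prev, sort_keys=True)
                            != json.dumps(f, sort_keys=True)):
            boxes.append(_bbox(f["geometry"]))
    return boxes
```

The resulting boxes would then drive a tippecanoe run clipped to the changed areas only.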
Version tracking requires a versioning scheme in the MBTiles metadata. The standard approach is a version field in the metadata table, incremented with each full release, and a last_updated timestamp. The device client compares its local version against the server's current version and downloads delta packages sequentially for each version it is behind.
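The client-side comparison reduces to listing the delta packages between the local and server versions; a sketch (the delta_{a}_to_{b}.mbtiles naming convention is illustrative, not a standard):

```python
def deltas_to_fetch(local_version: int, server_version: int) -> list:
    """Names of the sequential delta packages a device must download
    and apply, in order, to catch up to the server's current version."""
    if local_version >= server_version:
        return []  # already current (or ahead, e.g. after a staged rollout)
    return ["delta_{}_to_{}.mbtiles".format(v, v + 1)
            for v in range(local_version, server_version)]
```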
Integration in Android and iOS
On Android, MapLibre GL Native is the standard open-source map renderer for applications requiring offline vector tile support. It can consume MBTiles through a custom tile source implementation — for example, a file-backed tile provider or a lightweight localhost tile endpoint — that the application registers with the map style. Querying tiles from the SQLite database happens on a background thread via a thread pool, which prevents tile loads from blocking the UI thread.
Multi-source tile merging — displaying overlay tiles from a separate source on top of a base map from another — is supported in MapLibre by stacking multiple sources in the map style. A tactical application might display a base raster layer from a satellite imagery MBTiles, a vector overlay layer from a terrain features MBTiles, and a dynamic annotation layer from the application's own SQLite database, all rendered simultaneously in the correct draw order.
Key insight: The most operationally significant packaging decision is zoom level selection. Packaging zoom levels 0–18 for a country-scale deployment produces files 10–50x larger than zoom levels 6–16. For ground infantry use, zoom levels 8–17 are the operational range. For vehicle operations, 6–15. Never package more zoom levels than the operational requirement demands — storage is a finite resource on tactical devices.