Drag a drone video into QGIS or ArcGIS Pro. Watch the sensor footprint track across your existing map data. Click "Detect Vehicles." See georeferenced points populate in near real-time — no proprietary extensions, no cloud dependency, no per-seat upcharge.
GeoFMV delivers a complete FMV exploitation workflow — from raw STANAG 4609 ingestion to georeferenced AI detection export — without leaving your GIS.
Open local .ts, .mp4, or .mxf files, or connect to live UDP/RTP multicast streams. Full playback controls, including 8× fast-forward and frame-accurate seeking.
Synchronised extraction of all 146 MISB ST 0601 tags, security metadata (ST 0102), and legacy Predator UAV sets (EG 0104). Malformed packets are logged and skipped — parsing never stops.
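The skip-and-continue parsing strategy can be sketched in a few lines of standard-library Python. The 16-byte UL below is the published MISB UAS Datalink Local Set key; the function names and resync logic are illustrative, not the plugin's actual API:

```python
import logging

UAS_LOCAL_SET_KEY = bytes.fromhex("060E2B34020B01010E01030101000000")

def parse_ber_length(buf, i):
    """Decode a BER length at offset i; return (length, next_offset)."""
    first = buf[i]
    if first < 0x80:                       # short form: one byte
        return first, i + 1
    n = first & 0x7F                       # long form: next n bytes hold the length
    return int.from_bytes(buf[i + 1:i + 1 + n], "big"), i + 1 + n

def iter_klv_packets(buf):
    """Yield (key, value) for each well-formed packet; log and skip the rest."""
    i = 0
    while i + 17 <= len(buf):              # need at least a 16-byte key + 1 length byte
        key = buf[i:i + 16]
        if key != UAS_LOCAL_SET_KEY:
            logging.warning("unknown UL at offset %d - resyncing", i)
            i += 1                         # malformed: advance one byte, keep parsing
            continue
        length, start = parse_ber_length(buf, i + 16)
        end = start + length
        if end > len(buf):
            logging.warning("truncated packet at offset %d - skipped", i)
            break
        yield key, buf[start:end]
        i = end
```

A truncated or garbage-prefixed stream simply produces warnings; every recoverable packet still comes through.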
The sensor FOV footprint, platform position, and flight trajectory update at ≥5 Hz on the map canvas. Bidirectional sync: click the map to seek the video; scrub the video to update the map.
Run local ONNX (YOLOv11) inference — completely air-gapped — or route frames to OpenRouter cloud models (QWEN3-VL, Llama-3.2-Vision). Switch without code changes.
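The no-code-change switch can be modelled as a pluggable backend behind one interface. This is an illustrative sketch with stubbed `detect()` bodies (the canned detections and class names are ours, not the plugin's):

```python
from abc import ABC, abstractmethod

class DetectionBackend(ABC):
    """Common contract: take one RGB frame, return a list of detections."""
    @abstractmethod
    def detect(self, frame): ...

class LocalOnnxBackend(DetectionBackend):
    def detect(self, frame):
        # In the real add-on this would run an onnxruntime session
        # on the YOLOv11 model, entirely offline.
        return [{"label": "vehicle", "bbox": (100, 200, 140, 240), "conf": 0.91}]

class OpenRouterBackend(DetectionBackend):
    def detect(self, frame):
        # In the real add-on this would POST the encoded frame to an
        # OpenRouter multimodal model (QWEN3-VL, Llama-3.2-Vision).
        return [{"label": "vehicle", "bbox": (101, 199, 141, 239), "conf": 0.88}]

BACKENDS = {"local": LocalOnnxBackend, "cloud": OpenRouterBackend}

def make_backend(name):
    """Selected from settings at runtime - no code change to switch."""
    return BACKENDS[name]()
```

Swapping air-gapped inference for cloud analysis is then a one-word configuration change.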
Pixel-space bounding boxes are transformed to WGS 84 Lat/Lon using a homography matrix from KLV corner coordinates (Tags 82–89), or a ray-casting fallback with DEM intersection.
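The corner-coordinate homography path can be sketched with NumPy and a direct linear solve. A minimal illustration under stated assumptions: the function names are hypothetical, and the sample corner values below are invented, not real telemetry:

```python
import numpy as np

def homography_from_corners(pixel_pts, wgs84_pts):
    """Solve the 3x3 homography H mapping pixel (x, y) to (lon, lat)
    from the four frame-corner pairs (KLV Tags 82-89)."""
    A, b = [], []
    for (x, y), (lon, lat) in zip(pixel_pts, wgs84_pts):
        A.append([x, y, 1, 0, 0, 0, -lon * x, -lon * y]); b.append(lon)
        A.append([0, 0, 0, x, y, 1, -lat * x, -lat * y]); b.append(lat)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)   # fix h22 = 1

def pixel_to_wgs84(H, x, y):
    """Project a pixel coordinate through H; returns (lon, lat)."""
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]

# Frame corners (pixels) and matching KLV corner coordinates (lon, lat) - invented values
corners_px = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]
corners_ll = [(30.000, 50.000), (30.012, 50.001), (30.013, 49.991), (30.001, 49.990)]
H = homography_from_corners(corners_px, corners_ll)
lon, lat = pixel_to_wgs84(H, 960, 540)       # detection centre -> WGS 84
```

Four non-collinear corners determine the homography exactly; the DEM ray-casting fallback covers frames whose corner tags are absent.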
Export detections as GeoPackage, Shapefile, GeoJSON, or KML. Encode AI results as MISB ST 0903 VMTI packs embedded in the output MPEG-TS. DJI CSV → STANAG 4609 multiplexing built-in.
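Of the export formats, GeoJSON is the simplest to show end to end: a sketch using only the standard library (the tuple layout and function name are ours for illustration):

```python
import json

def detections_to_geojson(detections):
    """Serialise (lon, lat, label, confidence) tuples as a GeoJSON FeatureCollection.
    GeoJSON (RFC 7946) orders coordinates [longitude, latitude]."""
    return json.dumps({
        "type": "FeatureCollection",
        "features": [
            {
                "type": "Feature",
                "geometry": {"type": "Point", "coordinates": [lon, lat]},
                "properties": {"label": label, "confidence": conf},
            }
            for lon, lat, label, conf in detections
        ],
    })

doc = detections_to_geojson([(30.005, 49.995, "vehicle", 0.91)])
```

The resulting file loads directly as a point layer in either GIS.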
Both add-ons expose the same analyst-facing feature set. Pick the platform your organisation already licenses — no new infrastructure required.
QGIS add-on (Python):
- QMediaPlayer + QVideoSink frame capture
- klvdata library for MISB ST 0601 parsing
- ffmpeg-python MPEG-TS demuxer (background QThread)
- onnxruntime-gpu for local YOLO inference
- QgsVectorLayer memory provider for detections

ArcGIS Pro add-on (.NET):
- LibVLCSharp for codec-independent MPEG-TS / UDP playback
- BinaryReader KLV parser (no proprietary SDK)
- OpenCvSharp4 for background frame extraction
- Microsoft.ML.OnnxRuntime + DirectML GPU acceleration
- GraphicsLayer + in-memory FeatureClass for detections
In a disconnected tactical environment? Load a .onnx YOLOv11 model from a USB stick and run inference entirely on the local GPU. Back at base with network access? Route frames to OpenRouter's multimodal models for higher-accuracy scene analysis, OCR, and captioning.
┌─────────────────────────────────────────────┐
│            AI Detection Pipeline            │
└─────────────────────────────────────────────┘
  Media Player
       │ decoded frame (RGB buffer)
       ▼
  Frame Extractor ◄──── KLV timestamp
       │ frame + telemetry
       ▼
  AI Worker (background thread)
       │
       ├──► ONNX Runtime (Local / Air-Gapped)
       │      YOLOv11 · NMS · confidence filter
       │
       └──► OpenRouter POST (Remote / Cloud)
              QWEN3-VL · Llama-3.2-Vision
       │
       ▼
  pixel bounding boxes + labels
  Georeferencing Engine
       │ KLV corner coords → homography
       ▼
  WGS84 Detection Points → Map Layer
  VMTI Encoder → MISB ST 0903
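The frame-extractor-to-AI-worker handoff in the diagram is a classic bounded producer/consumer. A generic threading sketch (standard library only, not the plugin's Qt implementation; the stand-in detection is invented):

```python
import queue
import threading

frames = queue.Queue(maxsize=4)    # decoded frames paired with KLV telemetry
results = queue.Queue()            # detections flowing back to the map layer

def ai_worker():
    """Consume frames off the queue so inference never blocks playback."""
    while True:
        item = frames.get()
        if item is None:           # sentinel: shut down cleanly
            break
        frame, telemetry = item
        # Stand-in for the ONNX / OpenRouter model call:
        detections = [("vehicle", (100, 200, 140, 240), 0.91)]
        results.put((detections, telemetry))

t = threading.Thread(target=ai_worker, daemon=True)
t.start()
frames.put(("rgb-frame-bytes", {"lat": 50.0, "lon": 30.0}))
frames.put(None)                   # end of stream
t.join()
```

The bounded queue applies back-pressure: if inference lags, the extractor waits rather than flooding memory, and playback stays smooth on its own thread.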
GeoFMV's analytics layer uses DuckDB WASM to query detection results and contextual geodata directly in the browser — no server round-trip needed. Install optional extensions to connect to S3, Azure, PostGIS, GeoPackage, remote Parquet files, and more.
Disconnected or bandwidth-constrained environment. Processes classified local drone files with local ONNX models — zero data leaves the device. QGIS on Windows or Linux.
Corporate or government office with ArcGIS Pro. Bypasses the Image Analyst extension entirely. Integrates detections with enterprise geodatabases and generates standardized GeoPackage reports.
Mobile command post with live UDP drone feed. Immediate situational awareness from real-time footprint tracking and rapid AI damage assessment using the laptop's local GPU.
Deep dives into FMV exploitation, MISB standards, geospatial AI, and field-tested workflows. New episodes drop monthly.
A 14-slide interactive presentation covering the problem, solution, architecture, AI pipeline, data sources, and roadmap. Ideal for briefings and demos.