From a systems perspective, NCF+DOMA is compatible with real-time operation. On a single commodity GPU, our implementation processes 64³ RF voxel volumes at approximately 59 FPS with modest memory usage, while still outperforming both classical and neural baselines. Qualitative results on
RF-Human sequences show that the model maintains stable tracks through RF shadowing, disambiguates cluttered multipath, and separates multiple
co-channel interferers over long horizons, providing
interpretable dense correspondence fields alongside
trajectory estimates.
Repository: https://github.com/bgilbert1984/neural-correspondence-rf
Neural Correspondence Fields for Dynamic RF Source Tracking and Localization
This repository contains the implementation for the paper Neural Correspondence Fields for Dynamic RF Source Tracking and Localization by Benjamin Spectrcyde Gilbert (November 30, 2025). The project introduces Neural Correspondence Fields (NCF), a continuous space-time representation for tracking motion in radio-frequency (RF) sensing environments, and DOMA (Dynamic Object Motion Analysis), an end-to-end architecture for multi-object RF source localization and tracking.
The codebase provides:
- Core models: NCF for motion prediction and Gaussian Splats for efficient RF visualization.
- CUDA-accelerated components for RF data processing and rendering.
- Toy training and evaluation scripts for synthetic dynamic scenes.
- Benchmarks for performance on consumer GPUs (e.g., RTX 3060).
Note: This is a research prototype. Real-world RF datasets and full DOMA detection heads are not included; extend the stubs in `evaluate_real_ablation.py` for your data.
Key Features
- Continuous Motion Tracking: NCF maps 3D positions and time to motion vectors and confidence scores, enabling dense flow reconstruction and uncertainty-aware integration.
- Efficient Rendering: Neural Gaussian Splats with temporal warping via NCF for dynamic RF visualizations.
- RF-Specific Processing: CUDA kernels for IQ signal feature extraction across frequency bands (WiFi, 5G, etc.).
- Toy Experiments: Synthetic orbiting-blob datasets to demonstrate training with photometric and temporal losses.
- Ablations and Benchmarks: Scripts for variant comparisons (e.g., with/without NCF gating) and FPS measurements.
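The NCF mapping described above (3D position and time to a motion vector and confidence score) can be sketched as a small MLP with sinusoidal positional encoding. This is a hedged illustration only: the layer widths and exact encoding follow common NeRF-style practice and are assumptions, not the repo's actual `neural_correspondence.py`.

```python
import torch
import torch.nn as nn

def sin_encode(x, num_freqs):
    # Sinusoidal positional encoding: concatenate x with sin/cos at octave frequencies.
    feats = [x]
    for k in range(num_freqs):
        feats.append(torch.sin((2.0 ** k) * x))
        feats.append(torch.cos((2.0 ** k) * x))
    return torch.cat(feats, dim=-1)

class ToyNCF(nn.Module):
    """Maps (xyz, t) -> (motion vector, confidence), mirroring the NCF description."""
    def __init__(self, pos_freqs=8, time_freqs=6, hidden=128):
        super().__init__()
        in_dim = 3 * (1 + 2 * pos_freqs) + 1 * (1 + 2 * time_freqs)
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 motion components + 1 confidence logit
        )
        self.pos_freqs, self.time_freqs = pos_freqs, time_freqs

    def forward(self, xyz, t):
        h = torch.cat([sin_encode(xyz, self.pos_freqs),
                       sin_encode(t, self.time_freqs)], dim=-1)
        out = self.net(h)
        return out[..., :3], torch.sigmoid(out[..., 3:])  # flow, confidence in (0, 1)
```

Usage: `flow, conf = ToyNCF()(torch.rand(16, 3), torch.rand(16, 1))` yields a per-point flow vector and a gating confidence.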
Installation
Prerequisites
- Python 3.8+
- CUDA-enabled GPU (recommended for acceleration; tested on RTX 3060)
- Dependencies: install via pip (see `requirements.txt` below)
Clone the repository:
```shell
git clone https://github.com/bgilbert1984/neural-correspondence-rf.git
cd neural-correspondence-rf
```
Install dependencies:
```shell
pip install -r requirements.txt
```
`requirements.txt` (create this file in the repo root):

```text
torch>=2.0.0
numpy
cupy-cuda12x    # Adjust for your CUDA version
numba
pillow
scikit-image    # For metrics like PSNR/SSIM
lpips           # For perceptual metrics
matplotlib      # For visualizations in the toy scripts
```
For the optimized CUDA backend (faster Gaussian splatting), install `diff_gaussian_rasterization` via pip or from https://github.com/graphdeco-inria/diff-gaussian-rasterization.
If using external APIs (placeholders in code):
- Gemini: configure in `utils/gemini_rf_analyzer.py` (not provided).
- Shodan: configure in `utils/shodan_integration.py` (not provided).
Quick Start
Running Toy Training
Train a Temporal Gaussian Splat (TGS) model on a synthetic dynamic scene:
```shell
python code/train_tgs_toy.py
```
- Outputs: preview PNGs in `figures/`, ablation assets in the repo root (e.g., `ablation_frame120_gt.png`), and a JSON summary in `results/`.
For a smaller toy:
```shell
python code/train_toy_tgs.py
```
Benchmarking
Measure rendering FPS on RTX 3060-like hardware:
```shell
python code/experiment_rtx3060_rf_gs.py --num_gaussians 20000 --width 512 --height 512 --frames 60 --backend cuda-auto
```
- Options: `--sweep` for multiple configs, `--motion` for NCF dynamic tests.
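An FPS measurement of this kind boils down to a warmed-up timing loop over repeated render calls. A minimal sketch (the timing helper below is generic, not the benchmark script's actual API; on CUDA you would also synchronize inside the timed callable so asynchronous kernel launches are fully counted):

```python
import time

def measure_fps(render_fn, num_frames=60, warmup=5):
    """Times repeated calls to render_fn and returns frames per second.

    For CUDA workloads, render_fn should end with torch.cuda.synchronize()
    so pending kernels are included in the measured interval.
    """
    for _ in range(warmup):
        render_fn()  # warm-up: exclude JIT compilation and cache effects
    start = time.perf_counter()
    for _ in range(num_frames):
        render_fn()
    elapsed = time.perf_counter() - start
    return num_frames / elapsed
```

Usage: `fps = measure_fps(lambda: model.render_image(...))`.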
Evaluating Ablations on Real Data
Stub for custom datasets (implement `RealDataset.get_scenes()`):

```shell
python code/evaluate_real_ablation.py --data-root /path/to/dataset --checkpoint /path/to/checkpoint.pt --out-dir outputs
```
- Outputs: Per-scene JSON with metrics (L1/PSNR/SSIM/LPIPS) and PNGs for variants.
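A `RealDataset.get_scenes()` implementation might look like the following. The directory layout and metadata keys (`cameras`, `frames`, `iq_file`) are assumptions about one plausible on-disk format, not something the repo defines; adapt them to your own data.

```python
import json
from pathlib import Path

class RealDataset:
    """Minimal stub for evaluate_real_ablation.py; adjust keys to your metadata."""

    def __init__(self, data_root):
        self.root = Path(data_root)

    def get_scenes(self):
        # Assumed layout: one metadata.json per scene directory under data_root.
        scenes = []
        for meta_path in sorted(self.root.glob('*/metadata.json')):
            meta = json.loads(meta_path.read_text())
            scenes.append({
                'name': meta_path.parent.name,
                'cameras': meta.get('cameras', []),  # per-frame camera params
                'frames': meta.get('frames', []),    # frame file names / timestamps
                'iq_path': meta_path.parent / meta.get('iq_file', 'iq.npy'),
            })
        return scenes
```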
Project Structure
```text
neural-correspondence-rf/
├── code/                            # Core implementation
│   ├── neural_correspondence.py     # NCF model (lightweight MLP with positional encoding)
│   ├── neural_gaussian_splats.py    # GaussianSplatModel with NCF integration
│   ├── cuda_rf_processor.py         # CUDA-accelerated RF IQ processing
│   ├── cuda_nerf_renderer.py        # CUDA volumetric renderer
│   ├── rf_3dgs_backend.py           # Adapter for optimized 3DGS CUDA rasterizer
│   ├── train_tgs_toy.py             # End-to-end toy training script
│   ├── train_toy_tgs.py             # Tiny toy training script
│   ├── evaluate_real_ablation.py    # Ablation evaluation on real data (stub)
│   └── experiment_rtx3060_rf_gs.py  # Benchmark script
├── figures/                         # Output previews from training (auto-generated)
├── results/                         # JSON summaries from runs (auto-generated)
├── README.md                        # This file
└── requirements.txt                 # Dependencies
```
Usage Examples
Training on Synthetic Data
```python
# In train_tgs_toy.py (excerpt)
dataset = make_orbit_dataset(num_frames=180, height=128, width=256)
model = GaussianSplatModel(num_gaussians=10000, feature_dim=32, backend='cuda-auto')
model.enable_ncf(NeuralCorrespondenceField, ncf_kwargs={'pos_freqs': 8, 'time_freqs': 6})
# Train loop with L1 + temporal smoothness losses
```
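The commented-out train loop might be fleshed out along these lines. This is a hedged sketch: the model-as-renderer interface (a callable returning an image for a normalized time) and the smoothness weight are assumptions, not the script's actual code.

```python
import torch

def train_step(model, optimizer, frame_t, frame_t1, times, smooth_weight=0.1):
    """One optimization step with L1 photometric + temporal smoothness losses.

    frame_t / frame_t1: ground-truth images at two consecutive normalized times.
    model(t) is assumed to return a rendered image at time t.
    """
    optimizer.zero_grad()
    pred_t = model(times[0])
    pred_t1 = model(times[1])
    # L1 photometric term against both ground-truth frames.
    photometric = (pred_t - frame_t).abs().mean() + (pred_t1 - frame_t1).abs().mean()
    # Temporal smoothness: discourage large frame-to-frame changes in the render.
    temporal = (pred_t1 - pred_t).abs().mean()
    loss = photometric + smooth_weight * temporal
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice this step would be wrapped in a loop over frame pairs sampled from the orbit dataset.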
Rendering a Frame
```python
import torch
from neural_gaussian_splats import GaussianSplatModel

device = torch.device('cuda')
model = GaussianSplatModel(num_gaussians=20000, feature_dim=32, device=device)
# Load checkpoint if available:
# model.load_state_dict(torch.load('checkpoint.pt'))

cam_pos = torch.tensor([2.0, 0.5, 2.0], device=device)
cam_to_world = look_at(cam_pos)  # From experiment_rtx3060_rf_gs.py

output = model.render_image(
    camera_position=cam_pos,
    camera_matrix=cam_to_world,
    width=512,
    height=512,
    focal_length=400.0,    # Pixels
    time=0.5,              # Normalized [0, 1]
    gate_confidence=True,  # Use NCF confidence gating
)
rgb = output['rgb']  # (H, W, 3) tensor
# Save with PIL:
# Image.fromarray((rgb.cpu().numpy() * 255).astype('uint8')).save('render.png')
```
Processing RF Data
```python
from cuda_rf_processor import CUDARFDataProcessor

processor = CUDARFDataProcessor(feature_dim=6, frequency_bands=[(2.4, 2.5), (5.1, 5.8)])
# iq_data: torch.Tensor (e.g., from a USRP or simulation)
features = processor.process_iq_data(iq_data)
# Use features for NCF input or Gaussian fitting
```
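If the CUDA processor is unavailable, a CPU stand-in for per-sample IQ features can be sketched as below. The feature choices (magnitude, phase, instantaneous frequency, log-power) are common RF descriptors and are assumptions, not the kernel's actual outputs.

```python
import torch

def iq_features(iq):
    """Compute simple per-sample features from a 1-D complex IQ tensor.

    Returns an (N, 4) real tensor: magnitude, phase, instantaneous
    frequency (wrapped phase difference), and log-power.
    """
    mag = iq.abs()
    phase = torch.angle(iq)
    # Instantaneous frequency ~ phase difference between consecutive samples.
    dphase = torch.diff(phase, prepend=phase[:1])
    dphase = torch.atan2(torch.sin(dphase), torch.cos(dphase))  # wrap to (-pi, pi]
    log_power = torch.log1p(mag ** 2)
    return torch.stack([mag, phase, dphase, log_power], dim=-1)
```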
Extending for Real RF Data
- Implement RF voxelization in `cuda_rf_processor.py` for your hardware (e.g., integrate with SDR libraries like GNU Radio).
- Add real datasets to `evaluate_real_ablation.py`: parse JSON metadata for cameras/frames, load IQ samples as tensors.
- Train end-to-end: extend the toy scripts with RF supervision (e.g., add losses on predicted vs. ground-truth trajectories).
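The trajectory-supervision idea in the last bullet can be sketched as an L2 loss between positions integrated through the NCF flow field and ground-truth tracks. `integrate_trajectory` is a hypothetical helper using simple Euler steps; the repo does not provide it.

```python
import torch

def integrate_trajectory(ncf, x0, t0=0.0, t1=1.0, steps=30):
    """Euler-integrate a start position x0 through the NCF flow field.

    ncf(x, t) is assumed to return (motion_vector, confidence).
    """
    dt = (t1 - t0) / steps
    xs, x = [x0], x0
    for i in range(steps):
        t = torch.full(x[..., :1].shape, t0 + i * dt)
        flow, _ = ncf(x, t)
        x = x + dt * flow  # advance the position along the predicted flow
    # Stack into a (steps + 1, ..., 3) trajectory.
        xs.append(x)
    return torch.stack(xs, dim=0)

def trajectory_loss(ncf, x0, gt_traj):
    """L2 loss between the integrated trajectory and a ground-truth track."""
    pred = integrate_trajectory(ncf, x0, steps=gt_traj.shape[0] - 1)
    return ((pred - gt_traj) ** 2).mean()
```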
Citation
If you use this code, please cite:
```bibtex
@article{gilbert2025ncf,
  title={Neural Correspondence Fields for Dynamic RF Source Tracking and Localization},
  author={Gilbert, Benjamin Spectrcyde},
  year={2025},
  url={https://github.com/bgilbert1984/neural-correspondence-rf}
}
```
License
No license has been chosen yet. The author is an independent researcher actively seeking paid work and collaboration; contact details are listed under Acknowledgments below.
Acknowledgments
- Inspired by NeRF, D-NeRF, and Gaussian Splatting papers.
- CUDA backend adapts diff_gaussian_rasterization.
- Contact: bgilbert1984 (University of Washington, independent research).
For issues, open a GitHub issue or email the author. Contributions welcome! 🚀