{"id":4888,"date":"2025-11-30T23:08:40","date_gmt":"2025-11-30T23:08:40","guid":{"rendered":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?page_id=4888"},"modified":"2025-11-30T23:08:40","modified_gmt":"2025-11-30T23:08:40","slug":"neural-correspondence-fields-for-dynamic-rf-source-tracking-and-localization","status":"publish","type":"page","link":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/?page_id=4888","title":{"rendered":"Neural Correspondence Fields for Dynamic RF Source Tracking and Localization"},"content":{"rendered":"\n<div data-wp-interactive=\"core\/file\" class=\"wp-block-file\"><object data-wp-bind--hidden=\"!state.hasPdfPreview\" hidden class=\"wp-block-file__embed\" data=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/11\/Neural-Correspondence-Fields-for-Dynamic-RF-Source-Tracking-bgilbert1984.pdf\" type=\"application\/pdf\" style=\"width:100%;height:600px\" aria-label=\"Embed of Neural Correspondence Fields for Dynamic RF Source Tracking bgilbert1984.\"><\/object><a id=\"wp-block-file--media-071662bd-71ee-4bee-867d-2aa0bd8f7a4a\" href=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/11\/Neural-Correspondence-Fields-for-Dynamic-RF-Source-Tracking-bgilbert1984.pdf\">Neural Correspondence Fields for Dynamic RF Source Tracking bgilbert1984<\/a><a href=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/11\/Neural-Correspondence-Fields-for-Dynamic-RF-Source-Tracking-bgilbert1984.pdf\" class=\"wp-block-file__button wp-element-button\" download aria-describedby=\"wp-block-file--media-071662bd-71ee-4bee-867d-2aa0bd8f7a4a\">Download<\/a><\/div>\n\n\n\n<p class=\"wp-block-paragraph\">From a systems perspective, NCF+DOMA is compatible with real-time operation. 
On a single commodity GPU, our implementation processes 64\u00b3 RF voxel volumes at approximately 59 FPS with modest memory usage, while still outperforming both classical and neural baselines. Qualitative results on RF-Human sequences show that the model maintains stable tracks through RF shadowing, disambiguates cluttered multipath, and separates multiple co-channel interferers over long horizons, providing interpretable dense correspondence fields alongside trajectory estimates.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Repository: <a href=\"https:\/\/github.com\/bgilbert1984\/neural-correspondence-rf\">https:\/\/github.com\/bgilbert1984\/neural-correspondence-rf<\/a><\/p>\n\n\n\n<h1 class=\"wp-block-heading\">Neural Correspondence Fields for Dynamic RF Source Tracking and Localization<\/h1>\n\n\n\n<figure class=\"wp-block-image\"><a href=\"https:\/\/opensource.org\/licenses\/MIT\"><img decoding=\"async\" src=\"https:\/\/img.shields.io\/badge\/License-MIT-yellow.svg\" alt=\"License: MIT\"\/><\/a><\/figure>\n\n\n\n<figure class=\"wp-block-image\"><a href=\"https:\/\/www.python.org\/downloads\/\"><img decoding=\"async\" src=\"https:\/\/img.shields.io\/badge\/python-3.8%2B-blue.svg\" alt=\"Python Version\"\/><\/a><\/figure>\n\n\n\n<figure class=\"wp-block-image\"><a href=\"https:\/\/pytorch.org\/\"><img decoding=\"async\" src=\"https:\/\/img.shields.io\/badge\/PyTorch-2.0%2B-EE4C2C.svg?logo=pytorch&amp;logoColor=white\" alt=\"PyTorch\"\/><\/a><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">This repository contains the implementation for the paper <em>Neural Correspondence Fields for Dynamic RF Source Tracking and Localization<\/em> by Benjamin Spectrcyde Gilbert (November 30, 2025). 
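<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">As a rough intuition for how such a space-time field can be queried, the sketch below encodes a 3D position and a timestamp with sinusoidal positional features and passes them through a small, randomly initialized MLP that emits a motion vector and a confidence score. The frequency counts, layer sizes, and output layout here are illustrative assumptions, not the repository&#8217;s actual architecture.<\/p>

```python
import numpy as np

def positional_encoding(x, num_freqs):
    # Map each coordinate to [sin(2^k * pi * x), cos(2^k * pi * x)] for k = 0..num_freqs-1.
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi       # (F,)
    angles = x[..., None] * freqs                       # (N, D, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(x.shape[0], -1)                  # (N, 2*D*F)

# Illustrative sizes only: 8 position frequencies, 6 time frequencies,
# a 64-unit hidden layer, and a 4-channel output (3 motion + 1 confidence).
rng = np.random.default_rng(0)
POS_FREQS, TIME_FREQS, HIDDEN = 8, 6, 64
in_dim = 3 * 2 * POS_FREQS + 1 * 2 * TIME_FREQS         # 60 input features
W1, b1 = rng.normal(size=(in_dim, HIDDEN)), np.zeros(HIDDEN)
W2, b2 = rng.normal(size=(HIDDEN, 4)), np.zeros(4)

def ncf_query(xyz, t):
    # xyz: (N, 3) positions; t: (N, 1) normalized times in [0, 1].
    feat = np.concatenate(
        [positional_encoding(xyz, POS_FREQS), positional_encoding(t, TIME_FREQS)],
        axis=-1,
    )
    h = np.tanh(feat @ W1 + b1)
    out = h @ W2 + b2
    motion = out[:, :3]                                 # predicted motion vector
    confidence = 1.0 / (1.0 + np.exp(-out[:, 3]))       # sigmoid gate
    return motion, confidence

xyz = rng.uniform(-1.0, 1.0, size=(5, 3))
t = np.full((5, 1), 0.5)
motion, confidence = ncf_query(xyz, t)
print(motion.shape, confidence.shape)                   # (5, 3) (5,)
```

<p class=\"wp-block-paragraph\">In practice the weights are trained (see the toy training scripts below); the point here is only the input\/output contract: dense queries at any (position, time) yield a flow field plus a gating confidence.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">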
The project introduces Neural Correspondence Fields (NCF), a continuous space-time representation for tracking motion in radio-frequency (RF) sensing environments, and DOMA (Dynamic Object Motion Analysis), an end-to-end architecture for multi-object RF source localization and tracking.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">The codebase provides:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Core models: NCF for motion prediction and Gaussian Splats for efficient RF visualization.<\/li>\n\n\n\n<li>CUDA-accelerated components for RF data processing and rendering.<\/li>\n\n\n\n<li>Toy training and evaluation scripts for synthetic dynamic scenes.<\/li>\n\n\n\n<li>Benchmarks for performance on consumer GPUs (e.g., RTX 3060).<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Note<\/strong>: This is a research prototype. Real-world RF datasets and full DOMA detection heads are not included; extend the stubs in <code>evaluate_real_ablation.py<\/code> for your data.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Key Features<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Continuous Motion Tracking<\/strong>: NCF maps 3D positions and time to motion vectors and confidence scores, enabling dense flow reconstruction and uncertainty-aware integration.<\/li>\n\n\n\n<li><strong>Efficient Rendering<\/strong>: Neural Gaussian Splats with temporal warping via NCF for dynamic RF visualizations.<\/li>\n\n\n\n<li><strong>RF-Specific Processing<\/strong>: CUDA kernels for IQ signal feature extraction across frequency bands (WiFi, 5G, etc.).<\/li>\n\n\n\n<li><strong>Toy Experiments<\/strong>: Synthetic orbiting-blob datasets to demonstrate training with photometric and temporal losses.<\/li>\n\n\n\n<li><strong>Ablations and Benchmarks<\/strong>: Scripts for variant comparisons (e.g., with\/without NCF gating) and FPS measurements.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Installation<\/h2>\n\n\n\n<h3 
class=\"wp-block-heading\">Prerequisites<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Python 3.8+<\/li>\n\n\n\n<li>CUDA-enabled GPU (recommended for acceleration; tested on RTX 3060)<\/li>\n\n\n\n<li>Dependencies: Install via pip (see <code>requirements.txt<\/code> below)<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">Clone the repository:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>git clone https:\/\/github.com\/bgilbert1984\/neural-correspondence-rf.git\ncd neural-correspondence-rf<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">Install dependencies:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>pip install -r requirements.txt<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>requirements.txt<\/strong> (create this file in the repo root):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>torch&gt;=2.0.0\nnumpy\ncupy-cuda12x  # Adjust for your CUDA version\nnumba\npillow\nscikit-image  # For metrics like PSNR\/SSIM\nlpips  # For perceptual metrics\nmatplotlib  # For visualizations in toys<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">For the optimized CUDA backend (faster Gaussian Splatting):<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Install <code>diff_gaussian_rasterization<\/code> (via pip or from https:\/\/github.com\/graphdeco-inria\/diff-gaussian-rasterization).<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">If using external APIs (placeholders in code):<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Gemini: Configure in <code>utils\/gemini_rf_analyzer.py<\/code> (not provided).<\/li>\n\n\n\n<li>Shodan: Configure in <code>utils\/shodan_integration.py<\/code> (not provided).<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Quick Start<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Running Toy Training<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Train a Temporal Gaussian Splat (TGS) model on a synthetic dynamic scene:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python 
code\/train_tgs_toy.py<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Outputs: Preview PNGs in <code>figures\/<\/code>, ablation assets in repo root (e.g., <code>ablation_frame120_gt.png<\/code>), JSON summary in <code>results\/<\/code>.<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">For a smaller toy:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python code\/train_toy_tgs.py<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Benchmarking<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Measure rendering FPS on RTX 3060-like hardware:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python code\/experiment_rtx3060_rf_gs.py --num_gaussians 20000 --width 512 --height 512 --frames 60 --backend cuda-auto<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Options: <code>--sweep<\/code> for multiple configs, <code>--motion<\/code> for NCF dynamic tests.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Evaluating Ablations on Real Data<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Stub for custom datasets (implement <code>RealDataset.get_scenes()<\/code>):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python code\/evaluate_real_ablation.py --data-root \/path\/to\/dataset --checkpoint \/path\/to\/checkpoint.pt --out-dir outputs<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Outputs: Per-scene JSON with metrics (L1\/PSNR\/SSIM\/LPIPS) and PNGs for variants.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Project Structure<\/h2>\n\n\n\n<pre class=\"wp-block-code\"><code>neural-correspondence-rf\/\n\u251c\u2500\u2500 code\/                          # Core implementation\n\u2502   \u251c\u2500\u2500 neural_correspondence.py   # NCF model (lightweight MLP with positional encoding)\n\u2502   \u251c\u2500\u2500 neural_gaussian_splats.py  # GaussianSplatModel with NCF integration\n\u2502   \u251c\u2500\u2500 cuda_rf_processor.py       # CUDA-accelerated RF IQ processing\n\u2502   \u251c\u2500\u2500 cuda_nerf_renderer.py      # CUDA volumetric 
renderer\n\u2502   \u251c\u2500\u2500 rf_3dgs_backend.py         # Adapter for optimized 3DGS CUDA rasterizer\n\u2502   \u251c\u2500\u2500 train_tgs_toy.py           # End-to-end toy training script\n\u2502   \u251c\u2500\u2500 train_toy_tgs.py           # Tiny toy training script\n\u2502   \u251c\u2500\u2500 evaluate_real_ablation.py  # Ablation evaluation on real data (stub)\n\u2502   \u2514\u2500\u2500 experiment_rtx3060_rf_gs.py# Benchmark script\n\u251c\u2500\u2500 figures\/                       # Output previews from training (auto-generated)\n\u251c\u2500\u2500 results\/                       # JSON summaries from runs (auto-generated)\n\u251c\u2500\u2500 README.md                      # This file\n\u2514\u2500\u2500 requirements.txt               # Dependencies<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Usage Examples<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Training on Synthetic Data<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code># In train_tgs_toy.py (excerpt)\ndataset = make_orbit_dataset(num_frames=180, height=128, width=256)\nmodel = GaussianSplatModel(num_gaussians=10000, feature_dim=32, backend='cuda-auto')\nmodel.enable_ncf(NeuralCorrespondenceField, ncf_kwargs={'pos_freqs':8, 'time_freqs':6})\n# Train loop with L1 + temporal smoothness losses<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Rendering a Frame<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>import torch\nfrom neural_gaussian_splats import GaussianSplatModel\n\ndevice = torch.device('cuda')\nmodel = GaussianSplatModel(num_gaussians=20000, feature_dim=32, device=device)\n# Load checkpoint if available\n# model.load_state_dict(torch.load('checkpoint.pt'))\n\ncam_pos = torch.tensor(&#91;2.0, 0.5, 2.0], device=device)\ncam_to_world = look_at(cam_pos)  # From experiment_rtx3060_rf_gs.py\n\noutput = model.render_image(\n    camera_position=cam_pos,\n    camera_matrix=cam_to_world,\n    width=512,\n    height=512,\n    focal_length=400.0,  # Pixels\n    time=0.5,           
 # Normalized &#91;0,1]\n    gate_confidence=True # Use NCF confidence gating\n)\nrgb = output&#91;'rgb']  # (H, W, 3) tensor\n# Save with PIL: Image.fromarray((rgb.cpu().numpy() * 255).astype('uint8')).save('render.png')<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">Processing RF Data<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>from cuda_rf_processor import CUDARFDataProcessor\n\nprocessor = CUDARFDataProcessor(feature_dim=6, frequency_bands=&#91;(2.4, 2.5), (5.1, 5.8)])\n# iq_data: torch.Tensor (e.g., from USRP or simulation)\nfeatures = processor.process_iq_data(iq_data)\n# Use features for NCF input or Gaussian fitting<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Extending for Real RF Data<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Implement RF voxelization in <code>cuda_rf_processor.py<\/code> for your hardware (e.g., integrate with SDR libraries like GNURadio).<\/li>\n\n\n\n<li>Add real datasets to <code>evaluate_real_ablation.py<\/code>: Parse JSON metadata for cameras\/frames, load IQ samples as tensors.<\/li>\n\n\n\n<li>Train end-to-end: Extend toys with RF supervision (e.g., add losses on predicted vs. ground-truth trajectories).<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Citation<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">If you use this code, please cite:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>@article{gilbert2025ncf,\n  title={Neural Correspondence Fields for Dynamic RF Source Tracking and Localization},\n  author={Gilbert, Benjamin Spectrcyde},\n  year={2025},\n  url={https:\/\/github.com\/bgilbert1984\/neural-correspondence-rf}\n}<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">License<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">This code is released under the MIT License (see the badge above). Beyond that: I would love to make money from this work somehow. I&#8217;m a starving physicist! Hire ME! 
Contact details are in the Acknowledgments below.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Acknowledgments<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Inspired by NeRF, D-NeRF, and Gaussian Splatting papers.<\/li>\n\n\n\n<li>CUDA backend adapts diff_gaussian_rasterization.<\/li>\n\n\n\n<li>Contact: bgilbert1984 (University of Washington independent research).<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">For issues, open a GitHub issue or email the author. Contributions welcome! \ud83d\ude80<\/p>\n","protected":false},"excerpt":{"rendered":"<p>From a systems perspective, NCF+DOMA is compatible with real-time operation. On a single commodity GPU, our implementation processes 64\u00b3 RF voxel volumes at approximately 59 FPS with modest memory usage, while still outperforming both classical and neural baselines. Qualitative results on RF-Human sequences show that the model maintains stable tracks through RF shadowing, disambiguates cluttered multipath, and separates&hellip;&nbsp;<\/p>\n","protected":false},"author":2,"featured_media":4476,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"neve_meta_sidebar":"","neve_meta_container":"","neve_meta_enable_content_width":"","neve_meta_content_width":0,"neve_meta_title_alignment":"","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"","neve_meta_disable_footer":"","neve_meta_disable_title":"","footnotes":""},"class_list":["post-4888","page","type-page","status-publish","has-post-thumbnail","hentry"],"_links":{"self":[{"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/pages\/4888","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/neurosphere-2.tail52f848.
ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4888"}],"version-history":[{"count":0,"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/pages\/4888\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/media\/4476"}],"wp:attachment":[{"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4888"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}