# RF-NeRF Engine (`nerf-rf-engine.py`)


This module implements the core components of the RF-NeRF engine using PyTorch. It includes the NeRF model itself, a volumetric renderer, and a data processor for handling various RF signal inputs.

## Core Components

### 1. `RFNeRFModel`

*   **Purpose:** A PyTorch `nn.Module` that represents the Neural Radiance Field adapted for RF data. It learns a mapping from a 3D spatial location and associated RF features to a color (RGB) and volume density (sigma).
*   **Key Features:**
    *   Uses positional encoding for input 3D coordinates to capture high-frequency details.
    *   Accepts RF signal features (e.g., frequency, amplitude, phase) concatenated with encoded positions.
    *   Configurable network architecture (hidden dimensions, number of layers, skip connections).
*   **Initialization Parameters:**
    *   `encoding_dim`: Dimensionality for positional encoding.
    *   `hidden_dim`: Size of the hidden layers in the MLP.
    *   `num_layers`: Number of layers in the MLP.
    *   `rf_feature_dim`: Expected dimension of the input RF feature vector.
    *   `skip_connections`: List of layer indices where skip connections (concatenating input) should occur.
*   **Methods:**
    *   `positional_encoding(x)`: Applies positional encoding to input coordinates.
    *   `forward(x, rf_features)`: Performs the forward pass, taking spatial coordinates (`x`) and RF features (`rf_features`) to produce `color` and `density`.
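The model described above can be sketched as follows. This is a minimal illustration, not the module's actual code: the layer sizes, the sin/cos encoding formula, and the ReLU/sigmoid output activations are assumptions consistent with standard NeRF practice.

```python
import torch
import torch.nn as nn

class RFNeRFModel(nn.Module):
    """Sketch of an RF-adapted NeRF MLP (details are assumptions)."""

    def __init__(self, encoding_dim=10, hidden_dim=256, num_layers=8,
                 rf_feature_dim=16, skip_connections=(4,)):
        super().__init__()
        self.encoding_dim = encoding_dim
        self.skip_connections = set(skip_connections)
        # Encoded position: 3 coords x 2 (sin, cos) x encoding_dim frequencies,
        # concatenated with the RF feature vector.
        in_dim = 3 * 2 * encoding_dim + rf_feature_dim
        layers, dim = [], in_dim
        for i in range(num_layers):
            # Skip connections re-concatenate the encoded input at these layers.
            extra = in_dim if i in self.skip_connections else 0
            layers.append(nn.Linear(dim + extra, hidden_dim))
            dim = hidden_dim
        self.layers = nn.ModuleList(layers)
        self.density_head = nn.Linear(hidden_dim, 1)
        self.color_head = nn.Linear(hidden_dim, 3)

    def positional_encoding(self, x):
        # gamma(x) = (sin(2^k * pi * x), cos(2^k * pi * x)) for k = 0..L-1
        freqs = 2.0 ** torch.arange(self.encoding_dim, device=x.device) * torch.pi
        angles = x[..., None] * freqs                 # (..., 3, L)
        enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
        return enc.flatten(start_dim=-2)              # (..., 3 * 2L)

    def forward(self, x, rf_features):
        inp = torch.cat([self.positional_encoding(x), rf_features], dim=-1)
        h = inp
        for i, layer in enumerate(self.layers):
            if i in self.skip_connections:
                h = torch.cat([h, inp], dim=-1)
            h = torch.relu(layer(h))
        density = torch.relu(self.density_head(h))    # sigma >= 0
        color = torch.sigmoid(self.color_head(h))     # RGB in [0, 1]
        return color, density
```

The skip connections mitigate the tendency of deep MLPs to "forget" the input coordinates; the encoded input is re-injected at the configured layer indices.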

### 2. `RFNeRFRenderer`

*   **Purpose:** Performs volumetric rendering using the trained `RFNeRFModel` to generate 2D images (or other views) from the learned 3D RF scene representation.
*   **Key Features:**
    *   Generates camera rays based on position and direction.
    *   Samples points along these rays within specified near and far bounds.
    *   Queries the `RFNeRFModel` for color and density at sample points, incorporating RF features.
    *   Uses volumetric integration (alpha compositing) to compute the final pixel color, depth, and opacity.
    *   Supports batched processing (chunking) to handle large numbers of rays efficiently.
*   **Initialization Parameters:**
    *   `model`: An instance of `RFNeRFModel`.
    *   `num_samples`: Number of points to sample along each ray.
    *   `near_bound`, `far_bound`: Minimum and maximum depth for ray sampling.
    *   `chunk_size`: Batch size for querying the NeRF model during rendering.
*   **Methods:**
    *   `generate_rays(...)`: Creates ray origins and directions for a given camera setup.
    *   `sample_points_along_rays(...)`: Generates 3D points and corresponding depth values (`t_vals`) along rays. Supports stratified sampling.
    *   `render_rays(rays, rf_features, ...)`: Renders a batch of rays, performing sampling, model querying, and volumetric integration. Returns a dictionary containing `rgb`, `depth`, `opacity`, etc.
    *   `render_image(camera_position, camera_direction, rf_data, ...)`: Renders a full image by generating all necessary rays, processing them in chunks using `render_rays`, and assembling the results.
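The volumetric integration step inside `render_rays` can be sketched as below. This is a standalone illustration of standard alpha compositing; the renderer's actual tensor layout, key names, and padding convention may differ.

```python
import torch

def composite_rays(colors, densities, t_vals):
    """Alpha-composite per-sample colors and densities into per-ray outputs.

    colors:    (num_rays, num_samples, 3) per-sample RGB
    densities: (num_rays, num_samples, 1) per-sample sigma
    t_vals:    (num_rays, num_samples)    depths along each ray
    """
    # Distance between consecutive samples; pad the last interval with a
    # large value so the final sample can absorb all remaining radiance.
    deltas = t_vals[..., 1:] - t_vals[..., :-1]
    deltas = torch.cat([deltas, torch.full_like(deltas[..., :1], 1e10)], dim=-1)

    # Per-sample opacity from density and interval length.
    alpha = 1.0 - torch.exp(-densities.squeeze(-1) * deltas)

    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[..., :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1,
    )[..., :-1]

    weights = alpha * trans
    rgb = (weights[..., None] * colors).sum(dim=-2)
    depth = (weights * t_vals).sum(dim=-1)
    opacity = weights.sum(dim=-1)
    return {"rgb": rgb, "depth": depth, "opacity": opacity}
```

The `weights` tensor sums to at most 1 per ray, so `opacity` doubles as an accumulated alpha map and `depth` is the expected termination depth under those weights.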

### 3. `RFDataProcessor`

*   **Purpose:** Preprocesses various forms of RF data into a consistent feature vector format suitable as input for the `RFNeRFModel`. It can also handle spatial organization and interpolation of data.
*   **Key Features:**
    *   Processes raw IQ data (from SDRs) using FFT to extract features across specified frequency bands (e.g., power, variance).
    *   Processes WiFi Channel State Information (CSI) and RSSI values to extract features like magnitude, phase coherence, and channel quality.
    *   Can convert fMRI data into a pseudo-RF representation for visualization within the NeRF framework.
    *   Optionally applies Kalman filtering to smooth noisy position or signal data.
    *   Can interpolate sparse measurements onto a regular 3D grid using `scipy.interpolate.griddata`.
*   **Initialization Parameters:**
    *   `feature_dim`: The target dimension for the output RF feature vector.
    *   `use_kalman_filter`: Boolean flag to enable/disable Kalman filtering.
    *   `spatial_resolution`: Resolution of the 3D grid for interpolation.
    *   `frequency_bands`: List of frequency ranges (in GHz) to analyze in IQ data.
*   **Methods:**
    *   `process_iq_data(...)`: Extracts features from complex IQ samples.
    *   `process_wifi_csi(...)`: Extracts features from CSI matrices and RSSI.
    *   `create_rf_grid(...)`: Interpolates scattered position/feature data onto a regular grid.
    *   `convert_fmri_to_rf_features(...)`: Transforms fMRI voxel data into position/feature pairs.
    *   `apply_kalman_filter(...)`: Smooths position data using a Kalman filter (primarily for fMRI or tracking data).
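The FFT-based IQ feature extraction can be sketched as below. Function name, the `center_freq_hz` parameter (needed to map baseband bins onto absolute GHz bands), and the exact normalization are assumptions; only the band-power/variance feature pair follows the description above.

```python
import numpy as np

def process_iq_features(iq_samples, sample_rate_hz, frequency_bands_ghz,
                        feature_dim, center_freq_hz=0.0):
    """Extract per-band (power, variance) features from complex IQ samples."""
    # Centered spectrum and matching frequency axis, shifted to the carrier.
    spectrum = np.fft.fftshift(np.fft.fft(iq_samples))
    freqs = np.fft.fftshift(
        np.fft.fftfreq(len(iq_samples), d=1.0 / sample_rate_hz)
    ) + center_freq_hz

    features = []
    for lo_ghz, hi_ghz in frequency_bands_ghz:
        mask = (freqs >= lo_ghz * 1e9) & (freqs < hi_ghz * 1e9)
        band = np.abs(spectrum[mask])
        if band.size:
            features.extend([band.mean() ** 2, band.var()])  # power, variance
        else:
            features.extend([0.0, 0.0])  # band outside the captured bandwidth

    # Pad or truncate to the model's expected rf_feature_dim.
    feat = np.asarray(features, dtype=np.float32)
    out = np.zeros(feature_dim, dtype=np.float32)
    out[: min(feature_dim, feat.size)] = feat[:feature_dim]
    return out
```

Padding to a fixed `feature_dim` keeps the output compatible with whatever `rf_feature_dim` the model was initialized with, regardless of how many bands were requested.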

## Benchmarking (`if __name__ == "__main__":`)

The script includes a command-line interface for benchmarking performance:

*   **Usage:** `python nerf-rf-engine.py --benchmark [--cuda-python] [--output <filename>]`
*   `--benchmark`: Runs the performance tests.
*   `--cuda-python`: Benchmarks the CUDA-accelerated implementations (`cuda_rf_processor.py`, `cuda_nerf_renderer.py`) against the pure PyTorch implementation, provided those modules are available.
*   `--output`: Specifies the JSON file to save benchmark results (default: `benchmark_results.json`).
*   **Tests:** Measures initialization time, grid creation time (if applicable), and rendering time for both PyTorch and (optionally) CUDA Python implementations. Calculates speedups if CUDA Python is used.
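A timing harness in the spirit of the `--benchmark` mode might look like the following. This is a hypothetical sketch: the real script's measured stages and JSON schema may differ, and only the default output filename comes from the description above.

```python
import json
import time
import torch

def benchmark(fn, warmup=2, repeats=5):
    """Average wall-clock seconds per call, after warm-up iterations."""
    for _ in range(warmup):
        fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # CUDA kernels are asynchronous
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / repeats

# Placeholder workload standing in for a real render call.
results = {"render_s": benchmark(lambda: torch.rand(1024, 64, 3).sum())}
with open("benchmark_results.json", "w") as f:
    json.dump(results, f, indent=2)
```

The explicit `torch.cuda.synchronize()` calls matter when comparing against CUDA implementations: without them, `time.perf_counter()` would measure kernel launch time rather than execution time.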
