
AM/FM-Impostor Robustness

Near-Degenerate Mode Recovery: Δf vs SNR

  • Thesis: quantify when two close modes are separable as spacing shrinks and noise rises.
  • Figures: heatmap of true-hit rate (Δf×SNR); runtime contours; example spectra.
  • Hooks: delta_f_list, snr_db_list, run_sweep(), plot_slices(..., "dual_df_vs_snr.png"). swept_adversarial_grid
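As a sketch of the driver this sweep assumes (a toy separability proxy stands in for the real test_point_worker()/run_sweep() pair; axis values are illustrative):

```python
from itertools import product

import numpy as np

# Hypothetical sweep axes mirroring delta_f_list / snr_db_list from the source.
delta_f_list = [1.0, 2.0, 5.0, 10.0]   # mode spacing, Hz
snr_db_list = [0, 10, 20, 30]          # target SNR, dB

def toy_hit_rate(delta_f_hz, snr_db):
    """Stand-in for test_point_worker(): a smooth separability proxy,
    NOT the real fitter; recovery improves with spacing and SNR."""
    return 1.0 / (1.0 + np.exp(-(0.5 * delta_f_hz + 0.2 * snr_db - 4.0)))

# Build the Cartesian grid and fill the hit-rate surface (rows = Δf, cols = SNR).
hit = np.zeros((len(delta_f_list), len(snr_db_list)))
for (i, df), (j, snr) in product(enumerate(delta_f_list), enumerate(snr_db_list)):
    hit[i, j] = toy_hit_rate(df, snr)

print(hit.round(2))
```

The resulting matrix is exactly the shape the Δf×SNR heatmap consumes.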

AM-Impostor Robustness

  • Thesis: AM depth masquerades as extra modes; map false/ghost hits vs AM%.
  • Figures: AM%×SNR hit/ghost surfaces; confusion between true/ghost counts.
  • Hooks: am_depth_list, AM impostor wiring in synth_for_grid(), plot_slices(..., "dual_am_vs_snr.png"). swept_adversarial_grid

FM-Impostor Robustness

  • Thesis: FM deviation mimics drifting modes; failure frontier vs SNR.
  • Figures: FM_dev×SNR recovery maps; runtime overlays.
  • Hooks: fm_dev_list, FM impostor in synth_for_grid(), plot_slices(..., "dual_fm_vs_snr.png"). swept_adversarial_grid

Quality Factor vs Spacing: The Q–Δf Trade

  • Thesis: lower τ (Q proxy) increases spectral overlap; map recoverability vs Q and Δf.
  • Figures: Q×Δf hit-rate + runtime contours; example reconstructions.
  • Hooks: q_list, make_modes_for_deltaf(...), plot_slices(..., "dual_q_vs_df.png"). swept_adversarial_grid

Runtime–Robustness Pareto

  • Thesis: turn fit_time_ms into an ops cost; plot Pareto fronts (hit-rate vs ms).
  • Figures: Pareto scatter by scenario; iso-runtime contours on recovery heatmaps.
  • Hooks: test_point_worker(...) (captures fit_time_ms, total_ms), plot_slices() runtime layers. swept_adversarial_grid

Ghost-Hit Economics

  • Thesis: cost of ghost modes vs missed true modes under operational budgets.
  • Figures: iso-cost surfaces (λ ghost, μ miss) over Δf×SNR; decision regions.
  • Hooks: score_recovery(...) (ghost_hits, true_hits), sweep grids. swept_adversarial_grid
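A minimal cost model for this entry, with illustrative λ/μ weights (the ghost_hits/true_hits inputs are assumed to come from score_recovery(...)):

```python
def expected_cost(true_hits, ghost_hits, n_true, lam_ghost=1.0, mu_miss=5.0):
    """Operational cost of one recovery outcome: each ghost mode costs
    lam_ghost, each missed true mode costs mu_miss.
    Weights are illustrative, not from the source."""
    misses = max(n_true - true_hits, 0)
    return lam_ghost * ghost_hits + mu_miss * misses
```

Sweeping (lam_ghost, mu_miss) over the Δf×SNR grid yields the iso-cost surfaces and decision regions named above.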

Tolerance Setting: Frequency Error vs Hits

  • Thesis: optimize FREQ_TOLERANCE_HZ to maximize utility across scenarios.
  • Figures: utility vs tolerance; variance of mean_freq_err across grid.
  • Hooks: FREQ_TOLERANCE_HZ, score_recovery(...) (mean_freq_err). swept_adversarial_grid

Threading & Throughput: Scaling the Sweep

  • Thesis: characterize parallel speedups and saturation (ThreadPoolExecutor).
  • Figures: tasks/sec vs workers; wall-clock vs grid size; CPU utilization snapshots.
  • Hooks: n_workers, parallel path in run_sweep(parallel=True). swept_adversarial_grid

Adapter Interfaces that Don’t Break

  • Thesis: standardize RFMode fit adapters; evaluate placeholder vs real backends.
  • Figures: agreement (freq) between adapters; latency deltas; error taxonomy.
  • Hooks: rfmode_fit_adapter(...) shim contract & outputs ({"modes":...,"quality":...}). swept_adversarial_grid

Colored Noise & SNR Calibration

  • Thesis: calibrate the SNR mapping in synth_for_grid() so the requested “dB” matches the observed SNR.
  • Figures: measured SNR vs target; recovery vs color exponent; residual histograms.
  • Hooks: synth_for_grid(...) (colored_noise_alpha, noise_std, target SNR logic). swept_adversarial_grid
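One plausible reading of the colored_noise_alpha knob, plus the empirical check the calibration needs (both functions are illustrative reconstructions, not the source code):

```python
import numpy as np

def colored_noise(n, alpha, rng=None):
    """Gaussian noise shaped to a 1/f**alpha power spectrum (alpha=0 gives
    white noise). Illustrative reading of colored_noise_alpha."""
    rng = np.random.default_rng(rng)
    white = rng.standard_normal(n)
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                       # avoid dividing the DC bin by zero
    shaped = np.fft.irfft(np.fft.rfft(white) / f ** (alpha / 2.0), n)
    return shaped / shaped.std()      # unit variance keeps SNR targeting honest

def measured_snr_db(signal, noise):
    """Empirical SNR in dB: compare this against the requested target."""
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))
```

Plotting measured_snr_db against the requested target across alpha values is the first figure of this entry.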

AM/FM/Δf Interaction Effects

  • Thesis: 3-way interaction—where do AM and FM jointly collapse separability?
  • Figures: 3D slices or small multiples; significance bars per factor.
  • Hooks: joint grids over am_depth_list, fm_dev_list, delta_f_list, run_sweep(). swept_adversarial_grid

Result Artifacts for Reproducibility

  • Thesis: CSV+JSON as first-class artifacts enable audit-grade science.
  • Figures: pipeline diagram; provenance table from summary.json.
  • Hooks: sweep_reports/sweep_results.csv, sweep_reports/sweep_summary.json writing in run_sweep(). swept_adversarial_grid

Visualization Patterns for Robustness Slices

  • Thesis: dual-panel visual grammar (outcome + runtime) communicates tradeoffs best.
  • Figures: side-by-side panels; runtime contour variants; colormap studies.
  • Hooks: plot_slices() with create_dual_plot(...) and heatmap_slice(...). swept_adversarial_grid

Minimal Grid vs Active Selection

  • Thesis: show that a smarter sampler (replacing the Cartesian product grid) achieves the same insights with fewer runs.
  • Figures: sample-efficiency curves; area-under-robustness vs evaluations.
  • Hooks: baseline is your Cartesian grid via product(...); compare to iterative policies using the same test_point_worker(...) API. swept_adversarial_grid
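The baseline and a budget-halving sampler share one worker API, as the entry suggests; worker is a toy stand-in for test_point_worker(...):

```python
import random
from itertools import product

def worker(point):
    """Stand-in for test_point_worker(point): returns a scalar outcome."""
    df, snr = point
    return float(df * snr)

full_grid = list(product([1, 2, 5, 10], [0, 5, 10, 20]))   # 16 runs

# Baseline: exhaustive Cartesian sweep.
full_mean = sum(worker(p) for p in full_grid) / len(full_grid)

# A "smarter sampler" can be as simple as a seeded subsample reusing the same
# worker API; real policies (Latin hypercube, Bayesian optimization) plug in
# here without touching the worker.
random.seed(7)
subset = random.sample(full_grid, 8)                       # half the budget
sub_mean = sum(worker(p) for p in subset) / len(subset)
```

The sample-efficiency curve is then estimate quality vs. number of worker calls.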

MWFL Peak-Pattern Detectors Under Real-World Noise Floors — Validate the stubbed kW-class multi-wave free-electron-laser (MWFL) detector against synthetic spectra; sensitivity/specificity vs threshold and bin width. Figures: ROC/PR vs threshold_db; sideband count vs SNR; spacing-consistency ablation. Hooks: detect_kW_laser_signature() (spacing logic, sidebands). spatial_mwfl_harness

Restorative FFT Paths: When “Mock Tensors” Are Enough — Compare NumPy-only vs Torch-backed pipelines on the same spectra; quantify any latency/accuracy drift from the torch fallback. Figures: p50/p95 latency per path; agreement heatmap. Hooks: torch fallback stub + restored_fft creation. spatial_mwfl_harness

Latent Aggregation for Eventizing Spectra — Formalize how LatentAggregator.observe_spectrum() converts FFT bins into alert messages and buffers provenance for later audit. Figures: end-to-end timing (ingest→alert); buffer growth over load; alert completeness table. Hooks: LatentAggregator.observe_spectrum, buffer schema. spatial_mwfl_harness

Spatially Enhanced Alerts in ≤500 ms — Measure pub/sub round-trip with the mock network and the stubbed SpatialReasoningBridge; SLA hit-rates across 0.5 s budget. Figures: CDF of end-to-end latencies; timeout miss-rate; topic fan-out scaling. Hooks: MockCommNetwork, alert_event.wait(timeout=0.5). spatial_mwfl_harness
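The SLA gate reduces to a threading.Event round-trip; this sketch keeps only the alert_event.wait(timeout=0.5) contract from the harness and mocks everything else:

```python
import threading
import time

alert_event = threading.Event()
received = {}

def subscriber():
    """Mock subscriber: blocks up to the 0.5 s SLA budget, mirroring
    alert_event.wait(timeout=0.5) from the harness."""
    if alert_event.wait(timeout=0.5):
        received["latency_s"] = time.perf_counter() - received["t0"]

def publisher():
    time.sleep(0.05)          # simulated enrich + publish delay
    alert_event.set()

received["t0"] = time.perf_counter()
threads = [threading.Thread(target=subscriber), threading.Thread(target=publisher)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Repeating this under load and collecting latency_s gives the end-to-end CDF and timeout miss-rate.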

From Peaks to Places: Minimal Spatial Enrichment That Works — Show how even a stubbed bridge materially improves actionability by adding origin, altitude, path model, and propagation delay. Figures: decision utility vs “with/without enrichment”; confidence calibration. Hooks: stub SpatialReasoningBridge.enrich_alert() fields: predicted_origin, spatial_confidence, propagation_delay_ms. spatial_mwfl_harness

Environmental Soundings → Path Reasoning — In the full bridge, inject meteo soundings to steer ray tracing and reasoning thresholds; show when enriched paths cross the publish threshold. Figures: path-candidate counts vs sounding quality; threshold sweeps. Hooks: update_env_sounding(), trace_paths(...), reasoning_threshold. spatial_reasoning_bridge

Action-Reasoning over RF: Movement Vectors and Deception Flags — Treat spatial reasoning as a classifier over path candidates + metadata; assess movement_hypothesis accuracy and possible_deception recall. Figures: vector-angle error; deception ROC; confusion matrix by alert_type. Hooks: SpatialReasoningModel.reason_about_signal(...), movement_hypothesis, deception_flag. spatial_reasoning_bridge

MWFL Taxonomy in Synthetic Worlds — “Standard / narrow / wide / complex” patterns: which produce the most reliable enrichments and the fewest false routes? Figures: per-type detection yield; spacing histogram stability; enriched-alert confidence. Hooks: pattern set in patterns[...] + generator generate_synthetic_mwfl_fft(...). spatial_mwfl_harness

Echo-Rich Bursts & Ringdown Mode Recovery (Blade Validation) — Ground-truth damped sinusoids vs recovered modes; %-error by freq/τ/amp with pass/fail badges. Figures: GT vs recovered tables; error CDFs; runtime vs modes. Hooks: generate_echo_rich_burst(...), RFModeFitter integration. spatial_mwfl_harness

Ghost-Mode Resilience Under Adversarial Scenarios — Near-degenerate triplets, phase-locked pairs, AM/FM confusers, heavy-tail decays; report true-hit vs ghost-hit rates. Figures: ghost-hit rate vs scenario; recovery-time bars; robustness radar. Hooks: run_adversarial_case(...) harness + adapter. spatial_mwfl_harness

Ray-Trace-Guided Publishing: When to Speak, When to Hold — Only publish when reasoning_confidence ≥ threshold; quantify missed threats vs false dispatches as reasoning_threshold sweeps. Figures: cost curves; precision/recall vs threshold. Hooks: reasoning_threshold, enriched publish gate. spatial_reasoning_bridge

Multipath & Orbital Mimic Triage — Fuse matched_peaks, tentative source_guess, and multipath_score into a triage score; benchmark “orbital mimic” vs terrestrial glare. Figures: triage ROC; case studies with predicted paths. Hooks: SRB feature vector build (spatial_features{...}). spatial_reasoning_bridge

Pub/Sub Patterns for RF Situational Awareness — Minimal message schemas and topic topologies that survive adversarial load; back-pressure behavior. Figures: throughput vs subscribers; message-loss heatmap under bursty MWFL tests. Hooks: subscribe()/publish() implementation and capture. spatial_mwfl_harness

Confidence Fabrics: Aligning Detector Confidence with Spatial Confidence — Calibrate mwfl_hit['confidence'] vs spatial_confidence to build a fused risk score. Figures: reliability diagrams; fused-score ROC; net-benefit decision curves. Hooks: detector confidence + enrichment confidence fields. spatial_mwfl_harness

SLA-First Design: Meeting Sub-Second Budgets without GPUs — Show that the stub path (10–50 ms enrichment delay) meets near-real-time ops; where GPUs bend curves. Figures: budget pie (ingest→detect→enrich→publish); jitter histogram. Hooks: enforced sleep in enrichment + measured elapsed times. spatial_mwfl_harness

Interfaces that Don’t Lie: Buffer Introspection and Post-Incident Forensics — Treat the aggregator/bridge buffers as an audit trail; propose invariants and serialization for after-action reviews. Figures: schema diagrams; integrity checks; replay timings. Hooks: LatentAggregator.buffer, SpatialReasoningBridge.buffer, get_spatial_summary(). spatial_mwfl_harness, spatial_reasoning_bridge

A) Foundations & Representations

  1. Bio-Inspired RF Memory for Weak-Signal Recall — Describe the K9SignalProcessor architecture, feature vectorization, cosine similarity, and persistence/forgetting dynamics; analyze how SignalMemory.persistence and _clean_memory gate long-tail recall. Figures: PR/ROC vs memory size; time-to-match vs feature_dim; decay curves under different persistence. Hooks: extract_features, _calculate_similarity, _clean_memory. k9_signal_processor
  2. Quantum-Spin State Modeling of RF Spectra — Formalize your spin-based state representation and density matrix pipeline; compare qubit (Pauli) vs qudit (generalized Gell-Mann) regimes from _generate_gell_mann_matrices. Figures: purity vs SNR; coherence (ℓ1) vs bandwidth; qubit/qudit confusion plots. Hooks: process_signal, _calculate_density_matrix, _generate_gell_mann_matrices. quantum_spin_processor
  3. Tomography for RF: Bloch-Vector Maps as Diagnostics — Use Stokes parameters and Bloch vectors from _perform_quantum_tomography to derive interpretable health metrics for live RF scenes. Figures: Bloch-sphere scatter by class; purity histograms; Stokes parameter stability vs noise. Hooks: _perform_quantum_tomography. quantum_spin_processor
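To make the memory dynamics of entry 1 concrete, here is a toy reconstruction (TinySignalMemory is illustrative, not the source class; only the cosine kernel and the persistence/floor pruning mirror _calculate_similarity / _clean_memory):

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-12):
    """Sketch of a _calculate_similarity-style kernel over feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

class TinySignalMemory:
    """Minimal persistence/forgetting sketch: entries decay each sweep and
    are pruned below a floor, gating long-tail recall."""
    def __init__(self, persistence=0.9, floor=0.1):
        self.persistence, self.floor = persistence, floor
        self.entries = []          # list of (feature_vector, strength)

    def remember(self, vec):
        self.entries.append((np.asarray(vec, float), 1.0))

    def clean(self):
        """Decay all strengths, then drop anything under the floor."""
        self.entries = [(v, s * self.persistence) for v, s in self.entries
                        if s * self.persistence >= self.floor]

    def best_match(self, query):
        if not self.entries:
            return None, 0.0
        scores = [cosine_similarity(query, v) * s for v, s in self.entries]
        k = int(np.argmax(scores))
        return k, scores[k]
```

The decay-curve figures fall out of calling clean() in a loop and tracking recall.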

B) Quantum Enhancements & Fusion

  1. Superposition & Coherence as Signal Quality Priors — Quantify how superposition_score and quantum_coherence improve classification/confidence on weak or overlapping emitters. Figures: AUROC/TTFB under interference; calibration curves pre/post quantum prior. Hooks: _detect_superposition, _calculate_coherence. quantum_spin_processor
  2. Cross-Frequency Entanglement Cues for Multi-Emitter Scenes — Turn _analyze_entanglement into a detector for coordinated emitters; study entangled_frequencies and entanglement_strength. Figures: hit-rate vs entanglement_sensitivity; confusion under frequency overlap (Jaccard). Hooks: _analyze_entanglement, _calculate_frequency_correlation. quantum_spin_processor
  3. Interference Cartography via Quantum Formalism — Treat _analyze_interference as a structured oscillation detector; benchmark vs classical second-derivative heuristics. Figures: interference_strength heatmaps; ΔF1 vs classical. Hooks: _analyze_interference. quantum_spin_processor
  4. Quantum-Classical Late Fusion for RF SCYTHE — Evaluate integrate_with_k9_processor end-to-end: signal_complexity, detection_confidence, anomaly_score, and quantum processing gain estimator. Figures: p50/p95 confidence lift; “gain in dB” violin; error vs complexity. Hooks: integrate_with_k9_processor, _estimate_quantum_processing_gain. quantum_spin_processor

C) Spatial Intelligence & Field Ops

  1. Spatial Entanglement Graphs in the Wild — Use QuantumCelestialK9 to build entanglement links across location grids; test thresholds and temporal stability. Figures: geo-graph of entanglement links; link persistence CDF; false-link ablations. Hooks: _add_quantum_spatial_information, _detect_spatial_entanglement, spatial_entanglement_map. quantum_celestial_k9
  2. Grid Resolution vs Detection Yield — Trade-off study of location_grid_resolution on link density and false matches across motion profiles. Figures: yield vs resolution; compute/time vs resolution; precision-recall by grid size. Hooks: quantum_location_map, config location_grid_resolution. quantum_celestial_k9
  3. Ops Metrics Under Real-Time Constraints — Characterize QuantumCelestialK9’s thread loop latency, processing_time EMA, and throughput vs signal load. Figures: TTFB throughput curves; CPU/GPU utilization; stall/miss histograms. Hooks: start()/_processing_loop, metrics. quantum_celestial_k9

D) Robustness, Anomalies & Security

  1. Anomaly Scoring with Coherence-Purity Mismatch — Validate _calculate_quantum_anomaly_score for spotting spoofed/“too-clean” emitters; quantify red-team evasions. Figures: attack success rate vs anomaly threshold; SHAP of anomaly features. Hooks: _calculate_quantum_anomaly_score. quantum_spin_processor
  2. Adversarial Interference vs Quantum Interference — Stress _analyze_interference against adversarial periodic jamming; measure separability under phase randomization. Figures: adversarial gap; PSD overlays with phase controls. Hooks: _analyze_interference. quantum_spin_processor
  3. Memory Poisoning & Confuser Signals — Study how poisoned SignalMemory affects recall; mitigation via entropy/flatness guards in extract_features. Figures: precision under poisoning rate; memory-pruning strategies. Hooks: SignalMemory, extract_features. k9_signal_processor

E) Feature Engineering & Retrieval

  1. From FFT Stats to Field Wins — Systematic ablation over extract_features: spectral entropy, skew/kurtosis, centroid/spread, and down-sampled magnitudes. Figures: per-feature Shapley bars; accuracy/latency vs feature_dim. Hooks: extract_features. k9_signal_processor
  2. Similarity Kernels for K9 Memory — Replace cosine with alternatives; show effects on near-duplicate recall and tail generalization. Figures: PR curves by kernel; latency vs kernel. Hooks: _calculate_similarity. k9_signal_processor

F) Systems Integration & Engineering Notes

  1. Interface Drift in Quantum-Classical Pipelines — Case study of constructor/API mismatch (e.g., passing use_gpu/sensitivity into K9SignalProcessor from QuantumCelestialK9); propose interface contracts and CI checks. Figures: failure modes; contract tests; compile/run matrix. Hooks: QuantumCelestialK9.__init__, K9SignalProcessor.__init__. quantum_celestial_k9, k9_signal_processor
  2. Threshold Economics: Entanglement & Coherence — Grid search on entanglement_threshold and coherence_threshold vs miss/false-link cost; present operating characteristic surfaces. Figures: iso-cost surfaces; Pareto fronts. Hooks: config entanglement_threshold (QC-K9), coherence_threshold (Q-spin). quantum_celestial_k9, quantum_spin_processor
  3. End-to-End Demo: Quantum-Enhanced Celestial Tracking — Holistic benchmark combining K9 features, quantum fusion, and spatial links; report signals_processed, quantum_enhanced_detections, entangled_signal_pairs. Figures: demo timeline; KPI dashboard; map overlay. Hooks: get_metrics(), get_quantum_spatial_map(). quantum_celestial_k9

Majority vs Weighted vs Stacked Voting in RF Modulation Ensembles — Ablate voting_method ∈ {majority, weighted, stacked}; figs: accuracy/TTFB vs #models; vote entropy vs error; misvote waterfall. Hooks: classify_signal() vote paths. ensemble_ml_classifier

Spectral vs Temporal vs Hybrid Inputs — Compare _create_spectral_input (FFT→256) vs _create_temporal_input (seq=128, I/Q) vs _create_transformer_input (fusion); figs: AUROC per path; aliasing stress sweep. ensemble_ml_classifier

Transformer Feature-Fusion for IQ+FFT — Show gains from per-timestep spectral repetition concatenated to temporal features; figs: ablation on fusion width; latency vs dim. Hooks: _create_transformer_input. ensemble_ml_classifier

Deep + Classical Co-Training (RF/SVM/GBM/KNN) Under Scarce Labels — Enable/disable use_traditional_ml; figs: sample-efficiency curves; OOD drift. Hooks: _extract_features, _classify_with_traditional_ml, scaler. ensemble_ml_classifier

Checkpoint/Metadata Mismatch Tolerance — Robustness when .pt classes ≠ runtime classes; figs: accuracy vs class-map divergence; recovery time. Hooks: class_mapping from *_metadata.json, load_from_checkpoint() fallback. ensemble_ml_classifier

Fallback Paths: Hierarchical → Frequency-Based Rescue — When the parent super().classify_signal() fails, the ensemble drops to SignalProcessor frequency-based classification; figs: failure modes and rescue rate. Hooks: exception branch in classify_signal(). ensemble_ml_classifier

Short-Signal Resilience — Behavior and thresholds when len(iq_data) < 32; figs: accuracy/coverage vs length; early-quit policy. Hooks: length check + padding strategy. ensemble_ml_classifier

Resampling Effects (FFT→256; Seq→128) — Quantify downsample/interp distortion; figs: PSD divergence (KL), task accuracy vs target sizes. Hooks: _create_spectral_input, _create_temporal_input. ensemble_ml_classifier

Confidence Calibration for Weighted Voting — Post-softmax calibration and its impact on ensemble weighting; figs: ECE/MCE; utility vs miscalibration. Hooks: probability paths in classify_signal(). ensemble_ml_classifier
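The calibration metric itself is standard; a compact ECE sketch to score the post-softmax probabilities feeding the weighted vote:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, compare mean confidence to
    empirical accuracy per bin, weight by bin mass."""
    confidences = np.asarray(confidences, float)
    correct = np.asarray(correct, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(confidences[mask].mean() - correct[mask].mean())
    return float(ece)
```

Running this before and after temperature scaling gives the ECE/MCE comparison the figures call for.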

Open-Set Handling (“Unknown” as a First-Class Outcome) — Thresholding and abstention strategies; figs: OSCR; AU-PR for unknowns. Hooks: default “Unknown” mapping & thresholds. ensemble_ml_classifier

Hierarchical vs Flat Ensembles — When does the parent HierarchicalMLClassifier beat flat ensembling? Figs: per-class wins; confusion deltas. Hooks: super().classify_signal() vs ensemble block. ensemble_ml_classifier

Explainability from Vote Traces — Turn signal.metadata["ensemble_*"] into audit trails; figs: vote timelines; Shapley-like vote contributions. Hooks: metadata writes in classify_signal(). ensemble_ml_classifier

NaN/Padding/Interpolation Robustness — Quantify impact of np.nan_to_num, zero-padding, linear interp; figs: error vs corruption ratio; latency. Hooks: input sanitation in temporal/spectral builders. ensemble_ml_classifier

AM/FM Handcrafted Features vs Learned Features — Value of am_mod_index, fm_deviation, spectral kurtosis/skewness; figs: SHAP on classical stack; feature ablation. Hooks: _extract_features. ensemble_ml_classifier

Ensemble Size vs Latency/Energy on CPU/GPU — Trade-off by toggling ensemble_models set; figs: p50/p99 latency vs models; energy/op (J/inference). Hooks: model.to(self.device), per-model loops. ensemble_ml_classifier

Specialized Models per Modulation Family — Route subsets to SpectralCNN, SignalLSTM, ResNetRF, SignalTransformer; figs: specialization gain vs generalists. Hooks: per-model inputs & predictions. ensemble_ml_classifier

Stacked Meta-Learner Blueprint — Implement the “not yet” path for stacked with logistic-regression/GBM meta on model logits; figs: overfit risk vs cross-val. Hooks: voting_method == "stacked" branch. ensemble_ml_classifier

IQ Length Normalization Policies — Evenly spaced downsampling vs windowed pooling vs STRIDED selection; figs: aliasing vs accuracy; window sensitivity. Hooks: _create_temporal_input index selection. ensemble_ml_classifier
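Two of the candidate policies in a few lines each (illustrative reconstructions; only the evenly spaced variant is claimed to resemble _create_temporal_input's index selection):

```python
import numpy as np

def evenly_spaced(iq, target=128):
    """Evenly spaced index selection: cheap, but can alias."""
    idx = np.linspace(0, len(iq) - 1, target).round().astype(int)
    return iq[idx]

def windowed_pool(iq, target=128):
    """Alternative policy: mean-pool contiguous windows, trading temporal
    sharpness for anti-aliasing. Assumes len(iq) >= target."""
    edges = np.linspace(0, len(iq), target + 1).astype(int)
    return np.array([iq[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])
```

The aliasing-vs-accuracy figure is the same classifier run over both policies at several input lengths.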

soft_triangulator.py + soft_triangulator_enhanced.py

  1. Differentiable AoA Soft-Triangulation as a Learning Layer — Treat expected-angle rays + pairwise intersections as an end-to-end differentiable layer; quantify localization error vs beam count K and temperature τ. Figures: MAE vs K; bias/variance vs τ. Hooks: SoftTriangulator.forward() (softmax→E[θ]→ray intersections). soft_triangulator
  2. Temperature Annealing for Beam Posteriors — Show how τ in softmax trades exploration vs peaky beams; derive a training schedule for low-SNR scenes. Figures: calibration curves of peak prob; MAE vs τ schedule. Hooks: temp parameter. soft_triangulator
  3. Confidence-Weighted Ray Intersection — Evaluate peak-prob × sensor-weight × skew-penalty scheme; ablate each term and the skew σ≈200 m. Figures: ΔMAE vs {no weights, no skew, full}; λ-sweep on skew penalty. Hooks: EnhancedSoftTriangulator weights & skew_penalty. soft_triangulator_enhanced
  4. Robust Triangulation via MAD Gating — Quantify outlier rejection using median-absolute-deviation with threshold γ; failure modes in near-parallel rays. Figures: inlier ratio vs γ; tail-MAE. Hooks: robust_threshold + MAD logic. soft_triangulator_enhanced
  5. Uncertainty Ellipses from Weighted Covariance — Extract eigen-ellipse (95% scale factor 5.991) and validate against empirical error; compare to CRLB. Figures: ellipse vs Monte-Carlo; calibration slope. Hooks: weighted covariance → uncertainty=[major,minor,angle]. soft_triangulator_enhanced
  6. Hybrid AoA+TDoA Refinement with Huber Loss — Start with AoA soft-triangulation, then gradient-descent refine on TDoA with Huber(δ); report convergence traces and residuals. Figures: loss/time; steps-to-ε; residual CDF. Hooks: HybridTriangulator.forward() loop, _huber_loss, position_steps. soft_triangulator_enhanced
  7. TDoA Sigma & Weighting: From Physics to Loss Scaling — Sensitivity of refinement to sigma_s and pair weights w; show robustness to clock-bias mis-specification. Figures: residual vs σ mis-cal; ablation of pair selection. Hooks: tdoa_pairs={i,j,tdoa_s,sigma_s,w}; sensor_clock_bias. soft_triangulator_enhanced
  8. Geometry Design for Sensor Layouts — Optimize sensor XY geometry to minimize ellipse major-axis (D-optimal/A-optimal criteria) under your soft triangulator. Figures: ellipse axes vs baseline grids (square/tri/irregular). Hooks: sensor_xy as design variable; pairwise intersections. soft_triangulator, soft_triangulator_enhanced
  9. Near-Parallel Ray Regularization — Analyze the numerical fallback (+1e-6 I) for singular A; quantify stability and error under shallow intersection angles. Figures: condition number vs angle; MAE with/without regularization. Hooks: torch.linalg.solve + regularized branch. soft_triangulator_enhanced
  10. Clamp Policies & Map Priors — Study max_range clamping artefacts and propose soft-prior walls (penalties) instead of hard clamps. Figures: boundary-bias map; clamp vs prior loss. Hooks: .clamp(-max_range,max_range). soft_triangulator, soft_triangulator_enhanced
  11. Angle-Bin Design: Resolution vs Computation — Choose angle_bins (uniform vs learned, K-sweep); show latency/accuracy tradeoffs. Figures: MAE vs K; FPS vs K. Hooks: angle_bins buffer; softmax over K. soft_triangulator
  12. End-to-End Beamformer Training with Triangulation Loss — Backprop a geodesic loss on pos_xy into beam logits; show uplifts vs CE-only training. Figures: learning curves; generalization under SNR decay. Hooks: differentiable path beam_logits → pos_xy. soft_triangulator, soft_triangulator_enhanced
  13. Hybrid vs AoA-Only: When Does TDoA Pay Off? — Operating regions where the hybrid wins (pair count, σ_tdoa, baseline SNR); guidance for field kits. Figures: heatmaps over {P, σ}; Pareto fronts (MAE vs compute). Hooks: HybridTriangulator vs EnhancedSoftTriangulator. soft_triangulator_enhanced
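A NumPy re-sketch of the differentiable core these entries share (softmax → E[θ] → least-squares ray intersection, with the +1e-6·I near-parallel fallback from item 9); the torch version is analogous:

```python
import numpy as np

def softmax(z, temp=1.0):
    z = np.asarray(z, float) / temp
    e = np.exp(z - z.max())
    return e / e.sum()

def expected_angle(beam_logits, angle_bins, temp=1.0):
    """The softmax -> E[theta] step of SoftTriangulator.forward()."""
    return float(softmax(beam_logits, temp) @ np.asarray(angle_bins, float))

def intersect_rays(sensors_xy, angles, reg=1e-6):
    """Least-squares point closest to all rays; the +reg*I term mirrors the
    regularized fallback for near-parallel geometry."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, th in zip(np.asarray(sensors_xy, float), angles):
        d = np.array([np.cos(th), np.sin(th)])
        N = np.eye(2) - np.outer(d, d)      # projector orthogonal to the ray
        A += N
        b += N @ p
    return np.linalg.solve(A + reg * np.eye(2), b)
```

Sweeping temp here reproduces the exploration-vs-peaky trade of entry 2; sweeping geometry reproduces entry 8.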

signal_exemplar_matcher.py + simple_exemplar_search.py

  1. Tri-Modal Exemplar Fusion (Spectrum ⨉ Motion ⨉ Geo)
    • Thesis: Toggle/weight spectrum, DOMA motion, and geo to map the best operating points.
    • Figures: mAP/Recall@K vs {use_spectrum,use_doma,use_geo}; ablate per-modality failure cases.
    • Hooks: SignalExemplarMatcher(..., use_doma=True, use_spectrum=True, use_geo=True), _extract_feature_vector() concatenates [compressed_spectrum[:128], vx,vy,vz, x,y]. signal_exemplar_matcher
  2. Cosine vs Euclidean for RF Exemplars
    • Thesis: When does cosine win over ℓ2 for mixed-scale features?
    • Figures: top-K precision vs SNR; rank-stability under feature rescaling.
    • Hooks: find_similar_signals(..., similarity_metric="cosine"|"euclidean"). signal_exemplar_matcher
  3. Padding, Truncation, and Leakage: 128-Bin Fingerprints
    • Thesis: Quantify accuracy loss from truncating compressed_spectrum to 128 dims; propose multi-resolution tiling.
    • Figures: accuracy/latency vs spectrum length; confusion matrices under aliasing.
    • Hooks: _extract_feature_vector() → spectrum[:128]. signal_exemplar_matcher
  4. Zero-Fill Fallbacks Under Missing Metadata
    • Thesis: Robustness when motion/geo absent; propose mask bits & learned imputation.
    • Figures: degradation vs missing-field rate; recovery with masks.
    • Hooks: zeros for missing motion_prediction and last_position. signal_exemplar_matcher
  5. From Hybrid Sweeps to Exemplars: A Normalization Pipeline
    • Thesis: Turn sweep result blobs into stable exemplar vectors; compare flat vs nested params records.
    • Figures: pre/post normalization variance; downstream retrieval uplift.
    • Hooks: normalize_result_record(), extract_feature_vector() over snr_db, delta_f_hz, q_ms, am_depth_pct, fm_dev_hz, hit_ratio, runtime. simple_exemplar_search
  6. Runtime-Aware Retrieval: Speed as a First-Class Signal
    • Thesis: Use normalized runtime/fit_time_ms to bias toward fast, deployable matches.
    • Figures: ROC with/without runtime term; Pareto (quality vs latency).
    • Hooks: extract_feature_vector() includes min(1.0, runtime/100.0). simple_exemplar_search
  7. Open-Set Querying via Perturbed Exemplars
    • Thesis: Evaluate generalization with create_demo_query() (random base + noise); report hit-rate under distribution shift.
    • Figures: success vs perturbation magnitude; calibration drift.
    • Hooks: create_demo_query(results) mutates SNR/Δf/Q before search. simple_exemplar_search
  8. Hit-Ratio as a Supervisory Prior
    • Thesis: Treat true_hits / n_recovered as a weak label; rank exemplars by operational value, not just similarity.
    • Figures: utility-weighted precision; business metric: “alerts avoided per 100 scans.”
    • Hooks: extract_feature_vector() computes hit_ratio. simple_exemplar_search
  9. Indexless at Scale: When Can You Skip FAISS?
    • Thesis: Complexity and recall of pure NumPy/Sklearn similarity vs ANN; break-even exemplar counts.
    • Figures: latency vs N; recall@K vs N under both indexless (cosine_similarity) and ANN.
    • Hooks: sklearn.metrics.pairwise.cosine_similarity path; linear scan in find_similar_signals. signal_exemplar_matcher

SEQ-GPT files:

  1. Natural-Language RF Search: Embeddings, Relations, Retrieval
    • Thesis: Benchmark the NL→RF retrieval path (MiniLM vs fallback BoW) and the regex-based structuring in SpatialQuery.from_natural_language.
    • Figures: Recall@K vs query class; ablate embedding on/off; parse success vs template complexity.
    • Hooks: SpatialQuery.from_natural_language(...), SEQGPTMatcher._embed_text(...). seq_gpt_matcher
  2. Spatial Relation Math for RF: Haversine, Cardinality, and Motion
    • Thesis: Quantify how relation scoring improves search (near/far, north_of/…, moving_toward/away).
    • Figures: score lift vs baseline cosine match; error vs distance; heading-alignment ROC.
    • Hooks: _haversine_distance, _cardinal_direction_score, _motion_relationship_score. seq_gpt_matcher
  3. Multi-View Feature Fusion: Spectrum ⨉ Motion ⨉ Position
    • Thesis: Tune weights for fusion and show robustness to missing views.
    • Figures: mAP vs {spectrum,location,motion,metadata} weights; cold-start (no spectrum) stress.
    • Hooks/Config: SignalExemplar.to_feature_vector(...) and matcher.default_weights in seq_gpt_config.json. seq_gpt_matcher, seq_gpt_config
  4. Query-Latency at the Edge: Uvicorn Workers, Payload Size, and Top-K
    • Thesis: Model elapsed_time_ms from /query vs workers, top_k, exemplar count.
    • Figures: p50/p95 latency vs workers; CPU/RAM vs exemplar cardinality; Top-K scaling curves.
    • Hooks: /query response shape in seq_gpt_api.py; server knobs in seq_gpt_config.json. seq_gpt_api, seq_gpt_config
  5. Exemplar Lifecycle & Persistence Economics
    • Thesis: Autosave/restore throughput, corruption tolerance, and max-exemplar caps; demonstrate auto-creation flow from the RF pipeline.
    • Figures: write-amp vs backup interval; cold-start time vs DB size.
    • Hooks: /exemplars add/list/get/delete, /save, /load in API; integrate_with_signal_intelligence(...) for auto exemplar creation. seq_gpt_api, seq_gpt_client
  6. Map-First OSINT: Geoviz of RF Exemplars
    • Thesis: Basemap-driven operational map with power colormap; evaluate operator accuracy/triage speed.
    • Figures: geo-heat overlays; operator study—time-to-triage vs tabular UI.
    • Hooks: visualize_map() (Basemap; normalized power), visualize_spectrum(). seq_gpt_visualizer
  7. Dialogue-Scheduled Refinement for RF Search
    • Thesis: Port the “dialogue-scheduled search refinement” concept to RF tasks; show fewer iterations to target.
    • Figures: steps-to-success vs vanilla one-shot; refinement trajectories.
    • Hooks/Spec basis: README’s capability list and SEQ-GPT alignment notes. SEQ_GPT_README
  8. API/Client Contract Drift: A Case Study in Observability Debt
    • Thesis: Document and fix drift between tools expecting /health & /queries and the actual API.
    • Figures: failure matrix; before/after SLOs.
    • Evidence: seq_gpt_dashboard.sh checks /health & /queries; seq_gpt_visualizer.py calls /queries; seq_gpt_api.py exposes /, /query, /exemplars, /save, /load (no /health or /queries).
    • Hooks: seq_gpt_dashboard.sh, seq_gpt_visualizer.py, seq_gpt_api.py. seq_gpt_dashboard, seq_gpt_visualizer, seq_gpt_api
  9. Query-History Mining for Intent & Drift
    • Thesis: Use query_history to cluster intents and detect seasonality/drift; propose active-learning prompts.
    • Figures: query clusters; score distributions over time.
    • Hooks: SEQGPTMatcher.query() appends to self.query_history. seq_gpt_matcher
  10. Client-Side Integration: Wiring SEQ-GPT into a Live RF SOC
    • Thesis: Show the callback path that turns processed signals into exemplars; measure duplicate control and database bloat.
    • Figures: exemplar growth vs dedupe; wall-clock ingestion throughput.
    • Hooks: integrate_with_signal_intelligence(...) (auto-exemplar creation & NL query shim). seq_gpt_client
  11. Dashboarding NL-RF Ops
    • Thesis: Terminal + HTML dashboards for ops; compare operator comprehension from visualize_queries() charts vs CLI dashboard.
    • Figures: score-timeline bars; CLI resource probe vs Uvicorn workers.
    • Hooks: SEQGPTVisualizer.visualize_* and seq_gpt_dashboard.sh system probes. seq_gpt_visualizerseq_gpt_dashboard

ringdown_rf_modes.py

  1. Quasinormal RF: Damped-Sinusoid Decomposition for Multipath — Show that burst windows decompose into a small set of ringdown modes (direct, ducted, reflected). Figures: mode count vs SNR; τ–f scatter; reconstruction SNR. Hooks: RFModeFitter.fit/select_model, _mode_func. ringdown_rf_modes
  2. Ghost-Mode Immunity via BIC + Cross-Validation — Quantify false-modes under AM/FM impostors; ablate use_bic, cross_validate, and min_freq_separation. Figures: ghost-rate vs separation (Hz); BIC ladders; persistence curves. Hooks: fit_modes(..., use_bic=True, cross_validate=True, min_freq_separation=…). ringdown_rf_modes
  3. SNR Ladders vs True Path Count — Recover “evidence ladders” and correlate best-k SNR with ground-truth path multiplicity. Figures: SNR-ladder heatmaps; ΔSNR per added mode. Hooks: select_model() returns snr_ladder. ringdown_rf_modes
  4. Ringdown-from-FFT: When Time-Domain Isn’t Stored — Benchmark fit_ringdown_from_spectrum() for streaming receivers that only expose magnitude bins. Figures: fidelity gap (time vs FFT); fit_time_ms distributions. Hooks: fit_ringdown_from_spectrum(fft_bins, fs, n_modes_max). ringdown_rf_modes
  5. Minimum-Spacing Theorem for RF Modes — Empirically derive safe min_freq_separation as a function of fs, window, and SNR to suppress near-degenerate ghosts. Figures: false-positive surface over (SNR, Δf). Hooks: fit_modes(..., min_freq_separation=…). ringdown_rf_modes
  6. Latency–Accuracy Frontier for Field Ops — Trace fit_time_ms vs reconstruction SNR to define deployable mode budgets per hardware class. Figures: Pareto frontier; p50/p95 latency bars. Hooks: quality block from fit_ringdown_from_spectrum(). ringdown_rf_modes
  7. Impostor Triage: AM/FM Artifacts vs True Ringdowns — Stress-test “resistance to AM/FM impostor artifacts” from the module docstring; quantify miss/false alarms. Figures: confusion grids; τ-stability under modulation. Hooks: fit_modes() with/without use_bic. ringdown_rf_modes
  8. Tau Tomography for Environment Classification — Use τ-distributions to label ducts, waveguides, or urban canyons; connect τ-bands to geography. Figures: τ histograms per site; KL divergence between locales. Hooks: fit_modes() returns per-mode τ. ringdown_rf_modes
  9. End-to-End: RTL-SDR → Ringdown → FCC — Add mode features (mode_count, τ_mean, f_spread) to your violation detector; measure PPV uplift on “too-clean” carriers. Figures: PPV at fixed FAR; ROC with/without ringdown. Hooks: call fit_ringdown_from_spectrum() inside your SDR stream loop. ringdown_rf_modes
  10. Quantum × Ringdown Fusion — Feed mode purity (few stable τ) as a prior into your QuantumSpin anomaly score; report uplift under multipath. Figures: AUROC delta; calibration shift. Hooks: ringdown modes → quantum process_signal() features. ringdown_rf_modes
  11. Forensics & Daubert-Ready Checks — Package BIC, cross-validation, and residual-whiteness tests as reliability exhibits for expert testimony; benchmark reproducibility across runs. Figures: residual PSDs; CV agreement rates. Hooks: residuals via fit()/fit_modes(). ringdown_rf_modes
  12. Catalog of RF Ringdown Archetypes — Build an atlas of τ–f–φ patterns for common environments and emitters; deliver a lookup for triage. Figures: UMAP of mode vectors; centroid exemplars. Hooks: normalized modes list (amp-normalized) from fit_modes(). ringdown_rf_modes
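
For the decomposition studies above, the mode model is compact enough to sketch. Assuming the damped-sinusoid form A·exp(−t/τ)·cos(2πft + φ) for _mode_func (an assumed shape, not a quoted signature), a single mode's f and τ fall out of the FFT peak and the log-envelope slope:

```python
import numpy as np

def damped_mode(t, amp, f, tau, phi):
    """Assumed ringdown form: A * exp(-t/tau) * cos(2*pi*f*t + phi)."""
    return amp * np.exp(-t / tau) * np.cos(2 * np.pi * f * t + phi)

fs = 8000.0
t = np.arange(0, 0.5, 1 / fs)
x = damped_mode(t, amp=1.0, f=440.0, tau=0.08, phi=0.3)

# Frequency estimate: peak bin of the Hann-windowed spectrum.
spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
f_hat = np.fft.rfftfreq(len(x), 1 / fs)[np.argmax(spec)]

# Decay estimate: linear fit to the log-envelope at cycle peaks.
peaks = [i for i in range(1, len(x) - 1)
         if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 1e-3]
slope, _ = np.polyfit(t[peaks], np.log(x[peaks]), 1)
tau_hat = -1.0 / slope
```

The real fitter presumably runs joint nonlinear least squares over all modes; this two-step estimate is only a sanity baseline for the single-mode, high-SNR corner of the sweep.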

SDR/FCC modules

  1. Open-Set RF Compliance: EIBI-Guided Violation Detection at Scale — Treat the EIBI list as a soft “allowlist,” quantify precision/recall of _detect_violations() vs. FFT peak heuristics, and show latency under live websockets. Hooks: load_eibi_data(), detect_violations(), sdr_stream_with_detection(). python-fcc-detector
  2. WSL2 SDR: USB, TCP, or Sim? Throughput & Fidelity Trade-offs — Benchmark real USB, rtl_tcp, and synthetic paths for identical spectra; report SNR drift and spectral leakage. Hooks: WSL detect is_wsl(), simulation_mode, tcp_mode, _generate_simulated_samples(). rtl_sdr_wsl_driver
  3. Synthetic RF Scene Generators as Unit Tests for Infra — Formalize the simulator as a regression oracle (FM/GMSK/OFDM/LoRa chirps) and show how controlled modulations catch pipeline regressions before field ops. Hooks: RTLSDRConfig.sim_signals, _generate_simulated_samples(). rtl_sdr_wsl_driver
  4. Async IQ Ingestion Under Backpressure — Evaluate queue growth, callback jitter, and drop behavior in read_samples_async and the driver _async_callback; propose pacing and adaptive batch sizes. Hooks: RTLSDRDriver.start()/_async_callback(). rtl_sdr_driver
  5. Scan Strategy Economics: Dwell vs. Step vs. Yield — Optimize SDRScanConfig for detection probability vs. power budget; derive a closed-form for “cost per positive hit.” Hooks: start_scan(), _scan_thread(), dwell_time, step_size, min_snr_db. rtl_sdr_receiver
  6. Bandwidth Estimation Robustness: −3 dB Heuristics vs. ML — Compare _estimate_bandwidth() (half-power points) to a learned estimator on synthetic and live captures; report bias at low SNR. Hooks: _estimate_bandwidth(). rtl_sdr_receiver
  7. MongoDB as a Spectrum Black-Box Recorder — Replayable incident logs: schema for frequency/power/time and retention policies; ingest speed vs. write concern; failure modes when insert_many stalls. Hooks: setup_mongodb(), Mongo write path in stream loop. python-fcc-detector
  8. WebSocket Telemetry for Field Ops — p50/p95 end-to-end (SDR→FFT→JSON→browser) and loss under 4G; delta-encoding bins to reduce payload without denting violation recall. Hooks: sdr_stream_with_detection() packing freqs, amplitudes, violations. python-fcc-detector
  9. Auto-Recovery and Self-Healing SDR — Empirically validate the driver’s error counters/reinit logic and propose watchdogs for error_count > 5 and successful_reads < 2. Hooks: read_samples(), reinitialize branch. rtl_sdr_driver
  10. Gain & PPM: Calibration Drift in $50 Dongles — Sweep gain/freq_correction to map miss-rate for narrowband services; publish a 10-minute factory calibration recipe. Hooks: set_gain(), freq_correction, get_available_gains(). rtl_sdr_driver
  11. Preset-Driven Triage for Public-Safety Bands — Latency and accuracy when hopping between curated presets (2 m/70 cm/LPD433) vs. wide scans; propose “incident focus mode.” Hooks: frequency_presets, tune_to_preset(). rtl_sdr_receiver
  12. K9 Memory × SDR Receiver: Few-Shot Recall on Live Air — Route RTLSDRReceiver detections into K9 feature memory to de-dup recurring offenders; measure time-to-first-match and drift tolerance. Hooks: _forward_to_processor() into your SignalProcessor/K9 pipeline. rtl_sdr_receiver
  13. Quantum-Enhanced FCC Monitoring — Fuse quantum_coherence/anomaly_score with EIBI matching to demote spoofed “too-clean” carriers; report uplift in PPV at fixed false-alarm rates. Hooks: FCC detector detect_violations() + QuantumSpin process_signal(). python-fcc-detector
  14. TCP-Mode SDR for Remote Towers — Architect rtl_tcp deployments for rooftop radios; quantify loss vs. USB baseline, and show when the fidelity hit is “worth the truck roll you didn’t do.” Hooks: _init_tcp_connection(), _read_samples_from_tcp(). rtl_sdr_wsl_driver
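
Item 6's classical baseline is easy to pin down. A minimal half-power sketch, assuming _estimate_bandwidth() walks outward from the strongest bin until power falls below half the peak (the receiver's actual traversal may differ):

```python
import numpy as np

def estimate_bandwidth_3db(freqs, psd):
    """Half-power (-3 dB) bandwidth around the strongest bin."""
    peak = int(np.argmax(psd))
    half = psd[peak] / 2.0
    lo = peak
    while lo > 0 and psd[lo - 1] >= half:
        lo -= 1
    hi = peak
    while hi < len(psd) - 1 and psd[hi + 1] >= half:
        hi += 1
    return freqs[hi] - freqs[lo]

# Gaussian-shaped peak: true FWHM = 2*sqrt(2*ln 2)*sigma ≈ 47.1 Hz.
freqs = np.linspace(0, 1000, 2001)   # 0.5 Hz bins
sigma = 20.0
psd = np.exp(-((freqs - 500.0) ** 2) / (2 * sigma ** 2))
bw = estimate_bandwidth_3db(freqs, psd)
```

The low-SNR bias the item predicts comes straight from this walk: noise bins straddling the half-power line stop the traversal early or late, which is exactly what a learned estimator should beat.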

🧲 Buyers & $$: municipal spectrum enforcement, airports/seaports, utilities, defense ranges, stadiums. Sell a compliance feed (per-site subscription), incident replay SaaS, and a dongle-in-a-box kit for contractors. Tie into grant-funded “critical infrastructure protection” budgets.

A) Foundations & Representations

  1. Bio-Inspired RF Memory for Weak-Signal Recall — Describe the K9SignalProcessor architecture, feature vectorization, cosine similarity, and persistence/forgetting dynamics; analyze how SignalMemory.persistence and _clean_memory gate long-tail recall. Figures: PR/ROC vs memory size; time-to-match vs feature_dim; decay curves under different persistence. Hooks: extract_features, _calculate_similarity, _clean_memory. k9_signal_processor
  2. Quantum-Spin State Modeling of RF Spectra — Formalize your spin-based state representation and density matrix pipeline; compare qubit (Pauli) vs qudit (generalized Gell-Mann) regimes from _generate_gell_mann_matrices. Figures: purity vs SNR; coherence (ℓ1) vs bandwidth; qubit/qudit confusion plots. Hooks: process_signal, _calculate_density_matrix, _generate_gell_mann_matrices. quantum_spin_processor
  3. Tomography for RF: Bloch-Vector Maps as Diagnostics — Use Stokes parameters and Bloch vectors from _perform_quantum_tomography to derive interpretable health metrics for live RF scenes. Figures: Bloch-sphere scatter by class; purity histograms; Stokes parameter stability vs noise. Hooks: _perform_quantum_tomography. quantum_spin_processor
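
The purity and ℓ1-coherence metrics running through items 2-3 reduce to a few lines of linear algebra. A sketch over explicit density matrices (the processors' internal representation is assumed, not quoted):

```python
import numpy as np

def purity(rho):
    """Tr(rho^2): 1 for a pure state, 1/d for maximally mixed."""
    return float(np.real(np.trace(rho @ rho)))

def l1_coherence(rho):
    """Sum of off-diagonal magnitudes; the l1 measure assumed
    behind _calculate_coherence()."""
    return float(np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho))))

# Pure superposition |+> = (|0> + |1>)/sqrt(2) vs maximally mixed qubit.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(plus, plus.conj())
rho_mixed = np.eye(2) / 2.0
```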

B) Quantum Enhancements & Fusion

  1. Superposition & Coherence as Signal Quality Priors — Quantify how superposition_score and quantum_coherence improve classification/confidence on weak or overlapping emitters. Figures: AUROC/TTFB under interference; calibration curves pre/post quantum prior. Hooks: _detect_superposition, _calculate_coherence. quantum_spin_processor
  2. Cross-Frequency Entanglement Cues for Multi-Emitter Scenes — Turn _analyze_entanglement into a detector for coordinated emitters; study entangled_frequencies and entanglement_strength. Figures: hit-rate vs entanglement_sensitivity; confusion under frequency overlap (Jaccard). Hooks: _analyze_entanglement, _calculate_frequency_correlation. quantum_spin_processor
  3. Interference Cartography via Quantum Formalism — Treat _analyze_interference as a structured oscillation detector; benchmark vs classical second-derivative heuristics. Figures: interference_strength heatmaps; ΔF1 vs classical. Hooks: _analyze_interference. quantum_spin_processor
  4. Quantum-Classical Late Fusion for RF SCYTHE — Evaluate integrate_with_k9_processor end-to-end: signal_complexity, detection_confidence, anomaly_score, and quantum processing gain estimator. Figures: p50/p95 confidence lift; “gain in dB” violin; error vs complexity. Hooks: integrate_with_k9_processor, _estimate_quantum_processing_gain. quantum_spin_processor
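
For item 2, the pairwise statistic behind the entanglement cue can be stood in by plain Pearson correlation between band power envelopes. A sketch with synthetic envelopes (the real _calculate_frequency_correlation may use a different estimator):

```python
import numpy as np

def frequency_correlation(env_a, env_b):
    """Pearson correlation of two band envelopes; a stand-in for
    the pairwise score _analyze_entanglement() is assumed to threshold."""
    a = env_a - env_a.mean()
    b = env_b - env_b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
drive = rng.normal(size=512)                 # shared keying pattern
band1 = drive + 0.1 * rng.normal(size=512)   # coordinated emitters
band2 = drive + 0.1 * rng.normal(size=512)
band3 = rng.normal(size=512)                 # independent emitter
```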

C) Spatial Intelligence & Field Ops

  1. Spatial Entanglement Graphs in the Wild — Use QuantumCelestialK9 to build entanglement links across location grids; test thresholds and temporal stability. Figures: geo-graph of entanglement links; link persistence CDF; false-link ablations. Hooks: _add_quantum_spatial_information, _detect_spatial_entanglement, spatial_entanglement_map. quantum_celestial_k9
  2. Grid Resolution vs Detection Yield — Trade-off study of location_grid_resolution on link density and false matches across motion profiles. Figures: yield vs resolution; compute/time vs resolution; precision-recall by grid size. Hooks: quantum_location_map, config location_grid_resolution. quantum_celestial_k9
  3. Ops Metrics Under Real-Time Constraints — Characterize QuantumCelestialK9’s thread loop latency, processing_time EMA, and throughput vs signal load. Figures: TTFB throughput curves; CPU/GPU utilization; stall/miss histograms. Hooks: start()/_processing_loop, metrics. quantum_celestial_k9
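
The grid-resolution study in item 2 turns on how coordinates are snapped to cells. A sketch assuming location_grid_resolution is expressed in degrees (the config's actual unit is an assumption):

```python
def grid_key(lat, lon, resolution_deg=0.01):
    """Snap a coordinate to its cell, the keying assumed for
    quantum_location_map (~1.1 km of latitude at 0.01 deg)."""
    return (round(lat / resolution_deg) * resolution_deg,
            round(lon / resolution_deg) * resolution_deg)
```

Halving resolution_deg roughly quadruples the occupied-cell count for a uniform scene, which is the compute-vs-precision curve the figures should trace.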

D) Robustness, Anomalies & Security

  1. Anomaly Scoring with Coherence-Purity Mismatch — Validate _calculate_quantum_anomaly_score for spotting spoofed/“too-clean” emitters; quantify red-team evasions. Figures: attack success rate vs anomaly threshold; SHAP of anomaly features. Hooks: _calculate_quantum_anomaly_score. quantum_spin_processor
  2. Adversarial Interference vs Quantum Interference — Stress _analyze_interference against adversarial periodic jamming; measure separability under phase randomization. Figures: adversarial gap; PSD overlays with phase controls. Hooks: _analyze_interference. quantum_spin_processor
  3. Memory Poisoning & Confuser Signals — Study how poisoned SignalMemory affects recall; mitigation via entropy/flatness guards in extract_features. Figures: precision under poisoning rate; memory-pruning strategies. Hooks: SignalMemory, extract_features. k9_signal_processor
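
Item 1's mismatch idea has a clean qubit-level rationale: the ℓ1 coherence of a qubit with purity P is bounded by √(2P − 1), so a report claiming more coherence than its purity supports is physically inconsistent. A hypothetical stand-in for _calculate_quantum_anomaly_score() built on that bound:

```python
import math

def anomaly_score(purity, coherence):
    """Excess of reported coherence over the qubit bound
    sqrt(2*purity - 1); hypothetical scoring, not the module's."""
    bound = math.sqrt(max(2.0 * purity - 1.0, 0.0))
    return max(0.0, coherence - bound)
```

A spoofed "too-clean" emitter reporting coherence 1.0 at purity 0.6 scores about 0.55, while any physically consistent pair scores 0.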

E) Feature Engineering & Retrieval

  1. From FFT Stats to Field Wins — Systematic ablation over extract_features: spectral entropy, skew/kurtosis, centroid/spread, and down-sampled magnitudes. Figures: per-feature Shapley bars; accuracy/latency vs feature_dim. Hooks: extract_features. k9_signal_processor
  2. Similarity Kernels for K9 Memory — Replace cosine with alternatives; show effects on near-duplicate recall and tail generalization. Figures: PR curves by kernel; latency vs kernel. Hooks: _calculate_similarity. k9_signal_processor
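
Two ingredients of these ablations, spectral entropy and the cosine kernel, fit in a few lines. A sketch with assumed normalizations (extract_features' exact scaling is not quoted):

```python
import numpy as np

def spectral_entropy(mag):
    """Normalized spectral entropy in [0, 1]; one of the FFT
    statistics assumed inside extract_features()."""
    p = mag / mag.sum()
    return float(-(p * np.log(p + 1e-12)).sum() / np.log(len(p)))

def cosine_similarity(u, v):
    """The matching kernel assumed in _calculate_similarity()."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

flat = np.ones(256)                   # noise-like spectrum -> entropy ~1
tone = np.zeros(256); tone[40] = 1.0  # single carrier -> entropy ~0
```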

F) Systems Integration & Engineering Notes

  1. Interface Drift in Quantum-Classical Pipelines — Case study of constructor/API mismatch (e.g., passing use_gpu/sensitivity into K9SignalProcessor from QuantumCelestialK9); propose interface contracts and CI checks. Figures: failure modes; contract tests; compile/run matrix. Hooks: QuantumCelestialK9.__init__, K9SignalProcessor.__init__. quantum_celestial_k9 k9_signal_processor
  2. Threshold Economics: Entanglement & Coherence — Grid search on entanglement_threshold and coherence_threshold vs miss/false-link cost; present operating characteristic surfaces. Figures: iso-cost surfaces; Pareto fronts. Hooks: config entanglement_threshold (QC-K9), coherence_threshold (Q-spin). quantum_celestial_k9 quantum_spin_processor
  3. End-to-End Demo: Quantum-Enhanced Celestial Tracking — Holistic benchmark combining K9 features, quantum fusion, and spatial links; report signals_processed, quantum_enhanced_detections, entangled_signal_pairs. Figures: demo timeline; KPI dashboard; map overlay. Hooks: get_metrics(), get_quantum_spatial_map(). quantum_celestial_k9
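
Item 2's Pareto fronts need only a non-domination filter over measured operating points. A sketch on hypothetical (hit_rate, latency_ms) pairs:

```python
def pareto_front(points):
    """Operating points not dominated on (hit_rate up, latency down)."""
    front = []
    for hr, ms in points:
        dominated = any(h2 >= hr and m2 <= ms and (h2, m2) != (hr, ms)
                        for h2, m2 in points)
        if not dominated:
            front.append((hr, ms))
    return sorted(front)

configs = [(0.70, 5.0), (0.80, 9.0), (0.78, 12.0),
           (0.92, 30.0), (0.90, 35.0)]
```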

Casualty Urgency on Head-Mounted Displays — Validate severity→urgency mapping and color encoding; calibrate IMMEDIATE/URGENT cutoffs. Figs: calibration curves; confusion of triage bands; colorblind-safe palette check. Hooks: CasualtyReport._calculate_urgency, _get_severity_color, to_glass_casualty_json(). core

Haversine Clustering at the Edge — Proximity clustering quality vs radius, density, and drift; SLA impact. Figs: F1 vs cluster_radius_meters; time/cluster; geo heatmaps. Hooks: CasualtyTracker._calculate_distance, _update_clusters, get_active_clusters(). core
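
The clustering metric itself is standard. A sketch of the great-circle computation assumed behind CasualtyTracker._calculate_distance:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (mean Earth radius)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))
```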

Clearance-Aware Redaction for Wearables — RBAC redaction accuracy vs mission loss. Figs: risk–utility curves; redaction rates by clearance. Hooks: SecurityFilter.filter_for_clearance(). core

From Voxels to Bearings — Lightweight direction finding for Glass; error vs voxel SNR/size. Figs: angular error CDF; runtime vs cube size. Hooks: GlassOptimizer.optimize_rf_viz() (max-voxel→az/elev), _classification_to_color(), _calculate_priority(). core
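
The max-voxel→az/elev step can be tested against synthetic cubes. A sketch assuming an east/north/up cube centered on the wearer (the optimizer's actual axis convention is an assumption):

```python
import math

def voxel_to_bearing(ix, iy, iz, cube_size, voxel_m):
    """Convert a voxel index in a wearer-centered cube to
    azimuth/elevation degrees (x east, y north, z up; 0 deg = north)."""
    c = (cube_size - 1) / 2.0
    x, y, z = (ix - c) * voxel_m, (iy - c) * voxel_m, (iz - c) * voxel_m
    az = math.degrees(math.atan2(x, y)) % 360.0
    el = math.degrees(math.atan2(z, math.hypot(x, y)))
    return az, el
```

Angular error from quantization alone is bounded by roughly atan(voxel_m / range), which is the voxel-size axis of the proposed error-CDF figure.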

Asset Telemetry → AR Cues — Geo→az/elev conversion accuracy; distance scaling. Figs: bearing error; elevation vs range; status→color confusion. Hooks: optimize_asset_viz() path. core
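
The geo→azimuth half of this conversion is the initial great-circle bearing. A sketch (assuming the optimize_asset_viz() path works from observer and asset lat/lon):

```python
import math

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from observer to asset, degrees clockwise from north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = (math.cos(p1) * math.sin(p2)
         - math.sin(p1) * math.cos(p2) * math.cos(dl))
    return math.degrees(math.atan2(x, y)) % 360.0
```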

Payload Budgeting Under Cognitive Load — Limit overlays without missing the important ones. Figs: ROC of “keep vs drop”; p95 pipeline time vs max_signals/distance_threshold. Hooks: GlassOptimizer.filter_payloads(). core

Indoor Wayfinding Heuristics for Pentagon-like Structures — Map ring/corridor inference from azimuth. Figs: ring/corridor accuracy; failure modes near boundaries. Hooks: PentagonLocationService.direction_to_coordinates(), get_location(). core

Audio/Haptics Alert Policy Tuning — Who gets buzzed and when? Figs: alert precision/recall vs severity; user load survey proxy. Hooks: GlassServer._should_play_audio_alert(), _should_use_haptic_feedback(), send_casualty_data(). core

Streaming Loop SLA for Mixed Signal+Casualty Feeds — End-to-end loop stability. Figs: p50/p95 latency per cycle; backlog dynamics. Hooks: GlassVisualizationSystem.start(), _send_payloads_with_casualties(). core

Cross-System Broadcast Semantics for Emergencies — Fan-out and idempotency across topics. Figs: time-to-notify vs subscribers; duplicate suppression. Hooks: _broadcast_casualty_alert() and topic use (casualty_alert, medical_emergency). core

Hotspot Threat Scoring — Does the heuristic match ops reality? Figs: threat score vs outcomes; ablation of recency/casualty-count terms. Hooks: _identify_geographic_hotspots(), _calculate_hotspot_threat_level(). core

Medical Recommendations at the Edge — Quality of triage suggestions from streaming state. Figs: actionability audits; precision@k for location lists. Hooks: _generate_medical_recommendations(). core

Export Provenance for After-Action Reviews — JSON artifact integrity & completeness. Figs: schema coverage; replay timing. Hooks: export_casualty_data() with NumpyJSONEncoder. core

Data Minimization vs Situational Awareness — Measure value of each casualty field. Figs: SHAP/ablation of vitals, metadata, confidence. Hooks: CasualtyReport.to_glass_casualty_json() field usage. core

Interface-Drift Detection in Mission Software — Guardrails for evolving APIs. Figs: breakage matrix (called vs defined); CI contract tests. Hooks: mismatches such as GlassServer.start() logging an undefined secure_mode, and CasualtyTracker calls to cleanup_expired_casualties() / get_casualty_clusters_for_glass() and cluster .add_casualty() that are not defined — document and test each. core

Classifier→Color Taxonomy for RF on Glass — Do humans parse these colors? Figs: user-study proxy; misread rate per class. Hooks: _classification_to_color() mapping (“wifi/bluetooth/cellular/gps/satellite/unknown”). core

Priority Fabric: From Signal Type + Strength to 1–5 — Calibrate priority scoring. Figs: reliability diagrams vs “should page me?” labels; ops loss vs threshold. Hooks: _calculate_priority() (type/strength). core
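
A toy version of the mapping makes the calibration target concrete. The class ranks and the -40 dBm bump below are invented for illustration, not read from _calculate_priority():

```python
def calculate_priority(sig_type, strength_dbm):
    """Hypothetical 1-5 priority: base rank per signal class,
    bumped one level for strong (likely nearby) carriers."""
    base = {"unknown": 4, "cellular": 3, "gps": 3,
            "satellite": 2, "wifi": 2, "bluetooth": 1}.get(sig_type, 3)
    if strength_dbm > -40:
        base = min(5, base + 1)
    return base
```

Calibration then means fitting these ranks and the bump threshold to “should page me?” labels, per the reliability diagrams above.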

Edge Triage Near Duplicates — De-dup & decay of payloads in the rolling minute buffer. Figs: duplicate suppression gain; recall vs dedup aggressiveness. Hooks: GlassVisualizationSystem.current_payloads pruning logic. core

Device-Specific Redaction + Geo Filtering — Different clearances/locations, different overlays. Figs: per-device overlay size/time; fairness metrics. Hooks: _send_payloads_with_casualties() combining SecurityFilter & CasualtyTracker.get_casualties_for_glass_device(). core

Demo Generators as Unit-Test Oracles — Use create_casualty_demo() to guarantee non-regression. Figs: deterministic checksums; drift alerts over versions. Hooks: GlassVisualizationSystem.create_casualty_demo(). core