{"id":4416,"date":"2025-10-30T15:10:28","date_gmt":"2025-10-30T15:10:28","guid":{"rendered":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?page_id=4416"},"modified":"2025-10-30T15:10:28","modified_gmt":"2025-10-30T15:10:28","slug":"end-to-end-rf-inferred-inner-speech-decoding-fft-triage-to-bayesian-command-reconstruction-2","status":"publish","type":"page","link":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/?page_id=4416","title":{"rendered":"End-to-End RF-Inferred Inner Speech Decoding: FFT Triage to Bayesian Command Reconstruction"},"content":{"rendered":"\n<figure class=\"wp-block-image\"><a href=\"https:\/\/ctcn.utexas.edu\/research.html\"><img decoding=\"async\" src=\"https:\/\/ctcn.utexas.edu\/images\/logo.svg\" alt=\"\"\/><\/a><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/chatgpt.com\/\"><img loading=\"lazy\" decoding=\"async\" width=\"35\" height=\"35\" src=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-64.png\" alt=\"\" class=\"wp-image-4418\"\/><\/a><\/figure>\n\n\n\n<div data-wp-interactive=\"core\/file\" class=\"wp-block-file\"><object data-wp-bind--hidden=\"!state.hasPdfPreview\" hidden class=\"wp-block-file__embed\" data=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/End-to-End-RF-Inferred-Inner-Speech-Decoding-FFT-Triage-to-Bayesian-Command-Reconstruction-bgilbert1984-Xinhao-Liandao-Rev-3.pdf\" type=\"application\/pdf\" style=\"width:100%;height:600px\" aria-label=\"Embed of End-to-End RF-Inferred Inner Speech Decoding FFT Triage to Bayesian Command Reconstruction bgilbert1984 Xinhao Liandao Rev 3.\"><\/object><a id=\"wp-block-file--media-e743a0de-6efd-487f-8ab6-334ed4128d5d\" href=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/End-to-End-RF-Inferred-Inner-Speech-Decoding-FFT-Triage-to-Bayesian-Command-Reconstruction-bgilbert1984-Xinhao-Liandao-Rev-3.pdf\">End-to-End 
RF-Inferred Inner Speech Decoding FFT Triage to Bayesian Command Reconstruction bgilbert1984 Xinhao Liandao Rev 3<\/a><a href=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/End-to-End-RF-Inferred-Inner-Speech-Decoding-FFT-Triage-to-Bayesian-Command-Reconstruction-bgilbert1984-Xinhao-Liandao-Rev-3.pdf\" class=\"wp-block-file__button wp-element-button\" download aria-describedby=\"wp-block-file--media-e743a0de-6efd-487f-8ab6-334ed4128d5d\">Download<\/a><\/div>\n\n\n\n<p class=\"wp-block-paragraph\">We present a real-time RF-to-speech pipeline that decodes inner speech from RF-inferred neural surrogates using a word-state HMM with GPT-style priors. Starting from 1.5 ms FFT triage (0.754 AUROC), we map spectral confidence to link quality q \u2208 [0, 1], which predicts command success (73.9% \u2192 100%) and p95 latency (2.6 s \u2192 375 ms). A Bayesian decoder with language priors reduces WER from 2.8% to 1.1% at 10 dB SNR (60.7% relative reduction), with posterior concentration on correct word spans. The system integrates with tactical control systems for hands-free command execution. 
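As a minimal, self-contained sketch of two of the stages named in this abstract, the fragment below illustrates the general shape of (a) mapping FFT-triage confidence to a link quality q \u2208 [0, 1] and (b) word-state Viterbi decoding under language priors. This is not the paper's implementation: the four-word vocabulary, the linear clamp in the confidence-to-q mapping, and the flat priors are all invented here for illustration.

```python
import numpy as np

# Toy sketch only. The vocabulary, the clamp bounds, and the flat priors
# below are illustrative assumptions, not values from the paper.
VOCAB = ["halt", "advance", "report", "hold"]

def link_quality(spectral_conf, lo=0.5, hi=0.95):
    """Map FFT-triage confidence to link quality q in [0, 1].
    A linear clamp is assumed here; the paper's mapping may differ."""
    return float(np.clip((spectral_conf - lo) / (hi - lo), 0.0, 1.0))

def viterbi(log_emissions, log_prior, log_trans):
    """MAP word sequence for a word-state HMM.
    log_emissions: (T, V) per-frame word log-likelihoods from the RF front end;
    log_prior / log_trans: language-model priors over words and word pairs."""
    T, V = log_emissions.shape
    delta = log_prior + log_emissions[0]       # best score ending in each word
    back = np.zeros((T, V), dtype=int)         # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans    # rows: previous word, cols: next word
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emissions[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [VOCAB[i] for i in reversed(path)]

# Usage with a fabricated 3-frame observation whose frames favor
# "halt", "report", "report" in turn.
V = len(VOCAB)
log_prior = np.log(np.full(V, 1.0 / V))        # flat unigram prior (assumption)
log_trans = np.log(np.full((V, V), 1.0 / V))   # flat bigram prior (assumption)
emis = np.full((3, V), -5.0)
emis[0, 0] = emis[1, 2] = emis[2, 2] = 0.0
decoded = viterbi(emis, log_prior, log_trans)  # -> ['halt', 'report', 'report']
```

Under the flat priors used above, Viterbi reduces to per-frame argmax; the WER gains reported here come from non-flat language priors concentrating posterior mass on plausible word sequences.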
Full end-to-end reproducibility: make all generates IQ \u2192 WER.<\/p>\n\n\n\n<div data-wp-interactive=\"core\/file\" class=\"wp-block-file\"><object data-wp-bind--hidden=\"!state.hasPdfPreview\" hidden class=\"wp-block-file__embed\" data=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/End-to-End-RF-Inferred-Inner-Speech-Decoding-FFT-Triage-to-Bayesian-Command-Reconstruction-bgilbert1984-Xinhao-Liandao-Rev-2.pdf\" type=\"application\/pdf\" style=\"width:100%;height:600px\" aria-label=\"Embed of End-to-End RF-Inferred Inner Speech Decoding FFT Triage to Bayesian Command Reconstruction bgilbert1984 Xinhao Liandao Rev 2.\"><\/object><a id=\"wp-block-file--media-cc575448-e688-44be-b4b3-71b5ad52bbbc\" href=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/End-to-End-RF-Inferred-Inner-Speech-Decoding-FFT-Triage-to-Bayesian-Command-Reconstruction-bgilbert1984-Xinhao-Liandao-Rev-2.pdf\">End-to-End RF-Inferred Inner Speech Decoding FFT Triage to Bayesian Command Reconstruction bgilbert1984 Xinhao Liandao Rev 2<\/a><a href=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/End-to-End-RF-Inferred-Inner-Speech-Decoding-FFT-Triage-to-Bayesian-Command-Reconstruction-bgilbert1984-Xinhao-Liandao-Rev-2.pdf\" class=\"wp-block-file__button wp-element-button\" download aria-describedby=\"wp-block-file--media-cc575448-e688-44be-b4b3-71b5ad52bbbc\">Download<\/a><\/div>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"166\" src=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-65-1024x166.png\" alt=\"\" class=\"wp-image-4419\" style=\"width:1170px;height:auto\" srcset=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-65-1024x166.png 1024w, 
https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-65-300x49.png 300w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-65-768x125.png 768w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-65-1536x249.png 1536w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-65.png 1860w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full is-resized\"><a href=\"https:\/\/scholar.google.com\/citations?user=q13ZxTMAAAAJ\"><img loading=\"lazy\" decoding=\"async\" width=\"780\" height=\"208\" src=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-68.png\" alt=\"\" class=\"wp-image-4422\" style=\"width:1170px;height:auto\" srcset=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-68.png 780w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-68-300x80.png 300w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-68-768x205.png 768w\" sizes=\"auto, (max-width: 780px) 100vw, 780px\" \/><\/a><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full is-resized\"><a href=\"https:\/\/scholar.google.com\/citations?user=HvSCNaEAAAAJ\"><img loading=\"lazy\" decoding=\"async\" width=\"696\" height=\"206\" src=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-69.png\" alt=\"\" class=\"wp-image-4423\" style=\"width:1170px;height:auto\" srcset=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-69.png 696w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-69-300x89.png 300w\" sizes=\"auto, (max-width: 696px) 100vw, 696px\" \/><\/a><\/figure>\n\n\n\n<figure class=\"wp-block-image 
size-large is-resized\"><a href=\"https:\/\/labs.uthscsa.edu\/rsh\/meet-the-team\/\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"184\" src=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-66-1024x184.png\" alt=\"\" class=\"wp-image-4420\" style=\"width:1170px;height:auto\" srcset=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-66-1024x184.png 1024w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-66-300x54.png 300w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-66-768x138.png 768w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-66-1536x277.png 1536w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-66.png 1832w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"276\" src=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-67-1024x276.png\" alt=\"\" class=\"wp-image-4421\" style=\"width:1170px;height:auto\" srcset=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-67-1024x276.png 1024w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-67-300x81.png 300w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-67-768x207.png 768w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-67-1536x414.png 1536w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-67.png 1825w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img 
loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"184\" src=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-70-1024x184.png\" alt=\"\" class=\"wp-image-4424\" style=\"width:1170px;height:auto\" srcset=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-70-1024x184.png 1024w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-70-300x54.png 300w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-70-768x138.png 768w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-70-1536x276.png 1536w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-70.png 1822w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">Liberty Hamilton &#8211; Determining the functional organization of the speech cortex<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">To process speech, the brain must transform low-level acoustic inputs into higher-order linguistic categories, such as phonemes, words, and narrative meaning. This requires encoding acoustic features that unfold over both brief and long timescales. Hamilton's lab applied unsupervised methods to neural recordings of people listening to naturally spoken sentences, and uncovered an organization of the auditory cortex and surrounding areas into two spatially and functionally distinct modules. They are now applying similar methods to examine changes in functional organization during brain development in children with epilepsy. 
They also apply computational models to analyze which particular sound features are represented in the brain, and how areas functionally interact during natural speech perception and production.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><a href=\"https:\/\/mastodon.social\/@Bgilbert1984\/115463745799245572\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"186\" src=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-71-1024x186.png\" alt=\"\" class=\"wp-image-4425\" style=\"width:1170px;height:auto\" srcset=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-71-1024x186.png 1024w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-71-300x55.png 300w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-71-768x140.png 768w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-71-1536x279.png 1536w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-71.png 1822w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Some Inspiration from:<\/strong><\/h3>\n\n\n\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/29468723\/\"><img loading=\"lazy\" decoding=\"async\" width=\"767\" height=\"544\" src=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-72.png\" alt=\"\" class=\"wp-image-4431\" srcset=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-72.png 767w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-72-300x213.png 300w\" sizes=\"auto, (max-width: 767px) 100vw, 767px\" \/><\/a><\/figure>\n\n\n\n<p 
class=\"wp-block-paragraph\"><a href=\"https:\/\/www.cincinnatichildrens.org\/research\/divisions\/b\/biostatistics\">https:\/\/www.cincinnatichildrens.org\/research\/divisions\/b\/biostatistics<\/a><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>bgilbert@neurosphere:~\/paper_Bayesian_Decoding_of_Inner_Speech_from_RF-Inferred_Activity$ cd \/home\/bgilbert\/paper_Bayesian_Decoding_of_Inner_Speech_from_RF-Inferred_Activity &amp;&amp; grep -n \"atl\\|ATL\" code\/core.py\n95:        # ATL\/TWPA design-aware processing\n96:        self.atl_design = None\n105:        # Load ATL\/TWPA design configuration\n106:        self._load_atl_design()\n142:    def _load_atl_design(self):\n143:        \"\"\"Load optional ATL\/TWPA design facts from Arxiv 2510.24753v1.\"\"\"\n145:            path = \"config\/atl_design.json\"\n150:                self.atl_design = {\n160:                logger.info(\"ATL\/TWPA design loaded from config\/atl_design.json\")\n162:            logger.warning(f\"Failed to load ATL design: {e}\")\n164:    def _label_atl_band(self, f_hz: float, tol_hz: float = 0.01e9):\n165:        \"\"\"Return band label based on design facts from ATL synthesis.\"\"\"\n166:        if not self.atl_design:\n168:        d = self.atl_design\n190:        if not self.atl_design or not self.atl_design.get(\"pump_hz\"):\n192:        fp = self.atl_design&#91;\"pump_hz\"]\n202:        if self.atl_design.get(\"mixing_mode\", \"4WM\").upper() == \"4WM\":\n223:    def annotate_signal_with_atl(self, signal: \"RFSignal\"):\n224:        \"\"\"Attach ATL\/TWPA labels to signal.metadata (no-op if no design).\"\"\"\n231:            band_label, band_info = self._label_atl_band(signal.frequency)\n234:            signal.metadata.setdefault(\"atl\", {})\n235:            signal.metadata&#91;\"atl\"].update({\n241:            # Log important ATL events\n243:                logger.info(f\"ATL event detected - Signal {signal.id}: {band_label}, near_3fp: {mix_info.get('near_3fp', False)}\")\n246:      
      logger.debug(f\"ATL annotate failed: {e}\")\n248:    def process_atl_alerts(self, signal: \"RFSignal\"):\n249:        \"\"\"Process ATL-related alerts and update classifications as needed.\"\"\"\n250:        if not self.atl_design or \"atl\" not in signal.metadata:\n253:        atl_data = signal.metadata&#91;\"atl\"]\n255:        # Check for important ATL events that warrant classification updates\n258:        if atl_data.get(\"near_3fp\"):\n261:        if atl_data.get(\"band_label\") == \"stopband\":\n264:        if atl_data.get(\"near_rpm_notch\"):\n267:        if atl_data.get(\"idlers\"):\n268:            alert_conditions.append(f\"parametric_mixing_detected({len(atl_data&#91;'idlers'])})\")\n272:            new_classification = f\"ATL_Event: {', '.join(alert_conditions)}\"\n277:                update_info={\"atl\": atl_data, \"alert_conditions\": alert_conditions}\n396:                # Re-annotate with ATL if this is an ATL-related update\n397:                if update_info and \"atl\" in update_info:\n398:                    self.annotate_signal_with_atl(signal)\n470:            # Add ATL band labeling if design is present\n471:            if self.atl_design:\n473:                label, _ = self._label_atl_band(center, tol_hz=0.02e9)\n474:                band_dict&#91;\"atl_band_label\"] = label<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"<p>We present a real-time RF-to-speech pipeline that decodes inner speech from RF-inferred neural surrogates using a word-state HMM with GPT-style priors. Starting from 1.5 ms FFT triage (0.754 AUROC), we map spectral confidence to link quality q \u2208 [0, 1], which predicts command success (73.9% \u2192 100%) and p95 latency (2.6 s \u2192 375 ms). 
A Bayesian decoder with language priors&hellip;&nbsp;<\/p>\n","protected":false},"author":2,"featured_media":67,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"neve_meta_sidebar":"","neve_meta_container":"","neve_meta_enable_content_width":"","neve_meta_content_width":0,"neve_meta_title_alignment":"","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"","neve_meta_disable_footer":"","neve_meta_disable_title":"","footnotes":""},"class_list":["post-4416","page","type-page","status-publish","has-post-thumbnail","hentry"],"_links":{"self":[{"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/pages\/4416","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4416"}],"version-history":[{"count":0,"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/pages\/4416\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/media\/67"}],"wp:attachment":[{"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4416"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}