{"id":4407,"date":"2025-10-30T13:53:23","date_gmt":"2025-10-30T13:53:23","guid":{"rendered":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?page_id=4407"},"modified":"2025-10-30T13:53:23","modified_gmt":"2025-10-30T13:53:23","slug":"viterbi-decoding-algorithm-complete-technical-explanation","status":"publish","type":"page","link":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/?page_id=4407","title":{"rendered":"Viterbi Decoding Algorithm: Complete Technical Explanation"},"content":{"rendered":"\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>1. What Is Viterbi Decoding?<\/strong><\/h2>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Viterbi<\/strong> is a <strong>dynamic programming algorithm<\/strong> that finds the <strong>most likely sequence of hidden states<\/strong> in a <strong>Hidden Markov Model (HMM)<\/strong> given an observation sequence.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p class=\"wp-block-paragraph\"><strong>In your RF-to-speech pipeline<\/strong>:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Hidden states<\/strong> = words (<code>sierra<\/code>, <code>charlie<\/code>, <code>bravo<\/code>)<\/li>\n\n\n\n<li><strong>Observations<\/strong> = RF-inferred neural features $ x_t $<\/li>\n\n\n\n<li><strong>Goal<\/strong> = reconstruct <strong>inner speech<\/strong> from noisy RF surrogates<\/li>\n<\/ul>\n<\/blockquote>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>2. Why Not Brute Force?<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>States<\/strong>: $ N = 50 $ words (NATO lexicon)<\/li>\n\n\n\n<li><strong>Sequence length<\/strong>: $ T = 30 $ frames<\/li>\n\n\n\n<li><strong>Total sequences<\/strong>: $ N^T = 50^{30} \\approx 10^{51} $<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Viterbi<\/strong>: $ O(T N^2) $ \u2192 <strong>polynomial<\/strong>, <strong>feasible<\/strong><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>3. HMM Components (Your Paper)<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Component<\/th><th>Formula<\/th><th>Meaning<\/th><\/tr><\/thead><tbody><tr><td><strong>Emission<\/strong><\/td><td>$ p(x_t | w_t) = \\mathcal{N}(x_t; \\mu_{w_t}, \\Sigma) $<\/td><td>How likely is $ x_t $ given word $ w_t $?<\/td><\/tr><tr><td><strong>Transition<\/strong><\/td><td>$ p(w_t | w_{t-1}) = \\pi_{w_{t-1}, w_t} $<\/td><td>Language prior (bigram or GPT-style)<\/td><\/tr><tr><td><strong>Joint<\/strong><\/td><td>$ P(w_{1:T}, x_{1:T}) = p(w_1) \\prod_{t=2}^T p(w_t | w_{t-1}) \\cdot p(x_t | w_t) $<\/td><td>Full probability<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>4. 
## 4. Viterbi Step-by-Step (With Example)

**Input**:

- $ x_{1:30} $ = RF features
- True utterance: `sierra charlie bravo`

### Step 1: Initialization ($ t = 1 $)

For each word $ w $:
$$
V_1(w) = p(w) \cdot p(x_1 | w)
$$
$$
\psi_1(w) = 0
$$

> $ V_t(w) $ = **max probability** of being in $ w $ at time $ t $
> $ \psi_t(w) $ = **best previous word**

---

### Step 2: Recursion ($ t = 2 $ to $ T $)

For each word $ w $ at time $ t $ (the emission term does not depend on $ w' $, so it factors out of the max):
$$
V_t(w) = \max_{w'} \left[ V_{t-1}(w') \cdot \pi_{w',w} \right] \cdot p(x_t | w)
$$
$$
\psi_t(w) = \arg\max_{w'} \left[ V_{t-1}(w') \cdot \pi_{w',w} \right]
$$

**Key Insight**:

> At each step, **keep only the best path** into each state → the $ N^T $ candidate paths collapse into a $ T \times N $ table.

---

### Step 3: Termination

Best path probability:
$$
P^* = \max_w V_T(w)
$$
Final state:
$$
q_T^* = \arg\max_w V_T(w)
$$

---

### Step 4: Backtracking

$$
q_t^* = \psi_{t+1}(q_{t+1}^*), \quad t = T-1, \dots, 1
$$

→ Output: `sierra → charlie → bravo`

A toy numeric trace of Steps 1 and 2 follows Section 5 below.

---

## 5. Visual Example (Your Fig. 1)

```mermaid
graph TD
    subgraph "Frame 1–5"
        A[sierra] --> B[charlie]
        A --> C[delta]
    end
    subgraph "Frame 6–10"
        B --> D[charlie]
        C --> E[papa]
    end
    subgraph "Frame 11–15"
        D --> F[bravo]
        E --> G[quebec]
    end

    style A fill:#90EE90
    style B fill:#90EE90
    style D fill:#90EE90
    style F fill:#90EE90
    style C fill:#FFB6C1
    style E fill:#FFB6C1
    style G fill:#FFB6C1
```

- **Green path** = Viterbi path (highest $ V_t $)
- **Red paths** = pruned (lower probability)

---
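The toy trace promised above makes the pruning visible. All probabilities here are invented for illustration (two words, one recursion step); they are not values from the paper.

```python
import numpy as np

# Hypothetical two-word lexicon: index 0 = sierra, index 1 = charlie.
prior = np.array([0.6, 0.4])        # p(w)
trans = np.array([[0.2, 0.8],       # pi[w', w]: rows = previous word
                  [0.5, 0.5]])
emit1 = np.array([0.9, 0.1])        # p(x_1 | w), made up
emit2 = np.array([0.3, 0.7])        # p(x_2 | w), made up

V1 = prior * emit1                  # Step 1: V_1(w) = p(w) p(x_1|w) -> [0.54, 0.04]
scores = V1[:, None] * trans        # V_1(w') * pi[w', w], shape (w', w)
V2 = scores.max(axis=0) * emit2     # Step 2: best predecessor, then emission
psi2 = scores.argmax(axis=0)        # backpointers

print(V1)    # [0.54 0.04]
print(V2)    # [0.0324 0.3024]  -> charlie wins at t = 2
print(psi2)  # [0 0]            -> both hypotheses are best reached from sierra
```

The backpointers record that both time-2 hypotheses are best reached from `sierra`; the losing continuations are never extended again, which is exactly the pruning in Fig. 1.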
## 6. Why Language Priors Matter (Your Fig. 2)

| SNR | No Prior WER | Bigram WER | GPT-style WER |
|---|---|---|---|
| 10 dB | 2.8% | 1.6% | **1.1%** |

A **GPT-style prior** makes $ p(\text{charlie} \mid \text{sierra}) $ high → **boosts $ V_t(\text{charlie}) $**
With **no prior**, all transitions are equally likely → **framewise confusion**

---

## 7. Viterbi in Your Code

```python
import numpy as np
from scipy.stats import multivariate_normal

def viterbi(obs, mu, Sigma, trans_matrix, prior=None):
    """Most likely word path for a Gaussian-emission HMM (probability domain)."""
    T, _ = obs.shape
    N = len(mu)
    if prior is None:
        prior = np.full(N, 1.0 / N)  # uniform p(w) when no unigram prior is given
    V = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)

    # Init: V_1(w) = p(w) * p(x_1 | w)
    emit = np.array([multivariate_normal.pdf(obs[0], mu[i], Sigma) for i in range(N)])
    V[0] = prior * emit
    psi[0] = 0

    # Recursion: V_t(w) = max_{w'} [V_{t-1}(w') * pi(w', w)] * p(x_t | w)
    for t in range(1, T):
        for j in range(N):
            probs = V[t-1] * trans_matrix[:, j]
            emit_prob = multivariate_normal.pdf(obs[t], mu[j], Sigma)
            V[t, j] = np.max(probs) * emit_prob
            psi[t, j] = np.argmax(probs)

    # Backtrack: q_{t-1}* = psi_t(q_t*)
    path = [np.argmax(V[-1])]
    for t in range(T-1, 0, -1):
        path.append(psi[t, path[-1]])
    return path[::-1]
```

Note: raw probability products shrink exponentially with $ T $ and can underflow float64 even at $ T = 30 $; a log-space variant is sketched after Section 8.

---

## 8. Connection to FFT Triage

```mermaid
graph LR
    A[IQ] --> B[FFT Triage]
    B --> C[Confidence c]
    C --> D[q_hat]
    D --> E[SNR_est]
    E --> F[Set σ² in HMM]
    F --> G[Viterbi Decoder]
    G --> H[Word Sequence]
```

- **Low $ \hat{q} $** → high $ \sigma^2 $ → **rely more on the transition prior**
- **High $ \hat{q} $** → low $ \sigma^2 $ → **trust the emission likelihood**

---
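As noted under the §7 listing, multiplying raw probabilities underflows for long utterances. A log-space rewrite, sketched here with the same interface, replaces products with sums; the vectorized inner loop is a restructuring of my own, not the paper's code, but it computes the same path.

```python
import numpy as np
from scipy.stats import multivariate_normal

def viterbi_log(obs, mu, Sigma, trans_matrix, prior=None):
    """Log-domain Viterbi: numerically stable for long T."""
    T, _ = obs.shape
    N = len(mu)
    if prior is None:
        prior = np.full(N, 1.0 / N)
    log_trans = np.log(trans_matrix)

    # Precompute emission log-likelihoods: logE[t, j] = log p(x_t | w_j)
    logE = np.column_stack(
        [multivariate_normal.logpdf(obs, mu[j], Sigma) for j in range(N)]
    )

    V = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    V[0] = np.log(prior) + logE[0]

    for t in range(1, T):
        scores = V[t-1][:, None] + log_trans   # scores[w', w]: sum replaces product
        psi[t] = scores.argmax(axis=0)
        V[t] = scores.max(axis=0) + logE[t]

    path = [int(V[-1].argmax())]
    for t in range(T-1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
```

Because $ \log $ is monotone, the argmax at every step is unchanged; only the arithmetic is stabilized.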
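The §8 coupling enters the decoder only through $ \Sigma $: inflating the emission covariance flattens $ p(x_t \mid w) $, so the transition prior dominates. A minimal sketch of one possible mapping follows; the scaling rule, constants, and function name are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def sigma_from_quality(Sigma_base, q_hat, q_ref=0.9, floor=1e-2):
    """Inflate the emission covariance when triage quality q_hat is low.

    Illustrative rule (assumption): variance scales like q_ref / q_hat,
    so a half-quality frame roughly doubles the emission variance and
    shifts decoding weight onto the language prior.
    """
    scale = q_ref / max(q_hat, floor)   # > 1 when quality is below reference
    return Sigma_base * scale

# Hypothetical usage with the decoders above:
# Sigma_t = sigma_from_quality(Sigma_base, q_hat=0.45)  # ~2x variance
```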
## 9. Key Takeaways

| Concept | Your Paper |
|---|---|
| **Dynamic Programming** | Avoids $ N^T $ search |
| **Transition Prior** | GPT-style → **60% WER reduction** |
| **Emission Model** | Gaussian on RF features |
| **Output** | Word sequence + posterior traces |

---

## 10. LaTeX Figure for Your Paper

```latex
\begin{figure}[t]
\centering
\begin{mermaid}
graph LR
    subgraph t=1
        A[sierra] -->|V=0.8| B[sierra]
    end
    subgraph t=2
        B -->|π(charlie∣sierra)| C[charlie]
        B -->|π(delta∣sierra)| D[delta]
    end
    style C fill:#90EE90
    style D fill:#FFB6C1
\end{mermaid}
\caption{Viterbi path (green) vs.\ pruned path (red). The GPT prior favors ``charlie'' after ``sierra''.}
\label{fig:viterbi}
\end{figure}
```

---

## Bottom Line

> **Viterbi** = **"best path" through a trellis of word hypotheses**
> **Language prior** = **GPS for noisy RF neural data**
> **Your system**: **1.5 ms FFT → $ \hat{q} $ → SNR → Viterbi → 1.1% WER**