{"id":4163,"date":"2025-10-23T22:18:41","date_gmt":"2025-10-23T22:18:41","guid":{"rendered":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?p=4163"},"modified":"2025-10-23T22:18:41","modified_gmt":"2025-10-23T22:18:41","slug":"rl-driven-rf-neuromodulation","status":"publish","type":"post","link":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/?p=4163","title":{"rendered":"RL-Driven RF Neuromodulation"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">\ud83e\udde0 Reinforcement Learning Takes the Wheel: A Smarter Approach to RF Neuromodulation<\/h2>\n\n\n\n<figure class=\"wp-block-embed aligncenter is-type-wp-embed is-provider-spectrcyde wp-block-embed-spectrcyde\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"ImCvSJyw3H\"><a href=\"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?page_id=4157\">RL-Driven RF Neuromodulation<\/a><\/blockquote><iframe loading=\"lazy\" class=\"wp-embedded-content\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"&#8220;RL-Driven RF Neuromodulation&#8221; &#8212; Spectrcyde\" src=\"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?page_id=4157&#038;embed=true#?secret=a51FD8Yk8L#?secret=ImCvSJyw3H\" data-secret=\"ImCvSJyw3H\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\"><\/iframe>\n<\/div><\/figure>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><a href=\"https:\/\/gemini.google.com\/share\/c5c4bcb9b31f\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"559\" src=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-23.png\" alt=\"\" class=\"wp-image-4166\" srcset=\"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-23.png 1024w, 
https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-23-300x164.png 300w, https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/wp-content\/uploads\/2025\/10\/image-23-768x419.png 768w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><\/figure>\n<\/div>\n\n\n<p class=\"wp-block-paragraph\">Neuromodulation\u2014using techniques like radiofrequency (RF) energy to precisely tune brain activity\u2014holds immense promise for treating neurological conditions. However, achieving effective and safe closed-loop RF neuromodulation often relies on <strong>laborious, hand-tuned schedules<\/strong> for parameters like beam angle and power<sup>1<\/sup>.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">What if an intelligent agent could learn the optimal settings on its own, ensuring maximum therapeutic effect while strictly adhering to safety limits?<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Our work demonstrates that a <strong>Reinforcement Learning (RL) agent<\/strong> can discover superior single-beam settings within a constrained, safety-aware loop, outperforming traditional scheduled approaches<sup>2<\/sup>.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">\ud83d\ude80 The RL-Driven Solution: DQN for Precision Tuning<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">We trained a <strong>Deep Q-Network (DQN)<\/strong> to manage four critical parameters simultaneously: <strong>power ($P$), frequency ($f$), phase ($\\phi$), and angle ($\\theta$)<\/strong><sup>3<\/sup>.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">The agent&#8217;s goal is to maximize a target-state proxy while keeping the patient safe. 
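<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">This trade-off can be pictured with a short, self-contained sketch. The quadratic SAR proxy, the weight values, and the variable names below are illustrative assumptions for intuition only, not quantities from the paper.<\/p>\n\n\n\n

```python
# Toy safety-aware reward: intensity at the target is good, while
# absorbed power (SAR) and abrupt parameter changes (slew) are
# penalized. All weights and the SAR proxy are made-up placeholders.
def reward(i_target, power, slew, alpha=1.0, beta=0.5, gamma=0.1):
    sar = 0.02 * power**2  # toy proxy: absorption grows with power
    return alpha * i_target - beta * sar - gamma * slew

# A smooth, low-power action that still hits the target...
good = reward(i_target=1.0, power=2.0, slew=0.1)
# ...outscores a high-power, fast-slewing one.
bad = reward(i_target=1.0, power=8.0, slew=2.0)
assert good > bad
```

\n\n\n\n<p class=\"wp-block-paragraph\">In such a sketch, tuning $\\beta$ and $\\gamma$ shifts the policy between aggressive targeting and conservative, low-heating behavior.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">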
It achieves this by optimizing its reward function:<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">$$r_{t}=\\alpha~I_{target}-\\beta~SAR(P)-\\gamma~slew$$<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">This reward encourages <strong>maximizing the measured intensity at the target ($\\alpha~I_{target}$)<\/strong> while <strong>penalizing Specific Absorption Rate (SAR) ($\\beta~SAR(P)$)<\/strong> to maintain safety, and discourages abrupt parameter changes through the slew term ($\\gamma~slew$)<sup>4<\/sup>. SAR is the crucial safety constraint, representing the rate at which RF energy is absorbed by the body<sup>5<\/sup>.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">\ud83d\udcca Results: Outperforming the Baseline<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">The RL agent demonstrated significant improvements over a traditional <strong>hand-tuned sweep schedule baseline<\/strong><sup>6<\/sup>.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Improved Efficacy:<\/strong> Our DQN agent achieved a <strong>25% improvement in evaluation return<\/strong> compared to the baseline<sup>7<\/sup>.<\/li>\n\n\n\n<li><strong>Optimal Performance:<\/strong> The agent reached a median episode return of <strong>100<\/strong><sup>8<\/sup>.<\/li>\n\n\n\n<li><strong>Better State Tracking:<\/strong> The decrease in state reconstruction error alongside increased return suggests the agent is <strong>more effective at tracking the target state<\/strong><sup>9<\/sup>. The agent reduced state reconstruction error to <strong>0.05 (Mean Squared Error)<\/strong><sup>10<\/sup>.<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">Figure 2 compares the two policies:<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Fig. 2. Evaluation returns. 
The DQN agent shows a higher evaluation return than the hand-tuned baseline<sup>11<\/sup>.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">\ud83c\udfaf Safety and Stability<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">A key component of this research is ensuring the agent operates within safety constraints. The intrinsic penalty on SAR in the reward function is vital. An ablation study confirmed the model&#8217;s ability to handle the safety proxy, consistently achieving high returns<sup>12<\/sup>.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Furthermore, the <strong>training reward curve (Fig. 1)<\/strong> shows the DQN agent&#8217;s consistent and rapid learning, with the episode return steadily increasing over 5 episodes:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th><strong>Episode<\/strong><\/th><th><strong>Approximate Episode Return<\/strong><\/th><\/tr><\/thead><tbody><tr><td>1<\/td><td>50<\/td><\/tr><tr><td>2<\/td><td>60<\/td><\/tr><tr><td>3<\/td><td>78<\/td><\/tr><tr><td>4<\/td><td>90<\/td><\/tr><tr><td>5<\/td><td>95+<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">The agent consistently learns to maximize its return while minimizing the SAR penalty, representing a <strong>safe and efficient control policy<\/strong><sup>13<\/sup>.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">\ud83d\udca1 Conclusion and Future Direction<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Our results show that an RL-driven approach to RF neuromodulation can <strong>consistently outperform a scheduled baseline<\/strong> within the same safety proxy<sup>14<\/sup>. 
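<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">To illustrate what a factorized discrete action space looks like in code: rather than one output over every joint combination of settings, the network keeps a small head per parameter. The grid sizes and the linear stand-ins for the Q-network below are invented for illustration and are not the paper&#8217;s implementation.<\/p>\n\n\n\n

```python
import numpy as np

# One Q-head per parameter instead of one head over the joint space.
# Grid sizes below are invented placeholders, not the paper's values.
GRIDS = {'power': 5, 'frequency': 4, 'phase': 8, 'angle': 9}

rng = np.random.default_rng(0)
# Linear stand-in for the shared network: one weight matrix per head,
# all reading the same 6-dim state features.
heads = {name: rng.normal(size=(6, n)) for name, n in GRIDS.items()}

def act(state):
    # Greedy joint action: independent argmax in each head.
    return {name: int(np.argmax(state @ w)) for name, w in heads.items()}

action = act(rng.normal(size=6))
assert set(action) == set(GRIDS)
assert all(0 <= a < GRIDS[n] for n, a in action.items())
```

\n\n\n\n<p class=\"wp-block-paragraph\">With these illustrative grids the network emits 5 + 4 + 8 + 9 = 26 Q-values rather than 5 \u00d7 4 \u00d7 8 \u00d7 9 = 1,440 joint actions, which is what keeps such a DQN compact.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">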
This work validates the use of a compact DQN with factorized discrete heads for fine-tuning RF parameters<sup>15<\/sup>.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">While this study uses a toy-but-physics-inspired environment<sup>16<\/sup>, future work will focus on:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Richer phantoms<sup>17<\/sup>.<\/li>\n\n\n\n<li>Integrating real scanner latencies<sup>18<\/sup>.<\/li>\n\n\n\n<li>Addressing multi-beam coupling<sup>19<\/sup>.<\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">This is an important step toward autonomous, safe, and effective closed-loop RF neuromodulation therapies.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Sources<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/drive-thirdparty.googleusercontent.com\/32\/type\/application\/pdf\" alt=\"\"\/><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">PDF<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Page 1<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">RL-Driven RF Neuromodulation.pdf<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Closed-loop RF neuromodulation often relies on hand-tuned schedules over beam angle and power.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">We investigate whether a value-based agent can discover superior single-beam settings in a constrained, safety-aware loop. Our contributions:\u2026<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">The agent consistently outperforms the scheduled baseline within the same safety proxy, and the linear decoder&#8217;s reconstruction error decreases alongside return, suggesting better state tracking.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">\u2026a compact DQN with factorized discrete heads for {power, frequency, phase, angle},\u2026<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Abstract\u2014We train a DQN over power, frequency, phase, angle to maximize a target-state proxy while penalizing SAR.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">\u2026measured intensity follows a single-beam lobe with Gaussian mainlobe width. Reward $r_{t}=\\alpha~I_{target}-\\beta~SAR(P)-\\gamma~slew$.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Four discrete heads: $P\\in\\mathcal{P}$, $f\\in\\mathcal{F}$, $\\phi\\in\\Phi$, $\\theta\\in\\Theta$. The joint action applies element-wise synth;\u2026<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Compared to a hand-tuned schedule baseline, our agent improves evaluation return by 25% with median episode return 100, and reduces state reconstruction error to 0.05.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Baseline is a hand-tuned sweep schedule over angle\/power with fixed $f$, $\\phi$.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">\u2026a toy-but-physics-inspired environment with SAR proxy and camera-like noise,\u2026<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Future work: richer phantoms, real scanner latencies, and multi-beam 
coupling.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>\ud83e\udde0 Reinforcement Learning Takes the Wheel: A Smarter Approach to RF Neuromodulation Neuromodulation\u2014using techniques like radiofrequency (RF) energy to precisely tune brain activity\u2014holds immense promise for treating neurological conditions. However, achieving effective and safe closed-loop RF neuromodulation often relies on laborious, hand-tuned schedules for parameters like beam angle and power1. What if an intelligent agent&hellip;&nbsp;<\/p>\n","protected":false},"author":2,"featured_media":4165,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"neve_meta_sidebar":"","neve_meta_container":"","neve_meta_enable_content_width":"","neve_meta_content_width":0,"neve_meta_title_alignment":"","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"","neve_meta_disable_footer":"","neve_meta_disable_title":"","footnotes":""},"categories":[11],"tags":[],"class_list":["post-4163","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-signal_scythe"],"_links":{"self":[{"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/4163","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4163"}],"version-history":[{"count":0,"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/4163\/revisions"}],"wp:featuredme
dia":[{"embeddable":true,"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/media\/4165"}],"wp:attachment":[{"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4163"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4163"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4163"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}