{"id":5443,"date":"2026-04-17T20:48:31","date_gmt":"2026-04-17T20:48:31","guid":{"rendered":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?p=5443"},"modified":"2026-04-17T20:48:31","modified_gmt":"2026-04-17T20:48:31","slug":"scythe-graphops-bot-eve-streamer-mcp-tool","status":"publish","type":"post","link":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/?p=5443","title":{"rendered":"SCYTHE GraphOps Bot eve-streamer MCP tool"},"content":{"rendered":"\n<p class=\"wp-block-paragraph\">The GraphBot is #GoogleAI Gemma 1b &amp; #embeddedGemma + #MetaAI Llama 3b<\/p>\n\n\n\n<h4 class=\"wp-block-heading\"># Epistemic Grounding, Sensor Provenance, and an MCP-Tool Intelligence Stack<\/h4>\n\n\n\n<p class=\"wp-block-paragraph\">A specific failure mode in AI-assisted intelligence: the system sounds confident while reasoning from nothing.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">It doesn&#8217;t hallucinate facts in the obvious sense \u2014 wrong names, wrong dates. It does something subtler. It correctly identifies that the question is &#8220;does this cluster exhibit C2 behavior?&#8221; and then generates a structurally valid, professionally worded answer using only its own prior weights. No sensor data. No graph topology. No flow-level evidence. Just pattern-completion from training.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">This is the problem GraphOps Bot was designed to refuse to do. This post is about how it refuses \u2014 and the new sensor grounding infrastructure that gives it actual data to reason from.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">The bot received a question about relay behavior and ASN path coherence. It had context. It had a graph. It had inference rules.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">What it didn&#8217;t have was observed edges \u2014 sensor-backed data that a real network flow had actually occurred between two endpoints.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">The result: `evidence_coverage: 0%`. 
`hallucination_risk: HIGH`. The guardrails fired correctly, refusing to generate the narrative. But they fired on a loop \u2014 because the underlying cause wasn&#8217;t addressed. The bot was running inference over an empty evidence graph.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">That diagnosis led to two things. First: a hard-stop guard to prevent repetition loops when `SPECULATION_DOMINANT` + `EVIDENCE_THIN` are both true. Second, and more importantly: a live sensor grounding path.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">&#8212;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">## The Epistemic Framework<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">GraphOps Bot doesn&#8217;t just run queries. It maintains an internal epistemic state \u2014 a running assessment of how much it should trust its own outputs. This isn&#8217;t a post-hoc disclaimer. It&#8217;s computed from the graph itself, before any LLM inference runs.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">The framework has four interlocking concepts:<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">### Trust Posture<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Computed from the ratio of sensor-backed edges (`obs_class: observed`) to inferred edges (`obs_class: inferred`) in the active scope. Three states:<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">&#8211; `sensor-heavy` \u2014 the majority of edges have direct provenance from a capture source (Suricata, PCAP, nDPI, TAK, SDR, Zeek). Inference is grounded. High-confidence outputs are appropriate.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">&#8211; `balanced` \u2014 mix of observed and inferred. The bot proceeds but qualifies claims.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">&#8211; `inference-heavy` \u2014 the graph is largely speculation. Narrative generation is gated. The bot tells you what it needs before it will answer.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">### Evidence Coverage<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Separate from trust posture. 
This metric tracks inferred edges that carry `evidence_refs` \u2014 backlinks to the observed facts that justified the inference. If the bot inferred a lateral movement edge from two sessions, those sessions should appear as evidence refs on that edge. Low coverage means the inference chain is unverifiable.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">### Hallucination Risk<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">A derived signal. `EVIDENCE_THIN` (few observed edges) combined with `SPECULATION_DOMINANT` (most edges are inferred) triggers `hallucination_risk: HIGH`. This flag gates specific high-stakes output types: threat narrative, attribution, behavioral classification, and C2 hypothesis.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">### Provenance Classification<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Every node and edge in the graph carries `provenance_write.source`. The sensor source set (`suricata`, `pcap_ingest`, `zeek`, `nmap`, `tcpdump`, `ndpi`, `rf_ingest`, `ais_ingest`, `sensor`) is tracked separately from inference sources (`graphops`, `rule_engine`, `embedding`). The ratio determines trust posture.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">This isn&#8217;t metadata decoration. These fields are the epistemic backbone.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">&#8212;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">## The 29-Tool Intelligence Stack<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">GraphOps Bot operates inside the SCYTHE MCP (Model Context Protocol) server. Every action it can take \u2014 every mutation, every query, every analysis \u2014 is expressed as a tool call. The stack now has 29 tools organized into three layers.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">### Layer 1: Graph Infrastructure (Registry-Managed, 24 Tools)<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">These tools are registered through `mcp_registry.build_registry()`. They carry rate limits, mutation budgets, analyst\/executor gating, and full audit trails. 
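<\/p>

<p class=\"wp-block-paragraph\">Taken together, the epistemic signals above combine mechanically. A minimal sketch, assuming illustrative field names and thresholds (the real GraphOps cutoffs are not published in this post):<\/p>

```python
def epistemic_state(edges):
    # Partition edges by observation class, as carried on each edge.
    observed = [e for e in edges if e.get('obs_class') == 'observed']
    inferred = [e for e in edges if e.get('obs_class') == 'inferred']
    total = len(edges) or 1
    sensor_fraction = len(observed) / total

    # Trust posture from the observed/inferred ratio.
    # The 0.6 / 0.3 cutoffs are illustrative, not the real ones.
    if sensor_fraction >= 0.6:
        posture = 'sensor-heavy'
    elif sensor_fraction >= 0.3:
        posture = 'balanced'
    else:
        posture = 'inference-heavy'

    # Evidence coverage: inferred edges whose evidence_refs link back
    # to the observed facts that justified the inference.
    backed = [e for e in inferred if e.get('evidence_refs')]
    coverage = len(backed) / len(inferred) if inferred else 1.0

    # Hallucination risk is derived, not asserted: EVIDENCE_THIN
    # plus SPECULATION_DOMINANT gates narrative generation.
    evidence_thin = len(observed) < 5          # illustrative floor
    speculation_dominant = len(inferred) > len(observed)
    risk = 'HIGH' if (evidence_thin and speculation_dominant) else 'LOW'

    return {'trust_posture': posture,
            'evidence_coverage': coverage,
            'hallucination_risk': risk}
```

<p class=\"wp-block-paragraph\">The point of the sketch: every output is computed from provenance fields already on the graph, before any LLM inference runs.<\/p>

<p class=\"wp-block-paragraph\">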
The bot calls them; the registry governs them.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Decay and Lifecycle<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| Tool | What it does |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">|&#8212;&#8212;|&#8212;&#8212;&#8212;&#8212;-|<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `decay_now` | Apply exponential decay to all edges \u2014 confidence-weighted, lambda-configurable. This is the forgetting mechanism. Without it, old observations contaminate current intelligence. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `prune_below_weight` | Remove edges below a weight threshold. Used after decay cycles to keep the graph lean. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `reload_rules` | Hot-reload `rule_prompt` definitions without restarting. Lets the operator update detection logic while the system is live. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Ingest and Mutation<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| Tool | What it does |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">|&#8212;&#8212;|&#8212;&#8212;&#8212;&#8212;-|<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `ingest_pcap` | Queue a PCAP ingest job \u2014 offline packet captures become graph topology. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `ingest_live_event` | Commit a batch of queued live events from the `live_ingest` pipeline into the graph. This is the governed commit path for streaming sensor data. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `run_tak_ml` | Run TAK\/ML inference on a network flow \u2014 classifies behavior, updates edge metadata. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `reinforce_edge` | Increase weight on an edge. Used when multiple observations confirm the same relationship. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `export_graph_snapshot` | Export the current graph as a full MCP envelope \u2014 used to generate offline bundles. 
|<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Scope and Streaming<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Scopes are time-windowed, filter-bounded views of the graph. The bot can create, query, and manage multiple scopes simultaneously \u2014 each representing a different analytical lens on the same underlying data.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| Tool | What it does |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">|&#8212;&#8212;|&#8212;&#8212;&#8212;&#8212;-|<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `subscribe_scope` | Create a scope subscription with time window, weight floor, and entity filter. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `unsubscribe_scope` | Destroy a scope subscription. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `clear_scope_cache` | Flush streaming scope caches \u2014 used when filters change. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `scrub_scope_time` | Remove edges from a scope older than a timestamp. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `set_scope_filter` | Adjust the `min_weight` filter on a live scope \u2014 dynamically tune signal threshold. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `list_active_scopes` | List all active scope subscriptions and their parameters. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Query and Introspection<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| Tool | What it does |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">|&#8212;&#8212;|&#8212;&#8212;&#8212;&#8212;-|<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `query_hot_entities` | Top N entities by degree \u2014 finds hubs, chokepoints, and dense sub-graphs. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `query_recent_edges` | Edges since a timestamp \u2014 temporal slice for freshness-bounded analysis. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `query_scope_stats` | Aggregated statistics for a scope \u2014 entity counts, edge density, weight distribution. 
|<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `get_edge_by_id` | Fetch a specific edge by ID \u2014 used to verify provenance and labels. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `get_engine_metrics` | Full graph metrics: node count, edge count, weight distribution, ingest rate. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `get_decay_config` | Current decay lambda, half-life, and last decay timestamp. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `set_decay_lambda` | Update the decay rate live. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `get_tak_ml_status` | TAK\/ML inference pipeline status and queue depth. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">| `get_socket_metrics` | WebSocket connection counts and scope subscription health. |<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">### Layer 2: GraphOps Reasoning Engine (ToolDef, 4 Tools)<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">These tools are the bot&#8217;s cognitive stack \u2014 they call the `GraphOpsAgent`, which runs the full RAG-grounded investigation loop.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">`graphops_investigate`<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">The flagship tool. Takes a natural-language question, extracts entities, generates a DSL query plan, executes it against the live hypergraph, and returns a structured intelligence report with five sections: Situation \/ Change \/ Structure \/ Assessment \/ Direction.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Before generating the report, it checks trust posture. If `inference-heavy`, it surfaces a prerequisite list instead of speculating.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">`graphops_dsl_exec`<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Execute a raw DSL plan directly \u2014 a list of graph query operations without the agent wrapper. 
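<\/p>

<p class=\"wp-block-paragraph\">The DSL grammar itself is not documented in this post. As a purely hypothetical illustration, a plan could be an ordered list of operation steps threaded through a dispatcher:<\/p>

```python
# Hypothetical plan shape: operation names and argument keys are
# illustrative, not the actual GraphOps DSL.
plan = [
    {'op': 'select_scope',     'args': {'scope': 'last_24h'}},
    {'op': 'match_entities',   'args': {'label': 'asn', 'value': 'AS13335'}},
    {'op': 'expand_neighbors', 'args': {'hops': 2, 'min_weight': 0.3}},
    {'op': 'rank',             'args': {'by': 'degree', 'top': 10}},
]

def exec_plan(plan, handlers):
    # Minimal interpreter: dispatch each step to its handler,
    # threading the intermediate result through the pipeline.
    result = None
    for step in plan:
        result = handlers[step['op']](result, **step['args'])
    return result
```

<p class=\"wp-block-paragraph\">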
Used when the operator wants direct control over the query plan without the investigation loop overhead.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">`graphops_entity_parse`<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Extract structured entities from free text: IPs, ASNs, domains, behavior labels, ports. Returns a list of normalized entities and a primary entity judgment. Used as a preprocessing step before graph traversal.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">`graphops_suggest`<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Given current graph state, suggest the next investigation question. This is the adaptive prompt surfacing mechanism \u2014 it reads graph tension (high-degree nodes, recent edge promotions, decayed evidence, unresolved inferences) and surfaces the probe that would reduce the most uncertainty.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">### Layer 3: Live Sensor Grounding (ToolDef, 1 Tool)<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">`graphops_sensor_stream` \u2190 *new*<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">This is the tool that closes the loop between the bot&#8217;s epistemic guardrails and the live sensor infrastructure.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">When the bot detects `trust_posture: inference-heavy` or `sensor_fraction` is low, it can call `graphops_sensor_stream`. The tool connects as a WebSocket client to the eve-streamer daemon at `:8081\/ws`, collects binary FlatBuffer `FlowEvent` frames for a configurable window (default 5 seconds, up to 200 events), decodes them using the `Nerf.FlowEvent` vtable layout, enriches each IP with GeoIP metadata (ASN, city, country, coordinates) from the MaxMind databases, and injects the flows as observed nodes and edges into the hypergraph with `provenance_write.source: suricata` and `obs_class: observed`.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">The result: the graph shifts from `inference-heavy` to `sensor-heavy`. The bot&#8217;s own epistemic state changes. 
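<\/p>

<p class=\"wp-block-paragraph\">A sketch of the bounded-collect pattern at the heart of the tool. This is not the eve-streamer client itself: the WebSocket transport, FlatBuffer decode, and GeoIP enrichment are abstracted behind a callable, and `recv_frame` is a placeholder, not a real API:<\/p>

```python
import time

def collect_frames(recv_frame, window_seconds=5.0, max_events=200):
    # Bounded collection: stop at the time window or the event cap,
    # whichever trips first. In SCYTHE each frame would be a binary
    # FlatBuffer FlowEvent from eve-streamer at :8081/ws; here
    # recv_frame is any callable, so the pattern is testable offline.
    frames = []
    deadline = time.monotonic() + window_seconds
    while time.monotonic() < deadline and len(frames) < max_events:
        frame = recv_frame()
        if frame is None:  # stream idle or closed
            break
        frames.append(frame)
    return frames
```

<p class=\"wp-block-paragraph\">Because both bounds are hard limits, a quiet wire cannot stall the bot mid-investigation.<\/p>

<p class=\"wp-block-paragraph\">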
The guardrails relax. Real intelligence becomes possible.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Parameters: `host`, `ws_port` (8081), `window_seconds` (5.0), `max_events` (200), `check_only` (returns capture metrics without injecting).<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">&#8212;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">## The Dispatch Architecture: One Chokepoint<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">All 29 tools share the same call path through `mcp_server._handle_tools_call()`. The dispatch is deliberately ordered:<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">1. Look up the tool name in `handler._tools`. If not present \u2192 `Unknown tool`.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">2. If the tool is known to the `Registry` (the 24 governance-managed tools) \u2192 route through `registry.execute()`, which applies rate limiting, mutation budgets, analyst\/executor mode gating, and audit logging.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">3. If the tool exists in `handler._tools` but not in the Registry (the 5 GraphOps ToolDef tools) \u2192 call `tool.fn(arguments)` directly.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">This ordering means infrastructure tools carry full governance guarantees while reasoning tools stay lightweight. The bot doesn&#8217;t need a mutation budget to investigate a question. It does need one to prune the graph.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">&#8212;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">## The Forensic Artifact Layer<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Parallel to the live bot capabilities, the system now produces two classes of offline forensic artifacts from the same graph state.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">### SESSION Hypergraph Offline Bundle<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">A single self-contained HTML file \u2014 no server, no CDN, no framework dependencies. It embeds the full Three.js scene, all node and edge geometry, and the session metadata. 
Open it offline, anywhere.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">The renderer: camera locks to the center of gravity of the node cluster on load and auto-rotates to give a continuous orbital view of the topology. Click a node to select it \u2014 the selection panel shows entity metadata and all connected neighbors (hop depth and neighbor cap are configurable in the header). Connected neighbors get labeled tooltips that persist until you dismiss them. Double-click empty space to reset the camera. Single-click empty space clears the selection while keeping the camera where you left it \u2014 by design, so you can orbit to a view you want, screenshot it, then dismiss.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Dense graphs (6,000+ nodes, 16,000+ edges) are handled by instanced meshes with an LOD gradient that fades distant nodes to reduce overdraw.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">### CLUSTER INTEL Offline Bundle<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">A portable, single-file Tactical Intelligence Brief. Not a graph. Not a visualization. 
A forensic artifact with operational intent.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Three data layers:<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">&#8211; Cluster Core \u2014 strategic summary: classification, threat score, confidence, behavior type, ASN, temporal characteristics, phase, control origin, mobility assessment, executive summary, recommendations<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">&#8211; Autopsy Layer \u2014 evidence: participating hosts with the IPs and ports they interacted with, flows, sessions, behavior groups<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">&#8211; Derived Intelligence Layer \u2014 computed heuristics: heuristic intent scores (Abandoned\/Decaying, Staging Infrastructure, Traffic Relay Mesh), activation cascade model, confidence breakdown by component, detection rationale<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">The right panel renders a deterministic SVG flow diagram \u2014 not a force graph, a fixed layout. Reproducible. Court-friendly. The left panel carries the narrative. Evidence is explicit and row-level. Actions are derived from the data, not generic.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Both artifact types are designed to work as chainable intelligence units: exportable to SOC pipelines, incident response workflows, or compliance reporting without modification.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">&#8212;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">## What the Log Line Actually Means<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>[mcp] loaded 24 tools from mcp_registry\n[eve_sensor_mcp] registered graphops_sensor_stream tool\n[graphops_copilot] registered 5 MCP tools: investigate, dsl_exec, entity_parse, suggest, sensor_stream<\/code><\/pre>\n\n\n\n<p class=\"wp-block-paragraph\">Three lines. Twenty-nine tools. 
A live bot that knows when it doesn&#8217;t know enough to answer \u2014 and now has a mechanism to go get what it needs.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">The epistemic framework has been here since Stage 7. The guardrails have been firing correctly the whole time. What&#8217;s new is that when they fire, the system has a response other than &#8220;I can&#8217;t answer that.&#8221;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">It goes and gets the data.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">&#8212;<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">GraphOps Bot runs on SCYTHE \u2014 the RF\/network intelligence platform built on NerfEngine. The live sensor stream is provided by eve-streamer, a Go daemon using eBPF packet capture with FlatBuffer binary framing over WebSocket.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The GraphBot is #GoogleAI Gemma 1b &amp; #embeddedGemma + #MetaAI Llama 3b # Epistemic Grounding, Sensor Provenance, and an MCP-Tool Intelligence Stack A specific failure mode in AI-assisted intelligence: the system sounds confident while reasoning from nothing. It doesn&#8217;t hallucinate facts in the obvious sense \u2014 wrong names, wrong dates. It does something subtler. 
It&hellip;&nbsp;<\/p>\n","protected":false},"author":2,"featured_media":5294,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"neve_meta_sidebar":"","neve_meta_container":"","neve_meta_enable_content_width":"","neve_meta_content_width":0,"neve_meta_title_alignment":"","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"","neve_meta_disable_footer":"","neve_meta_disable_title":"","footnotes":""},"categories":[11,13],"tags":[],"class_list":["post-5443","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-signal_scythe","category-the-truben-show"],"_links":{"self":[{"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/5443","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5443"}],"version-history":[{"count":0,"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/posts\/5443\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=\/wp\/v2\/media\/5294"}],"wp:attachment":[{"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5443"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5443"},{"taxonomy":"post_tag","embeddable":true,"
href":"https:\/\/neurosphere-2.tail52f848.ts.net\/wordpress\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5443"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}