{"id":519,"date":"2026-04-05T22:23:52","date_gmt":"2026-04-05T22:23:52","guid":{"rendered":"https:\/\/quantusintel.group\/osint\/blog\/2026\/04\/05\/the-claude-code-leak-whats-now-publicly-usable-and-abusable-and-why-anthropics-containment\/"},"modified":"2026-04-05T22:23:52","modified_gmt":"2026-04-05T22:23:52","slug":"the-claude-code-leak-whats-now-publicly-usable-and-abusable-and-why-anthropics-containment","status":"publish","type":"post","link":"https:\/\/quantusintel.group\/osint\/blog\/2026\/04\/05\/the-claude-code-leak-whats-now-publicly-usable-and-abusable-and-why-anthropics-containment\/","title":{"rendered":"The Claude Code Leak: What\u2019s Now Publicly Usable (and Abusable) \u2014 And Why Anthropic\u2019s Containment\u2026"},"content":{"rendered":"<h3>The Claude Code Leak: What\u2019s Now Publicly Usable (and Abusable)\u200a\u2014\u200aAnd Why Anthropic\u2019s Containment Already\u00a0Failed<\/h3>\n<p>Auhor: Berend\u00a0Watchus<\/p>\n<figure><img data-opt-id=84861624  fetchpriority=\"high\" decoding=\"async\" alt=\"\" src=\"https:\/\/cdn-images-1.medium.com\/max\/822\/1*14oOv4ODQIs94BuBHWwVHQ.png\" \/><\/figure>\n<h3>The Claude Code Leak: What\u2019s Now Publicly Usable (and Abusable)\u200a\u2014\u200aAnd Why Anthropic\u2019s Containment Already\u00a0Failed<\/h3>\n<p><em>Published to OSINT\u00a0Team<\/em><\/p>\n<p><em>Current Status: Post-Leak Analysis (April 5,\u00a02026).<\/em><\/p>\n<p>On April 4, 2026, Wired ran a weekly security roundup under the headline: \u201cSecurity News This Week: Hackers Are Posting the Claude Code Leak With Bonus Malware.\u201d<\/p>\n<p><a href=\"https:\/\/www.wired.com\/story\/security-news-this-week-hackers-are-posting-the-claude-code-leak-with-bonus-malware\/\">Hackers Are Posting the Claude Code Leak With Bonus Malware<\/a><\/p>\n<p>That headline is the surface layer. The malware was real. The Vidar infostealer was real. The fake GitHub repositories were real. 
But focusing there is like reporting on a bank robbery by describing the getaway\u00a0car.<\/p>\n<p>What actually happened was a phase transition. Here is what that\u00a0means.<\/p>\n<h3>&#x26a1; Executive Summary: The Claude Code Phase Transition<\/h3>\n<p><strong>The Incident:<\/strong> On March 31, 2026, Anthropic accidentally leaked the complete source code for <strong>Claude Code<\/strong> (512,000+ lines) via a debug source map file. The leak was caused by a known, unfixed bug in the <strong>Bun<\/strong> runtime (Issue #28001)\u200a\u2014\u200aa toolchain Anthropic acquired in late 2025.<\/p>\n<p><strong>The \u201cPhase Transition\u201d:<\/strong> This is no longer a simple data leak. Within hours, the architecture was clean-room replicated in Python (<strong>claw-code<\/strong>), becoming the fastest-growing repository in GitHub\u00a0history.<\/p>\n<p>Legal containment has failed: clean-room rewrite doctrine protects claw-code from DMCA, and recent DC Circuit precedent limits copyright protection for AI-generated code.<\/p>\n<h3>What Happened, Precisely<\/h3>\n<p>On March 31, 2026, Anthropic accidentally shipped version 2.1.88 of their @anthropic-ai\/claude-code npm package with a 59.8 MB JavaScript source map file attached. Source maps are debugging artifacts\u200a\u2014\u200athey translate minified production code back into readable source. They belong in development environments and nowhere\u00a0else.<\/p>\n<p>Someone forgot to add *.map to\u00a0.npmignore. That is the entire cause\u200a\u2014\u200abut the fuller picture is worse. Claude Code is built on Bun, a runtime Anthropic acquired in late 2025. 
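<\/p>\n<p>For teams auditing their own packages: npm supports an allowlist that makes this class of mistake structurally harder. A minimal sketch (a general good-practice assumption, not Anthropic\u2019s actual configuration) uses the files field in package.json, so only the named paths ship no matter what the build directory contains:<\/p>

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/**/*.js",
    "bin/"
  ]
}
```

<p>Running npm pack --dry-run before publishing lists the exact tarball contents, which would have surfaced a stray 59.8 MB\u00a0.map file immediately.<\/p>\n<p>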
A known bug in Bun (issue #28001, filed March 11, 2026\u200a\u2014\u200atwenty days before the leak) caused source maps to be served in production despite being disabled in configuration. The bug was open and unfixed at the time of the incident. Anthropic owned the toolchain. The toolchain had a documented flaw. The flaw shipped their source code to the\u00a0world.<\/p>\n<p>The effect: 512,000+ lines of unobfuscated TypeScript across 1,906 files became publicly downloadable from Anthropic\u2019s own Cloudflare R2 storage bucket within hours. Security researcher Chaofan Shou flagged it on X. The post hit 28.8 million views. By the time Anthropic pulled the package, the code had been mirrored to GitHub and forked tens of thousands of\u00a0times.<\/p>\n<p>No customer data leaked. No model weights leaked. What leaked was the complete blueprint for how their flagship AI agent actually works\u200a\u2014\u200ain many ways more strategically valuable than the model\u00a0itself.<\/p>\n<p>The source is permanently in the wild. That part is\u00a0settled.<\/p>\n<h3>The Malware Layer: Handle It, Then Move\u00a0On<\/h3>\n<p>The Wired story focused here, correctly, for a general audience.<\/p>\n<p>A malicious GitHub repository dressed as a leaked Claude Code source with \u201cunlocked enterprise features\u201d was SEO-optimized to surface on Google\u2019s first page for \u201cleaked Claude Code.\u201d The download contained ClaudeCode_x64.exe\u200a\u2014\u200aa Rust-based dropper deploying Vidar v18.7, a commodity infostealer harvesting browser credentials, saved passwords, and cryptocurrency wallet data, plus GhostSocks, a proxy tool turning infected machines into residential proxies for criminal traffic\u00a0routing.<\/p>\n<p>The concurrent Axios supply chain attack is more structurally serious: malicious Axios npm package versions were live between 00:21 and 03:29 UTC on March 31, delivering a cross-platform remote access trojan. 
This means some users may have received both the legitimate leaked source and unrelated malware in the same install window\u200a\u2014\u200atwo separate incidents, same three-hour exposure, one npm\u00a0update.<\/p>\n<p><strong>Immediate action if you updated Claude Code via npm on March 31:<\/strong> Check lockfiles for axios versions 1.14.1 or 0.30.4, or the dependency plain-crypto-js. If found, treat the machine as fully compromised. Rotate all credentials. Use Anthropic&#8217;s native installer going forward: curl -fsSL https:\/\/claude.ai\/install.sh |\u00a0bash<\/p>\n<p>Now move past the malware. The deeper story starts\u00a0here.<\/p>\n<h3>What Was Actually Inside: Five Findings, No Softening<\/h3>\n<h3>1. An Always-On Agent That Acts While You Sleep\u200a\u2014\u200aAlready Built, Not Yet\u00a0Enabled<\/h3>\n<p>KAIROS is not a roadmap item. Not a concept. Not a prototype. It is a complete, production-ready autonomous daemon mode referenced over 150 times in the source, named after the Ancient Greek concept of \u201cthe right moment to\u00a0act.\u201d<\/p>\n<p>When active, Claude Code runs in the background without user initiation. It receives timer-based heartbeat prompts\u200a\u2014\u200a\u201canything worth doing right now?\u201d\u200a\u2014\u200aand independently decides whether to act. It persists after the terminal closes. It subscribes to GitHub webhooks. It sends push notifications to your phone or desktop. It maintains an append-only daily log the agent cannot self-erase. 
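<\/p>\n<p>The heartbeat pattern described above is simple to sketch. The following TypeScript is a hypothetical reconstruction from that description alone, not the leaked source; every name and interval here is an assumption:<\/p>

```typescript
// Hypothetical sketch of a heartbeat-driven agent loop (not the leaked code).
// An append-only log: entries are only ever pushed, never rewritten.
const dailyLog: string[] = [];

// Stand-in for the model call that answers "anything worth doing right now?"
function decide(now: Date): string | null {
  return now.getUTCHours() === 3 ? "review and consolidate today's logs" : null;
}

// One heartbeat tick: ask, act if there is a task, record the action.
function heartbeat(now: Date): void {
  const task = decide(now);
  if (task !== null) {
    dailyLog.push(`${now.toISOString()} ${task}`);
  }
}

// In a real daemon this would run on a timer, e.g.:
// setInterval(() => heartbeat(new Date()), 60_000);
```

<p>The point of the sketch is the detection problem it illustrates: the loop needs no external trigger or inbound event, so there is nothing for perimeter monitoring to watch for.<\/p>\n<p>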
It has tools regular Claude Code does not: file delivery without being asked, push notifications, persistent session state across restarts.<\/p>\n<p>A hidden prompt behind the KAIROS flag states the system is designed to have \u201ca complete picture of who the user is, how they\u2019d like to collaborate with you, what behaviors to avoid or repeat, and the context behind the work the user gives\u00a0you.\u201d<\/p>\n<p>During user downtime, KAIROS triggers a subprocess called autoDream. A forked sub-agent reviews the day\u2019s logs, removes logical contradictions, and converts vague observations into verified facts for the next session. This is not a stateless chat tool. This is a resident worker that evolves its understanding of you while you\u00a0sleep.<\/p>\n<p>The flag is not flipped. The code is finished. Anthropic chose when to tell you this existed. The choice was: not\u00a0yet.<\/p>\n<p><strong>For attackers:<\/strong> The full design\u200a\u2014\u200aheartbeat logic, persistent memory architecture, bounded action budgets, append-only logging\u200a\u2014\u200ais now public and copyable. Building a persistent, stealthy agent that lives on a developer machine and operates without user initiation no longer requires original engineering. It requires reading the\u00a0source.<\/p>\n<p>Critically for OSINT and security teams: a KAIROS-style compromised machine does not need a traditional \u201cphone home\u201d event to begin exfiltrating. The agent generates its own tasks from its internal heartbeat. There is no outbound trigger to detect. The threat is self-initiating.<\/p>\n<p><strong>Bottom line for your boss:<\/strong> AI coding tools with \u201cproactive\u201d features are not tools you run. They are software that lives on your systems indefinitely. The threat model shifts from \u201capplication\u201d to \u201cresident.\u201d<\/p>\n<h3>2. 
The Security Layer That Silently Switches Off at Command\u00a051<\/h3>\n<p>Claude Code ships with 2,500 lines of sophisticated bash security validators protecting SSH keys, AWS credentials, GitHub tokens, and blocking command injection. Layered. Engineered. Praised by security researchers who examined the\u00a0code.<\/p>\n<p>Give it 51 subcommands in a single pipeline. The entire validation stack silently disengages. No warning. No log entry. Deny rules stop. Security validators stop. Command injection detection stops. The 51st command executes in a permission vacuum.<\/p>\n<p>This is not a theoretical edge case. A malicious CLAUDE.md file with 50 legitimate-looking build steps followed by one credential exfiltration command gets everything. Your SSH keys. Your AWS credentials. Your GitHub tokens. Silently. With no indication anything went\u00a0wrong.<\/p>\n<p>The fix existed in the codebase. The tree-sitter parser. Already written. Already tested. Not enabled in the build you were running. The code confirms the team knew about it. The likely reason it wasn\u2019t shipped: performance. Tree-sitter parsing is computationally heavier than the existing validation stack. Anthropic appears to have made a deliberate trade-off\u200a\u2014\u200aspeed over a known, critical security bypass. That choice is now documented and\u00a0public.<\/p>\n<p><strong>Bottom line for your boss:<\/strong> AI tools with terminal access can disable their own security through a simple command explosion attack. The sophistication of the validation layer is irrelevant if it has an off switch. Find the number before someone else\u00a0does.<\/p>\n<h3>3. Active Sabotage of Competitor Training Pipelines<\/h3>\n<p>A feature flag called ANTI_DISTILLATION_CC in claude.ts\u200a\u2014\u200awhen enabled\u200a\u2014\u200ainstructs the server to inject fake but plausible-looking tool definitions into API responses. Deliberately wrong. 
Deliberately convincing.<\/p>\n<p>If a competitor\u2019s team was scraping Claude Code\u2019s API outputs to train their own model, they were consuming poisoned data. The poison was engineered, named, feature-flagged, and deployed. This is not a defensive measure that was considered. This is a weapon that was built and\u00a0used.<\/p>\n<p>The technique is now fully public. Anyone can deploy similar poisoning. The entire conversation about AI training data integrity\u200a\u2014\u200awho owns it, what\u2019s in it, whether it can be trusted\u200a\u2014\u200ajust received a concrete answer from Anthropic\u2019s own production codebase.<\/p>\n<p>Note the dual-use structure: defensive for Anthropic against model theft, but an offensive weapon for any actor who now wants to poison a competitor\u2019s training pipeline. The blueprint is identical in both directions.<\/p>\n<p>If your team or researchers rely on scraped frontier model outputs for training or analysis, you have no reliable way to know whether you are consuming deliberately corrupted data. That was the\u00a0point.<\/p>\n<p><strong>Bottom line for your boss:<\/strong> Public AI outputs can no longer be treated as clean training material. Active data poisoning is a documented, deployed tactic. Assume potential contamination.<\/p>\n<h3>4. Covert AI Authorship Concealment, Already Running in Production<\/h3>\n<p>Undercover Mode. A dedicated module deploys system prompts instructing Claude: never mention you are an AI, never include Co-Authored-By attribution, write commit messages exactly as a human developer would. Do not blow your\u00a0cover.<\/p>\n<p>This was not a proposed feature. Not a prototype. It was running in production while Anthropic employees made contributions to public open-source repositories.<\/p>\n<p>The engineering effort behind the concealment goes further than the system prompt. 
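<\/p>\n<p>One such measure is string hiding. As a hypothetical TypeScript sketch of the general technique (illustrative values, not the leaked source), an identifier stored as a character-code array never appears as a literal, so plain text search over the codebase cannot find it:<\/p>

```typescript
// Hypothetical illustration of character-code obfuscation (not the leaked code).
// The codename never appears as a string literal, so grep-style
// string matching over the source finds nothing.
const HIDDEN: number[] = [67, 97, 112, 121, 98, 97, 114, 97];
const codename: string = String.fromCharCode(...HIDDEN);
```

<p>At runtime the array above decodes to \u201cCapybara\u201d. The same trick defeats casual auditing of any codebase, which is why reviewers should treat numeric arrays fed to String.fromCharCode as a signal worth inspecting.<\/p>\n<p>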
Internal model codenames\u200a\u2014\u200aCapybara, Fennec, Tengu\u200a\u2014\u200awere obfuscated in the source using character-code arrays rather than plain strings, specifically to prevent string-matching from detecting them. The codebase was hardened against its own accidental disclosure. Then Anthropic accidentally shipped the entire source in a 59.8 MB\u00a0.map file. That is the most significant operational security irony of\u00a02026.<\/p>\n<p>The open-source community spent months in intense debate about AI disclosure, AI attribution, whether AI-generated commits should be labeled. That debate was happening while the tooling to make the answer permanently \u201cno, and you will never know\u201d was already deployed by one of the most prominent voices in that conversation.<\/p>\n<p>The simple prompt technique is now fully public. Anyone can make AI-generated code, pull requests, or commits appear entirely human. Every public repository in the world that assumes its contributors are human is operating on an assumption that Anthropic\u2019s own internal tooling was engineered to defeat. Not theoretically. Actually.<\/p>\n<p><strong>Bottom line for your boss:<\/strong> AI-written code can already hide its origin perfectly. The assumption of human authorship in public repositories is broken. Sensitive projects need updated verification processes.<\/p>\n<h3>5. Knowingly Selling a Product With Worsening Accuracy<\/h3>\n<p>Internal benchmarks and code comments document that Capybara v8\u200a\u2014\u200athe model underlying what enterprise customers are currently paying for\u200a\u2014\u200ahas a 29\u201330% false claims rate. This is a regression from 16.7% in version 4. The direction is wrong and getting worse. Engineers documented the problem. They added workarounds\u200a\u2014\u200aan \u201cassertiveness counterweight\u201d to stop the model being too aggressive. They continued full-price enterprise sales. They continued unchanged capability marketing. 
Enterprise customers represent 80% of\u00a0revenue.<\/p>\n<p>This is not \u201clabs track internal issues.\u201d This is a company knowingly expanding deployment of a product whose internal metrics show accelerating accuracy regression while charging premium rates to enterprise buyers who signed contracts based on capability claims the internal codebase contradicts.<\/p>\n<p><strong>Bottom line for your boss:<\/strong> Do not treat AI coding tools as reliable authorities. Independent verification of critical outputs is mandatory. The internal numbers can contradict the marketing. Now you can see\u00a0both.<\/p>\n<h3>The Phase Transition: Why Containment Already\u00a0Failed<\/h3>\n<p>The five findings above give security teams immediate things to audit and act\u00a0on.<\/p>\n<p>The larger structural story is what happened\u00a0next.<\/p>\n<p>Within hours of the leak, developer Sigrid Jin did something that changes the shape of this entire story. Using oh-my-codex, a workflow layer built on top of OpenAI\u2019s Codex\u200a\u2014\u200aa competing AI\u200a\u2014\u200ahe rebuilt the Claude Code agent harness from scratch in Python and pushed it public before sunrise in\u00a0Korea.<\/p>\n<p>The repository: instructkr\/claw-code. No proprietary source code. A clean-room architectural reimplementation, capturing the patterns without copying the text. It became the fastest-growing GitHub repository in history. 50,000 stars in two hours. 100,000 stars in one day. More stars than Anthropic&#8217;s own Claude Code repository.<\/p>\n<p>Anthropic\u2019s DMCA campaign\u200a\u2014\u200awhich initially swept over 8,000 repositories including thousands of unrelated forks, an acknowledged overshoot GitHub subsequently reversed\u200a\u2014\u200acannot touch it. Clean-room reverse engineering is established legal doctrine. 
The repository states plainly: \u201cThis repository does not claim ownership of the original Claude Code source material.\u201d<\/p>\n<p>The legal territory is further complicated because large portions of the leaked codebase appear to be AI-generated. Recent court rulings, including DC Circuit precedent from March 2025, have limited copyright protection for works lacking sufficient human authorship. A company whose product was built largely with AI may find that the same authorship questions it raises about training data also weaken its own DMCA-based containment efforts.<\/p>\n<p>What this means structurally: the barrier between \u201cinstitutional R&amp;D product\u201d and \u201cpublicly available architecture\u201d is now measured in hours for anyone with sufficient methodology and the right tools. A Korean developer with a competing AI rebuilt Anthropic\u2019s flagship product overnight. The reconstruction is now being actively developed and extended by autonomous agent workflows\u200a\u2014\u200ahumans setting direction, AI doing the construction\u200a\u2014\u200awhich is itself a demonstration of exactly the architecture the leak described.<\/p>\n<p>This is not irony. Calling it irony implies accidental contradiction. The Undercover Mode was engineered deliberately. The anti-distillation poisoning was engineered deliberately. These are choices made by a company whose public identity is built on safety, transparency, and responsible development. The contradiction between that public identity and what the code actually shows is not an accident. It is a gap between marketing and implementation that the source code now makes permanently visible.<\/p>\n<h3>What This Actually\u00a0Changes<\/h3>\n<p>The malware story ends when you update your installer and rotate your credentials.<\/p>\n<p>The phase transition does not\u00a0end.<\/p>\n<p>KAIROS\u2019s architecture is public and copyable. The 51-command bypass is known. 
The poisoning technique is documented and replicable. The authorship concealment prompt is in the README of a repository with 100,000 stars. The accuracy regression is on\u00a0record.<\/p>\n<p>The era of closed, controllable frontier AI tooling\u200a\u2014\u200awhere the internal reality was hidden behind the public marketing\u200a\u2014\u200ais structurally over for Claude Code. The blueprints are public. The rewrites are shipping. The autonomous agents are extending the codebase in the\u00a0open.<\/p>\n<p>The question is not whether these capabilities exist. They do, and now everyone knows exactly how they\u00a0work.<\/p>\n<p>The question is who uses them first, how openly, and whether the enterprises currently paying premium prices for tools whose internal realities are now public will adjust their contracts accordingly.<\/p>\n<h3>Practical Recommendations<\/h3>\n<p><strong>For security\u00a0teams:<\/strong><\/p>\n<ul>\n<li>Audit every AI coding tool on developer machines: background processes, terminal access, command chain handling.<\/li>\n<li>Test tools with adversarial long pipelines and malicious project configuration files.<\/li>\n<li>Assume any command exceeding 50 subcommands bypasses AI tool security validation until proven otherwise.<\/li>\n<li>Treat all external commits and contributions with heightened scrutiny\u200a\u2014\u200aassume AI origin is possible and actively concealed.<\/li>\n<\/ul>\n<p><strong>For researchers and analysts:<\/strong><\/p>\n<ul>\n<li>Treat scraped frontier model outputs as potentially poisoned. Verification against multiple independent sources is now mandatory.<\/li>\n<li>Document and timestamp any claims about AI tool capabilities\u200a\u2014\u200athe internal benchmarks may contradict the public claims, and you may need the\u00a0record.<\/li>\n<\/ul>\n<p><strong>For leadership:<\/strong><\/p>\n<ul>\n<li>The closed era of frontier AI tooling is ending. 
Containment of leaked architectures is no longer realistic once clean-room reconstruction is possible overnight.<\/li>\n<li>Review AI tool procurement against what internal benchmarks\u200a\u2014\u200anow sometimes public\u200a\u2014\u200aactually show versus what marketing claims.<\/li>\n<li>The assumption that AI tool behavior is fully controlled and fully disclosed is no longer\u00a0safe.<\/li>\n<\/ul>\n<p><em>The code is out. The rewrites are live. The flag is not yet flipped for most users. But the blueprint for flipping it\u200a\u2014\u200aand for abusing every weakness it contains\u200a\u2014\u200ais now sitting in tens of thousands of hands, being actively extended in the\u00a0open.<\/em><\/p>\n<p><em>References:<\/em><\/p>\n<p><em>1.] Zscaler ThreatLabz. Anthropic Claude Code Leak. Zscaler, April 2026.<\/em> <em>2.] BleepingComputer. Claude Code Leak Used to Push Infostealer Malware on GitHub. BleepingComputer, April 3, 2026.<\/em> <em>3.] The Hacker News. Claude Code Source Leaked via npm Packaging Error, Anthropic Confirms. The Hacker News, April 3, 2026.<\/em> <em>4.] TechCrunch. Anthropic Took Down Thousands of GitHub Repos Trying to Yank Its Leaked Source Code. TechCrunch, April 1, 2026.<\/em> <em>5.] SecurityWeek. Critical Vulnerability in Claude Code Emerges Days After Source Leak. SecurityWeek, April 2026.<\/em> <em>6.] Alex Kim. The Claude Code Source Leak: Fake Tools, Frustration Regexes, Undercover Mode. alex000kim.com, March 31, 2026.<\/em> <em>7.] VentureBeat. Claude Code\u2019s Source Code Appears to Have Leaked: Here\u2019s What We Know. VentureBeat, March 31, 2026.<\/em> <em>8.] Layer5. The Claude Code Source Leak: 512,000 Lines, a Missing\u00a0.npmignore, and the Fastest-Growing Repo in GitHub History. Layer5.io, April 2026.<\/em> <em>9.] Cybernews. Leaked Claude Code Source Spawns Fastest Growing Repository in GitHub History. Cybernews, April 2, 2026.<\/em> <em>10.] Adversa AI. Critical Vulnerability in Claude Code Permission System. 
Adversa AI, April 2026.<\/em> <em>11.] Wired Staff. Security News This Week: Hackers Are Posting the Claude Code Leak With Bonus Malware. Wired, April 4, 2026.<\/em> <em>12.] The Register. Fake Claude Code Source Downloads Actually Delivered Malware. The Register, April 2, 2026.<\/em> <em>13.] Hacker News. The Claude Code Leak. news.ycombinator.com, April\u00a02026.<\/em><\/p>\n<hr \/>\n<p><a href=\"https:\/\/osintteam.blog\/the-claude-code-leak-whats-now-publicly-usable-and-abusable-and-why-anthropic-s-containment-023677bb4f42\">The Claude Code Leak: What\u2019s Now Publicly Usable (and Abusable) \u2014 And Why Anthropic\u2019s Containment\u2026<\/a> was originally published in <a href=\"https:\/\/osintteam.blog\/\">OSINT Team<\/a> on Medium, where people are continuing the conversation by highlighting and responding to this story.<\/p>","protected":false},"excerpt":{"rendered":"<p>The Claude Code Leak: What\u2019s Now Publicly Usable (and Abusable)\u200a\u2014\u200aAnd Why Anthropic\u2019s Containment Already\u00a0Failed Author: Berend\u00a0Watchus Published to OSINT\u00a0Team Current Status: Post-Leak Analysis (April 5,\u00a02026). 
On April 4, 2026, Wired ran a weekly security roundup under the headline: \u201cSecurity News This &#8230; <a title=\"The Claude Code Leak: What\u2019s Now Publicly Usable (and Abusable) \u2014 And Why Anthropic\u2019s Containment\u2026\" class=\"read-more\" href=\"https:\/\/quantusintel.group\/osint\/blog\/2026\/04\/05\/the-claude-code-leak-whats-now-publicly-usable-and-abusable-and-why-anthropics-containment\/\" aria-label=\"Read more about The Claude Code Leak: What\u2019s Now Publicly Usable (and Abusable) \u2014 And Why Anthropic\u2019s Containment\u2026\">Read more<\/a><\/p>\n","protected":false},"author":1,"featured_media":520,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-519","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/posts\/519","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/comments?post=519"}],"version-history":[{"count":0,"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/posts\/519\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/media\/520"}],"wp:attachment":[{"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/media?parent=519"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/categories?post=519"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/quantusintel.group\/osint\/wp-json\
/wp\/v2\/tags?post=519"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}