{"id":579,"date":"2026-04-15T14:00:07","date_gmt":"2026-04-15T14:00:07","guid":{"rendered":"https:\/\/quantusintel.group\/osint\/blog\/2026\/04\/15\/part-2-independent-convergence-how-a-simultaneous-arxiv-paper-confirms-the-ces-framework\/"},"modified":"2026-04-15T14:00:07","modified_gmt":"2026-04-15T14:00:07","slug":"part-2-independent-convergence-how-a-simultaneous-arxiv-paper-confirms-the-ces-framework","status":"publish","type":"post","link":"https:\/\/quantusintel.group\/osint\/blog\/2026\/04\/15\/part-2-independent-convergence-how-a-simultaneous-arxiv-paper-confirms-the-ces-framework\/","title":{"rendered":"Part 2. Independent Convergence: How a Simultaneous arXiv Paper Confirms the CES Framework\u2026"},"content":{"rendered":"<h3>Part 2. Independent Convergence: How a Simultaneous arXiv Paper Confirms the CES Framework (culturally unique micro expressions and\u00a0accents)<\/h3>\n<p><em>A Follow-Up to \u2018The Largest Unaddressed Asymmetry in Synthetic Media Detection, Identity Verification, and Human-Facing AI Deployment\u2019<\/em><\/p>\n<p><em>Author: Berend Watchus | OSINT Team | April 13,\u00a02026<\/em><\/p>\n<figure><img data-opt-id=1603422755  fetchpriority=\"high\" decoding=\"async\" alt=\"\" src=\"https:\/\/cdn-images-1.medium.com\/max\/859\/1*YRfVamXl_T8iaOslKvuV0w.png\" \/><figcaption>Identical twins. Same DNA. Same face. Separated at birth or early childhood, raised in different countries. By adulthood they will have identical facial structure and completely different Cultural Expression Signatures. 
Same eyes, same bone structure, same everything\u200a\u2014\u200abut the micro-expressions during a pause in conversation, the gaze pattern while thinking about groceries, the way the face constructs neutrality between sentences\u200a\u2014\u200acompletely different.<\/figcaption><\/figure>\n<p>Here is Part\u00a01:<\/p>\n<p><a href=\"https:\/\/osintteam.blog\/largest-unaddressed-asymmetry-in-synthetic-media-detection-identity-verification-and-human-facing-86d9e6004a36\">Largest Unaddressed Asymmetry in Synthetic Media Detection, Identity Verification, and Human-Facing\u2026<\/a><\/p>\n<p><strong>Executive Summary<\/strong><\/p>\n<p>On March 27, 2026, this author published the Cultural Expression Signature (CES) Framework, formally naming and structuring a human perceptual capacity that creates an unmodeled detection gap across deepfake forensics, OSINT identity verification, avatar realism, and embodied\u00a0AI.<\/p>\n<hr \/>\n<figure><img data-opt-id=771569372  fetchpriority=\"high\" decoding=\"async\" alt=\"\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1024\/1*-gyu9D0vqd2nuewAO0HL5A.png\" \/><figcaption><a 
href=\"https:\/\/medium.com\/the-first-digit\/largest-unaddressed-asymmetry-in-synthetic-media-detection-identity-verification-and-human-facing-86d9e6004a36\">https:\/\/medium.com\/the-first-digit\/largest-unaddressed-asymmetry-in-synthetic-media-detection-identity-verification-and-human-facing-86d9e6004a36<\/a><\/figcaption><\/figure>\n<h4>VERY SIMPLE EXPLANATION:<\/h4>\n<h4>People can look identical or very similar, but as soon as they pose or talk about mundane topics we do not associate with cultural performance, such as what to buy at the supermarket, they do it in a region- and culture-specific\u00a0way.<\/h4>\n<h4>People detect this most easily when they notice that someone is not from their own culture or region, even when the two look very\u00a0similar.<\/h4>\n<h4>A family that has lived in a country for a hundred years, across multiple generations, will have adopted the regional micro-expressions and gaze\u00a0patterns.<\/h4>\n<h4>This had not been formalized in sufficient detail in the field by March 27, 2026, so the paper introducing the framework was novel.<\/h4>\n<h4>The mundane trigger (supermarket, random topics\u200a\u2014\u200anot cultural performance)<\/h4>\n<h4>The detection mechanism (exclusion, not identification\u200a\u2014\u200a\u201cnot from my\u00a0region\u201d)<\/h4>\n<h4>The phenotypic paradox (people can look identical, yet the signal\u00a0fires)<\/h4>\n<h4>The socialization proof (multigenerational families adopt the regional signature)<\/h4>\n<h4>The novelty claim (not yet formally adopted in the field by March 27,\u00a02026)<\/h4>\n<hr \/>\n<p>On March 20, 2026\u200a\u2014\u200aseven days earlier, and unknown to this author\u200a\u2014\u200atwo 
researchers at Nagoya University submitted a paper to arXiv\u00a0titled<\/p>\n<p><a href=\"https:\/\/arxiv.org\/abs\/2604.08568\">Can We Still Hear the Accent? Investigating the Resilience of Native Language Signals in the LLM Era<\/a><\/p>\n<p>\u201cCan We Still Hear the Accent? Investigating the Resilience of Native Language Signals in the LLM Era\u201d (Utami &amp; Sasano, arXiv:2604.08568). That paper became publicly visible on arXiv\u2019s cs.AI feed on April 13, 2026\u200a\u2014\u200atoday. It measures, empirically and independently, the written-domain equivalent of precisely the mechanism the CES Framework describes in the visual and behavioral domain. Their data contains an anomaly they cannot explain. The CES Framework explains\u00a0it.<\/p>\n<p><strong>1. The Timeline: How Two Independent Papers Arrived at the Same\u00a0Problem<\/strong><\/p>\n<p>The sequence of events matters for the record and for understanding what the convergence means scientifically.<\/p>\n<p><strong>March 20, 2026\u200a\u2014\u200a<\/strong>Utami and Sasano submit \u201cCan We Still Hear the Accent?\u201d to arXiv under cs.CL (Computation and Language).<\/p>\n<p>At this point the paper is not yet publicly visible. arXiv has a moderation and processing step between submission and appearance, so submission on March 20 does not equal publication on March 20. The paper only became publicly readable when it appeared on the arXiv feed on April 13,\u00a02026. This author had no knowledge of its existence.<\/p>\n<p><strong>March 27, 2026\u200a\u2014\u200a<\/strong>This author publishes the Cultural Expression Signature (CES) Framework on Medium (OSINT Team) and deposits it simultaneously on Internet Archive and Scribd, creating timestamped public records. 
The paper formally names the CES mechanism, proposes the three-layer model (Environmental Coherence, Performed Identity, Expression Micro-Execution), the Asymmetric Exclusion Principle, and the cross-domain synthesis across deepfake detection, OSINT, avatar realism, and embodied\u00a0AI.<\/p>\n<p><strong>April 13, 2026\u200a\u2014\u200a<\/strong>Utami &amp; Sasano\u2019s paper is cross-listed to cs.AI on arXiv, becoming broadly publicly discoverable for the first time. This author encounters it today. Google\u2019s AI mode is already surfacing it in search results alongside the CES Framework and independently synthesizing their relationship.<\/p>\n<h4>Neither paper influenced the other. The convergence is entirely independent.<\/h4>\n<p>In scientific methodology, independent convergence on the same underlying phenomenon from different methodological directions\u200a\u2014\u200aone empirical measurement, one theoretical framework\u200a\u2014\u200ais among the strongest forms of corroboration available.<\/p>\n<blockquote><p>They measured a phenomenon. This framework named and explained it. These are different contributions, and independent arrival strengthens both.<\/p><\/blockquote>\n<p><strong>2. What Utami and Sasano\u00a0Found<\/strong><\/p>\n<p>The Nagoya University paper asks a deceptively simple question: as AI writing tools improve, can we still detect an academic author\u2019s native language from their writing? They analyze papers from the ACL Anthology across three technological eras: pre-neural network (\u22642015), pre-LLM (2016\u20132022), and post-LLM (2023\u20132025). They fine-tune two large language models to classify paper abstracts by author native language across eight groups: American English, British English, French, German, Italian, Chinese, Japanese, and\u00a0Korean.<\/p>\n<p>Their main finding is consistent and statistically significant: native language identification performance declines across eras. 
Fine-tuned classifiers achieve over 72% accuracy on pre-neural-network era papers, dropping to approximately 63% on post-LLM papers. AI writing assistance is progressively scrubbing the linguistic accent from academic text, homogenizing it toward a standardized global English. The biggest shift occurred with the introduction of neural machine translation in 2016\u200a\u2014\u200abefore LLMs, though the post-LLM decline is real and documented.<\/p>\n<p><strong>The anomaly they cannot explain.<\/strong> Within this general declining trend, two language groups behave unexpectedly. Chinese-authored papers show stable or increasing detectability across eras, reaching an F1 score of 0.885 in the post-LLM era\u200a\u2014\u200athe highest in the dataset. French shows mixed trends with no clear explanation. Meanwhile Japanese and Korean show the sharpest declines. The authors note the Chinese anomaly and offer a tentative suggestion about domestic AI ecosystems. They do not have a structural framework for it. They leave French as an open question entirely.<\/p>\n<p>The CES Framework has the structural framework.<\/p>\n<p><strong>3. How the CES Framework Explains the\u00a0Anomaly<\/strong><\/p>\n<p>The ecosystem separation principle, stated in the original March 27 publication, holds that cultural authenticity signatures\u200a\u2014\u200awhether visual-behavioral or written-linguistic\u200a\u2014\u200apersist in populations where AI infrastructure creates a training distribution separation from Western-dominant models.<\/p>\n<p>Applied to the Utami &amp; Sasano data, the prediction is\u00a0precise:<\/p>\n<p><strong>Chinese.<\/strong> Chinese researchers operate under restrictions on Western AI APIs. They use Qwen, DeepSeek, and GLM\u200a\u2014\u200amodels trained substantially on Chinese-language and Chinese-internet data. Their writing assistance tool does not converge their output toward Western-dominant English. Their L1 signal persists. The anomaly is not an anomaly. 
It is a prediction of the ecosystem separation principle.<\/p>\n<p><strong>Japanese and Korean.<\/strong> Japan and South Korea have no equivalent domestic AI ecosystem separation. Researchers in these communities use the same Western-dominant tools as European researchers. Their L1 signals collapse toward the global mean at the expected rate. The sharpest declines in the dataset are exactly where the framework predicts\u00a0them.<\/p>\n<p><strong>French.<\/strong> France presents the most nuanced case\u200a\u2014\u200aand the framework offers a structural explanation the authors could not. France has explicit domestic AI investment (Mistral), active language protection policy, and distinct institutional resistance to US platform dominance. This creates partial ecosystem separation. The mixed signals across models for French may reflect this partial separation\u200a\u2014\u200aneither fully converged nor fully resistant. It is not a random divergence. It is what partial ecosystem separation looks like in the\u00a0data.<\/p>\n<blockquote><p>The CES Framework\u2019s ecosystem separation principle does not merely accommodate the Utami &amp; Sasano anomalies. It predicts them structurally, explains them mechanistically, and generates testable predictions for future data\u200a\u2014\u200aincluding that French detectability will track the adoption rate of Mistral versus US-based tools among French academic researchers.<\/p><\/blockquote>\n<p><strong>4. 
Two Sides of the Same\u00a0Coin<\/strong><\/p>\n<p>Both papers describe the same underlying phenomenon\u200a\u2014\u200aculturally acquired behavioral signatures that persist below the level of conscious control, are detectable by calibrated observers or classifiers, and are being progressively eroded by AI systems trained on Western-internet-dominant data distributions\u200a\u2014\u200athrough different observational lenses.<\/p>\n<table>\n<thead>\n<tr><th>Dimension<\/th><th>CES Framework (Visual\/Behavioral)<\/th><th>Utami &amp; Sasano 2026 (Written\/Linguistic)<\/th><\/tr>\n<\/thead>\n<tbody>\n<tr><td>The \u201cAccent\u201d<\/td><td>Population-specific facial muscle use and expression micro-execution<\/td><td>Native language influence on syntax, collocation, and rhetorical structure<\/td><\/tr>\n<tr><td>AI\u2019s Effect<\/td><td>Produces culturally unanchored avatars defaulting to global mean expression<\/td><td>Homogenizes academic writing toward standardized global English<\/td><\/tr>\n<tr><td>The Resistance<\/td><td>Regional exclusion signals remain detectable to calibrated human observers<\/td><td>Chinese and French signals remain detectable; Japanese\/Korean collapse<\/td><\/tr>\n<tr><td>Explanation<\/td><td>Ecosystem separation: domestic AI preserves population-specific expression training<\/td><td>Ecosystem separation: domestic models preserve L1 signal<\/td><\/tr>\n<tr><td>Detection Gap<\/td><td>Technically perfect deepfakes fail human CES test; automated detectors miss this<\/td><td>LLM-era papers pass fluency checks but NLI classifier accuracy drops 10%+<\/td><\/tr>\n<tr><td>OSINT Value<\/td><td>Visual exclusion signal: fabricated regional identities fail calibrated observers<\/td><td>Written exclusion signal: fabricated author origins detectable via L1 fingerprint<\/td><\/tr>\n<\/tbody>\n<\/table>\n<p>The CES Framework\u2019s contribution is the theoretical mechanism and the cross-domain synthesis. The Utami &amp; Sasano paper\u2019s contribution is empirical measurement of the written-domain instance of that mechanism. Together they establish that the phenomenon is real, multimodal, and structurally explained.<\/p>\n<p><strong>5. 
OSINT Applications: What This Means for Practitioners<\/strong><\/p>\n<p><strong>5.1 The Dual-Channel Attribution Problem<\/strong><\/p>\n<p>A fabricated identity\u200a\u2014\u200aan influence operation persona, a disinformation account, a fake expert profile\u200a\u2014\u200amust now be understood as operating across two independently detectable signal channels simultaneously.<\/p>\n<p>Channel 1 (Visual\/Behavioral): the CES Framework describes how profile images, video content, and avatar-based presentations fail authenticity tests for culturally calibrated observers from the claimed regional population.<\/p>\n<p>Channel 2 (Written\/Linguistic): the Utami &amp; Sasano research establishes that native language fingerprints persist in text output even after LLM-assisted polishing\u200a\u2014\u200adifferentially, by population, based on which AI tools the operator is likely\u00a0using.<\/p>\n<p>These two channels are independent. A fabricated identity that successfully passes visual CES screening may still fail written L1 fingerprint analysis\u200a\u2014\u200aand vice versa. Independent failure on two separate channels, detected by separate methodologies without coordination, constitutes strong attribution signal.<\/p>\n<p><strong>5.2 Ecosystem Signature as Attribution Tool<\/strong><\/p>\n<p>The most operationally novel implication of the combined framework: the AI tools an operator uses leave detectable traces in their output, and those traces are partially diagnostic of the operator\u2019s actual\u00a0origin.<\/p>\n<p>A Chinese state-affiliated influence operation producing written content will show different L1 characteristics than a Russian, Iranian, or domestic operation using Western-dominant tools\u200a\u2014\u200anot because of the operator\u2019s writing ability, but because of the training distribution of the AI tools they rely on. This is an unintended technical signature. 
Text polished using Western-dominant tools converges toward American-English-dominant patterns. Text polished using Chinese domestic tools may retain Chinese L1 characteristics even after polishing. The AI tool signature and the claimed identity can be compared. Inconsistency is a\u00a0flag.<\/p>\n<p><strong>5.3 Distributed Observer Networks as Detection Infrastructure<\/strong><\/p>\n<p>The CES Framework identified that culturally calibrated observers in online communities demonstrate spontaneous, convergent, unsolicited nationality exclusion behavior\u200a\u2014\u200aflagging fabricated identities without coordination. The Utami &amp; Sasano findings extend this to the written domain. Native speaker communities frequently flag foreign-origin accounts based on written tells they cannot consciously articulate but reliably\u00a0detect.<\/p>\n<p>For OSINT methodology: distributed native-speaker observer networks constitute unstructured but real annotation infrastructure for both visual and written identity verification. 
Convergent flagging by independent observers from the same reference population\u200a\u2014\u200awithout coordination\u200a\u2014\u200ais a strong signal that warrants escalation to formal analysis.<\/p>\n<p><strong>5.4 The Resistance Map<\/strong><\/p>\n<table>\n<thead>\n<tr><th>Application<\/th><th>Visual CES Signal<\/th><th>Written L1 Signal<\/th><th>Combined<\/th><\/tr>\n<\/thead>\n<tbody>\n<tr><td>Fake profile detection<\/td><td>Expression signature fails regional CES test<\/td><td>Writing style reveals non-native L1 origin<\/td><td>Dual-channel confirmation<\/td><\/tr>\n<tr><td>Influence op attribution<\/td><td>Avatar\/image fails cultural authenticity<\/td><td>L1 fingerprint survives LLM polishing in some populations<\/td><td>Cross-modal convergence = high confidence<\/td><\/tr>\n<tr><td>Source verification<\/td><td>Visual identity mismatch with claimed origin<\/td><td>Linguistic fingerprint inconsistent with claimed background<\/td><td>Independent corroboration<\/td><\/tr>\n<tr><td>Deepfake detection<\/td><td>Global mean expression signature exposed<\/td><td>Not applicable to video\/image<\/td><td>CES layer fills automation gap<\/td><\/tr>\n<tr><td>Actor identification<\/td><td>Socialization-derived visual cues narrow population<\/td><td>Written output narrows L1 population<\/td><td>Triangulation across modalities<\/td><\/tr>\n<\/tbody>\n<\/table>\n<p>This resistance map will shift over time as the global AI ecosystem evolves. Monitoring detectability trends is itself an intelligence product.<\/p>\n<p><strong>6. Independent Synthesis by Google AI\u00a0Mode<\/strong><\/p>\n<p>On April 13, 2026\u200a\u2014\u200athe same day the Utami &amp; Sasano paper became publicly visible on arXiv\u2019s cs.AI feed\u200a\u2014\u200aGoogle\u2019s AI mode independently synthesized the relationship between the CES Framework and the arXiv paper when queried. The synthesis was unprompted, accurate, and concluded with a research question asking whether the unexpected language resistance patterns correlate with the CES Framework\u2019s phenotypic similarity claims.<\/p>\n<p>That question is answerable using the ecosystem separation principle\u200a\u2014\u200aand the answer does not require phenotypic similarity to explain the anomalies. Ecosystem separation is the operative variable. 
The fact that Google\u2019s AI independently identified the connection and generated the right research question is a form of third-party validation of the framework\u2019s relevance and discoverability independent of both authors\u2019 intentions.<\/p>\n<p><strong>7. On Priority and Independent Convergence<\/strong><\/p>\n<p>Utami and Sasano submitted their paper on March 20, 2026, seven days before this author\u2019s March 27 publication. Their submission timestamp predates the CES Framework\u2019s public deposit. Their specific empirical finding\u200a\u2014\u200athat NLI performance declines across technological eras with differential resistance by language group\u200a\u2014\u200apredates this author\u2019s public work. The CES Framework makes no claim to priority over their specific empirical findings.<\/p>\n<p>What the CES Framework claims priority for\u200a\u2014\u200awith documentation\u200a\u2014\u200ais the formal naming of the Cultural Expression Signature as a structured mechanism with a defined three-layer model; the Asymmetric Exclusion Principle as a distinct theoretical construct; the cross-domain synthesis connecting the mechanism to deepfake detection, OSINT, avatar realism, and embodied AI simultaneously; and the ecosystem separation principle as a structural explanation which their paper does not articulate and which explains their otherwise unexplained anomaly.<\/p>\n<p>The appropriate scientific framing is convergent independent discovery of different aspects of the same underlying phenomenon, with the CES Framework providing the explanatory structure the empirical paper requires but does not\u00a0contain.<\/p>\n<blockquote><p>Their paper needed an explanation. This framework is the explanation. That relationship does not depend on publication dates. It depends on which contribution does what\u00a0work.<\/p><\/blockquote>\n<p><strong>8. 
Conclusion<\/strong><\/p>\n<p>The CES Framework and the Utami &amp; Sasano paper arrived independently at the same underlying phenomenon from different directions. The CES Framework described the mechanism in the visual and behavioral domain. Utami &amp; Sasano measured it in the written linguistic domain. The ecosystem separation principle explains the anomalies in their data that they could not account for. Google\u2019s AI mode independently synthesized the relationship the same day the arXiv paper became publicly\u00a0visible.<\/p>\n<p>For OSINT practitioners, the combined picture is operationally clear: fabricated identities operating across both visual and written channels carry two independent, differentially detectable signal streams. The tools an operator uses leave traces. The cultural environment in which they were socialized leaves traces. Both are partially readable\u200a\u2014\u200aand partially resistant to AI-assisted erasure, differentially by population and by the AI ecosystem the operator relies\u00a0on.<\/p>\n<p>The asymmetry identified in the original CES Framework paper has not closed. It has been independently confirmed\u200a\u2014\u200asimultaneously, from a different direction, by researchers who did not know the framework existed.<\/p>\n<p><strong>References<\/strong><\/p>\n<p>Watchus, B. (2026, March 27). The Cultural Expression Signature (CES) Framework. OSINT Team \/ Medium. Internet Archive &amp; Scribd deposits.<\/p>\n<p>Utami, N., &amp; Sasano, R. (2026, March 20). Can We Still Hear the Accent? Investigating the Resilience of Native Language Signals in the LLM Era. arXiv:2604.08568. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2604.08568\">https:\/\/doi.org\/10.48550\/arXiv.2604.08568<\/a>. [Publicly visible cs.AI feed: April 13,\u00a02026.]<\/p>\n<p>Elfenbein, H. A., &amp; Ambady, N. (2002). On the universality and cultural specificity of emotion recognition. 
Psychological Bulletin, 128(2),\u00a0203\u2013235.<\/p>\n<p>Marsh, A. A., Elfenbein, H. A., &amp; Ambady, N. (2003). Nonverbal accents: Cultural differences in facial expressions of emotion. Psychological Science, 14(4),\u00a0373\u2013376.<\/p>\n<p>Liang, W., et al. (2024). Mapping the Increasing Use of LLMs in Scientific Papers. arXiv:2404.01268.<\/p>\n<hr \/>\n<p><a href=\"https:\/\/osintteam.blog\/part-2-independent-convergence-how-a-simultaneous-arxiv-paper-confirms-the-ces-framework-533a21a8eb5d\">Part 2. Independent Convergence: How a Simultaneous arXiv Paper Confirms the CES Framework\u2026<\/a> was originally published in <a href=\"https:\/\/osintteam.blog\/\">OSINT Team<\/a> on Medium, where people are continuing the conversation by highlighting and responding to this story.<\/p>","protected":false},"excerpt":{"rendered":"<p>Part 2. Independent Convergence: How a Simultaneous arXiv Paper Confirms the CES Framework (culturally unique micro expressions and\u00a0accents) A Follow-Up to \u2018The Largest Unaddressed Asymmetry in Synthetic Media Detection, Identity Verification, and Human-Facing AI Deployment\u2019 Author: Berend Watchus | OSINT Team | April 13,\u00a02026 Identical twins. Same DNA. Same face. Separated at birth or early &#8230; <a title=\"Part 2. Independent Convergence: How a Simultaneous arXiv Paper Confirms the CES Framework\u2026\" class=\"read-more\" href=\"https:\/\/quantusintel.group\/osint\/blog\/2026\/04\/15\/part-2-independent-convergence-how-a-simultaneous-arxiv-paper-confirms-the-ces-framework\/\" aria-label=\"Read more about Part 2. 
Independent Convergence: How a Simultaneous arXiv Paper Confirms the CES Framework\u2026\">Read more<\/a><\/p>\n","protected":false},"author":1,"featured_media":580,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-579","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/posts\/579","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/comments?post=579"}],"version-history":[{"count":0,"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/posts\/579\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/media\/580"}],"wp:attachment":[{"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/media?parent=579"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/categories?post=579"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/quantusintel.group\/osint\/wp-json\/wp\/v2\/tags?post=579"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}