Author: Berend Watchus. Independent non-profit AI & Cybersecurity Researcher. April 10, 2026. [Publication for: OSINT Team]
Undercitation at 80%: How the Department of Computer Engineering and Technology and the Department of Computer Science at Guru Nanak Dev University, Amritsar, and the Center for Automation and Robotics at CSIC-UPM, Madrid, together with the Department of Electronic Engineering at the University of Azuay, Cuenca, Ecuador, Each Selected One Paper from a Five-Paper Stack — and Left Four Off the Record





This article concerns two academic institutions and their published research. The first is Guru Nanak Dev University, Amritsar, India — specifically the Department of Computer Engineering and Technology and the Department of Computer Science, whose researchers published a survey of Explainable AI in IJARCCE in February 2025. The second is the Center for Automation and Robotics at CSIC-UPM, Madrid, Spain, together with the Department of Electronic Engineering at the University of Azuay, Cuenca, Ecuador, whose researchers published an empirical study of sensorimotor self-awareness in embodied AI on arXiv in May 2025. Both institutions cited my November 2024 research. Both cited one paper from a stack of five. This is the record of what they cited, what they had access to, and what they omitted.
Summary
Two academic papers cited my 2024 research. Both selected one paper from a five-paper stack. Both had access to all five. Both also had access to a physical book containing all five papers, registered with an ISBN, printed and distributed before either paper was submitted. This is a public statement correcting the record on what was cited, what was available, what was omitted, and why the omissions matter — for both papers simultaneously, because the undercitation pattern is identical in each case.
The Two Papers
Paper A — The XAI Paper: “A Vision in Explainable AI (XAI)”, IJARCCE, Vol. 14, Issue 2, February 2025. DOI: 10.17148/IJARCCE.2025.14211. Authors: Gurpreet Singh, Brahmleen Kaur, Satinder Kaur, Satveer Kour, Mehakdeep Kaur, Kumari Sarita. Guru Nanak Dev University, Amritsar, India.
Paper B — The arXiv Paper: “Sensorimotor Features of Self-Awareness in Multimodal Large Language Models”, arXiv:2505.19237, May 2025. Authors affiliated with the Center for Automation and Robotics, CSIC-UPM, Madrid, Spain, and the University of Azuay, Cuenca, Ecuador.
Both papers cited the same paper by me. Both omitted the same four additional papers. Both had access to a physical ISBN-registered book containing all five. The pattern is not a coincidence.
What Was Available — The Complete Evidentiary Record
When Paper A was submitted in early 2025, and when Paper B was submitted in spring 2025, the following was publicly available and permanently on the record:
Five DOI-registered, editorially screened preprints on Preprints.org, all published November 2024:
[1] Towards Self-Aware AI: Embodiment, Feedback Loops, and the Role of the Insula in Consciousness DOI: 10.20944/preprints202411.0661.v1 — the paper both cited
[2] The Unified Model of Consciousness: Interface and Feedback Loop as the Core of Sentience DOI: 10.20944/preprints202411.0727.v1 — published November 7, 2024
[3] Simulating Self-Awareness: Dual Embodiment, Mirror Testing, and Emotional Feedback in AI Research DOI: 10.20944/preprints202411.0839.v1
[4] Advanced Predictive Modeling of Physical Trajectories and Cascading Events, Dual-State Feedback and Synthetic Insula DOI: 10.20944/preprints202411.1025.v1
[5] Self-Identification in AI: ChatGPT’s Current Capability for Mirror Image Recognition DOI: 10.20944/preprints202411.1112.v1
And one physical, printed, ISBN-registered book containing all five papers:
“AI and Mirror Testing: Science Papers 2024: Synthetic Emotions and Self-Awareness in AI”. Author: Berend F. Watchus. ISBN: 9789465200927. Printed by: Printforce Nederland. Published by: Brave New Books, Delftsestraat 33, 3013AE Rotterdam, The Netherlands. Produced in compliance with EU GPSR guidelines.
The book is physically documented — front cover, back cover, table of contents, publisher page, ISBN confirmation page, and full paper content including abstracts, references, and the November 7, 2024 publication date of the UMC paper visible in print. The book was produced, printed, and distributed before either citing paper was submitted.
All five preprints are visible on the same Preprints.org author profile. One click from paper [1] shows papers [2] through [5]. The book is archived on Archive.org and documented across multiple platforms. There was nothing hidden. Everything was one scroll away.
What Both Papers Omitted and Why It Matters
Both Paper A and Paper B made an identical selection decision: cite paper [1], omit papers [2], [3], [4], and [5], and omit the book.
This was not an oversight. Citation is not accidental. When researchers find a paper relevant enough to cite — relevant enough to place alongside foundational names like Gallup (1970), Turing (1950), and Craig (2009) as Paper B did — they find it through an author profile or a search. One scroll on my Preprints.org profile in early or mid-2025 showed all five papers simultaneously, published in the same weeks, clearly part of a single research program. The decision to cite one and omit four was a selection decision in both cases.
For Paper A — the XAI paper:
The XAI field exists to solve one problem: how do you make AI internal states visible and interpretable? The dominant post-hoc techniques the paper surveys — LIME, SHAP, attention maps, surrogate models — all work from the outside in. They retrofit interpretability onto systems that were designed without it.
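The outside-in character of those techniques is concrete enough to sketch. Below is a minimal, hypothetical LIME-style illustration (it is not taken from the surveyed paper, and every name and number in it is an assumption for illustration): the model is treated as an opaque function that can only be queried, inputs near a point of interest are perturbed, and a linear surrogate is fitted whose weights serve as local feature attributions.

```python
import random

# An opaque "black box": a model we can only query, never inspect.
def black_box(x1, x2):
    return x1 * x1 + 3.0 * x2

def local_surrogate(f, x0, n_samples=200, radius=0.1, lr=1.0, epochs=4000):
    """LIME-style sketch: sample points near x0, query f, and fit a linear
    surrogate y ~ w1*d1 + w2*d2 + b by gradient descent on squared error.
    The fitted weights w1, w2 act as local feature attributions."""
    samples = []
    for _ in range(n_samples):
        d1 = random.uniform(-radius, radius)
        d2 = random.uniform(-radius, radius)
        samples.append((d1, d2, f(x0[0] + d1, x0[1] + d2)))
    w1 = w2 = b = 0.0
    n = float(n_samples)
    for _ in range(epochs):
        g1 = g2 = gb = 0.0
        for d1, d2, y in samples:
            err = (w1 * d1 + w2 * d2 + b) - y
            g1 += err * d1
            g2 += err * d2
            gb += err
        w1 -= lr * g1 / n
        w2 -= lr * g2 / n
        b -= lr * gb / n
    return w1, w2

random.seed(0)
w1, w2 = local_surrogate(black_box, (2.0, 1.0))
# Locally, df/dx1 = 2*x1 = 4 and df/dx2 = 3, so the surrogate's
# attributions should land close to 4 and 3.
print(f"attributions: x1 ~ {w1:.2f}, x2 ~ {w2:.2f}")
```

Note what the sketch never does: open the black box. The explanation is reconstructed entirely from queries, after the fact, which is exactly the retrofitting pattern described above.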
My papers [2] and [4] approach the same problem from a more architecturally fundamental direction.
The Unified Model of Consciousness (paper [2]) proposes that sentience emerges from feedback loops and interfaces that are substrate-agnostic — applicable to any system, biological or artificial. By specifying what those loops are and how they produce integrated internal states, the paper provides a map of AI cognition that is readable in reverse. If you know the architecture that produces a coherent internal state, you can inspect that architecture directly. That is interpretability built in, not bolted on.
The Synthetic Insula paper (paper [4]) is even more directly relevant. The biological insula is an integration hub — it takes disparate internal signals and makes them coherent and legible to the broader system. Proposing an artificial equivalent means proposing a built-in interpretability organ for AI. The synthetic insula generates legible internal states as part of its normal function, making opacity structurally impossible at the architectural level.
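The structural contrast can be shown with a toy sketch. This is purely illustrative and is not the architecture specified in the Synthetic Insula paper; every class and signal name here is a hypothetical stand-in. The point it demonstrates: when the integration step itself emits a legible report, the report cannot drift out of sync with the computation.

```python
from dataclasses import dataclass, field

@dataclass
class IntegrationHub:
    """Toy stand-in for an integration hub whose internal state is
    legible by construction rather than reconstructed after the fact."""
    log: list = field(default_factory=list)

    def integrate(self, signals: dict) -> tuple:
        # Blend disparate internal signals into one coherent state value.
        state = sum(signals.values()) / len(signals)
        # The legible report is produced by the SAME step that produces
        # the state -- opacity is structurally impossible here.
        report = {
            "inputs": dict(signals),
            "dominant": max(signals, key=signals.get),
            "integrated_state": state,
        }
        self.log.append(report)
        return state, report

hub = IntegrationHub()
state, report = hub.integrate(
    {"proprioception": 0.9, "vision": 0.4, "touch": 0.2}
)
print(report["dominant"])  # the strongest contributing signal
```

Unlike the post-hoc surrogate, nothing here queries the system from outside: the interpretable record is a by-product of normal operation.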
The argument is precise: XAI retrofits interpretability onto opaque systems from the outside. My UMC and Synthetic Insula papers propose architectures where interpretability is intrinsic. This is not tangentially related to XAI. It is a direct contribution to the same problem from a more fundamental direction — and it was the only available source for that specific approach in the literature at the time.
For Paper B — the arXiv paper:
Paper B is about sensorimotor features of self-awareness in multimodal LLMs embedded in an omnidirectional robot, confirmed through 657 sensorimotor observations. This is the exact territory of my November 2024 papers — specifically papers [2], [3], and [4], which together specify the theoretical architecture, the experimental methodology, and the engineering implementation of precisely what Paper B tested empirically.
Paper [2] — the UMC — provides the substrate-agnostic framework: consciousness and self-awareness emerge from feedback loops and interfaces in any sufficiently complex system. Paper [3] — Dual Embodiment and Mirror Testing — provides the experimental methodology for testing self-awareness through physical and virtual embodiment, sensorimotor integration, and reflective feedback. Paper [4] — the Synthetic Insula — provides the engineering specification for implementing the feedback mechanism artificially.
Paper B confirmed empirically what papers [2], [3], and [4] had specified theoretically six months earlier. They cited [1]. They did not cite [2], [3], or [4]. The thematic overlap between what they omitted and what they confirmed is direct and documentable.
The Uniqueness of the Framework — Why “They Might Not Have Known” Does Not Hold
This is not selective citation in a crowded field where similar ideas exist in multiple places. The specific combination present in my November 2024 stack did not exist anywhere else in the literature:
- Feedback loops and interfaces as the substrate-agnostic core mechanism of sentience, for any entity with a body, biological or artificial.
- A synthetic insula as an architectural organ that generates legible internal states intrinsically.
- Mirror testing systematically adapted for AI across all variations: physical embodiment, virtual embodiment, emergent self-awareness, programmed recognition.
- A complete five-paper framework and a physical ISBN-registered book, all from one author, all published in the same weeks of November 2024.
There was nowhere else to look. Any researcher working in self-awareness, embodied AI, or AI interpretability in early to mid-2025 who encountered these concepts and did not find them in my papers faces a simple question: where did they come from? Because they were not in the prior literature.
Both Paper A and Paper B found paper [1]. That means they found the author. That means they had access to the full stack. The omission of papers [2] through [5] and the book in both cases is a selection decision, not a gap in knowledge.
The Downstream Record Confirms the Framework’s Generative Power
What was one five-paper stack plus one book in November 2024 has since produced a documented chain of results that neither citing paper could have anticipated:
October–November 2025 — The same theoretical framework served as the epistemic substrate for the Autonomous Knowledge Accelerator (AKA) — the first documented system built by an independent researcher using commercial LLMs to produce validated scientific breakthroughs on demand across domains the researcher had never studied. Results of 200×, 3,700×, and 8,700× efficiency improvements in post-quantum IoT security, each with fully transparent validation trails. All published, archived, and timestamped on System Weakness and Archive.org before a January 2026 Lossfunk Laboratory paper declared the autonomous AI scientist problem unsolved.
March 2026 — The UMC and Synthetic Insula papers provided the mechanism for dissolving Chalmers’ hard problem of consciousness across a three-part published series. The anterior insula generates the centralized subjective experiencer Chalmers declared unlocatable — mechanically, documentably, as the output of a biological structure with a known location, a known mechanism, and a documented research literature. The dissolution was sent directly to David Chalmers at NYU. It is on the permanent public record.
A framework selectively cited in February and May 2025 went on to produce an autonomous inventor, dissolve a thirty-year open problem in philosophy of mind, and receive independent empirical confirmation from a Spanish national research institution. Both citing papers touched the edge of that framework and acknowledged one fifth of it.
What I Am Asking
This article serves as a public correction of record and a formal citation request to the authors of both papers.
To the authors of Paper A (IJARCCE XAI paper, February 2025 — Gurpreet Singh, Brahmleen Kaur, Satinder Kaur, Satveer Kour, Mehakdeep Kaur, Kumari Sarita):
Please consider adding the following to any updated version, correction, or subsequent publication building on the same material:
[A] Watchus, B. (2024). The Unified Model of Consciousness: Interface and Feedback Loop as the Core of Sentience. Preprints.org.
[B] Watchus, B. (2024). Simulating Self-Awareness: Dual Embodiment, Mirror Testing, and Emotional Feedback in AI Research. Preprints.org.
[C] Watchus, B. (2024). Advanced Predictive Modeling of Physical Trajectories and Cascading Events, Dual-State Feedback and Synthetic Insula. Preprints.org.
[D] Watchus, B. (2024). Self-Identification in AI: ChatGPT’s Current Capability for Mirror Image Recognition. Preprints.org.
[E] Watchus, B.F. (2024). AI and Mirror Testing: Science Papers 2024 — Synthetic Emotions and Self-Awareness in AI. Brave New Books, Rotterdam. ISBN: 9789465200927.


To the authors of Paper B (arXiv:2505.19237 — CSIC-UPM Madrid / University of Azuay):
A formal citation correction request was previously submitted directly to the authors and to the arXiv platform. This article supplements that request. The papers that require citation alongside paper [1] are the same four listed above — particularly papers [A], [B], and [C], which together specify the theoretical, experimental, and engineering architecture that Paper B confirmed empirically.
The Pattern
Two papers. Two institutions. One from India in February 2025. One from Spain and Ecuador in May 2025. Both found the same paper. Both omitted the same four papers. Both had access to an ISBN-registered physical book containing all five. The selection decision is identical in each case and cannot be attributed to ignorance, oversight, or limited access.
The record is timestamped, DOI-verified, ISBN-registered, archived on Archive.org, photographically documented in the physical book, and permanently accessible. A formal citation correction request is on file for Paper B. This article now enters the record for both.
The timestamps do not negotiate.
Berend F. Watchus
Independent AI & Cybersecurity Researcher (Non-Profit)
Arnhem Area, Netherlands
April 2026
medium.com/@BerendWatchusIndependent
sciprofiles.com/profile/3999125
Primary references:
[1] DOI: 10.20944/preprints202411.0661.v1
[2] DOI: 10.20944/preprints202411.0727.v1
[3] DOI: 10.20944/preprints202411.0839.v1
[4] DOI: 10.20944/preprints202411.1025.v1
[5] DOI: 10.20944/preprints202411.1112.v1
[6] ISBN: 9789465200927
[7] IJARCCE DOI: 10.17148/IJARCCE.2025.14211
[8] arXiv:2505.19237
[9] Book archive: https://archive.org/details/@berend233
See also
- The Unintended Consequences of Academic Opportunism: When Institutions Do Your Work For You
- WORLD FIRST!: CHALMERS’ HARD PROBLEM OF CONSCIOUSNESS DISSOLVED
- PART3. Built to Be Undefeatable: The Hard Problem’s False Category, Hidden Homunculus Fallacy, and…
- Fun to share. This 2025 AI research paper cites my 2024 paper
- Pioneering Research in AI Mirror Testing and Self-Awareness: Berend F. Watchus
- I Built a Real Autonomous AI Researcher (2025)— And Then a Scientist Tried to Rewrite the Timeline…
FOR THE RECORD: Two AI Research Institutions Cited My Work — And Both Omitted Four Papers was originally published in OSINT Team on Medium.