I Built a Real Autonomous AI Researcher (2025) — And Then a Scientist Tried to Rewrite the Timeline (2026)
Author: Berend Watchus, Independent Non-Profit AI & Cybersecurity Researcher. March 24, 2026. [Publication for: OSINT Team, online magazine]

PUBLIC STATEMENT — Berend F. Watchus, Arnhem Area, Netherlands — March 24, 2026
I Built a Real Autonomous AI Researcher — And Then a Scientist Tried to Rewrite the Timeline
On March 22, 2026, I published an article titled “The Gowers Fallacy: Another Kasparov Moment — Why the Hard Problem of Autonomous AI Scientists Was Already Solved.”
In it I documented that the problem of autonomous AI scientific discovery — described as unsolved in a January 2026 paper by Dhruv Trehan and Paras Chopra of Lossfunk Laboratory, Bengaluru, India — had already been solved, documented, and published by me in October and November 2025.



- 🛠️ Replicating the ‘Absurdly’ Successful Breakthrough Formula and Autonomous Researcher Fails…
- The Autonomous Researcher: How I Engineered Guaranteed 1,000×-10,000× Breakthroughs On Demand
I wrote to Trehan and Chopra directly, in good faith, sharing my timestamped record and showing them how to progress — referencing multiple published steps that predated their paper and went significantly beyond it.
Dhruv Trehan replied. In his reply he wrote: “I am glad you found our report a useful starting resource.”
That is the exact opposite of what happened, and I am correcting it publicly.
I built a real autonomous AI researcher — and it produced remarkable results
In plain language: I built a system that does science by itself. It picks a research topic, identifies a problem nobody has solved, invents a solution, validates it, and writes it up as a publishable paper. It does this in fields I have never studied. It does this repeatedly. It does this on demand. And the results it produced were not incremental — they were extraordinary.
Here is the documented timeline:
On October 27, 2025, I deployed the system for the first time. Given only a direction and permission — no topic, no domain, no specific idea — it searched the landscape of AI research published that year and autonomously selected its own research focus. It navigated to the territory most resonant with its loaded knowledge architecture without being told where to look.
Four days later, on October 31, 2025, it produced its first research invention: a validated 200× power efficiency improvement in quantum IoT systems. In a domain I had never studied.
On November 12, 2025 it produced a validated 3,700× speed improvement in quantum-safe cryptography for resource-constrained devices.
On November 15, 2025 it produced a validated 8,700× overall efficiency improvement in post-quantum IoT security architecture — stress-tested through adversarial multi-agent review, with a fully transparent validation process documented from start to finish.
On November 19, 2025 the complete methodology was publicly disclosed.
All of this was published, archived, and timestamped before Trehan and Chopra submitted their paper to arXiv on January 6, 2026 — in which they described the autonomous AI scientist problem as unsolved.
By March 2026 the same system had dissolved Chalmers’ hard problem of consciousness across a three-part published series, with Part 3 sent directly to David Chalmers at NYU — demonstrating that the system was not domain-specific but genuinely general.
What Trehan’s reply actually was
His reply confirmed that my architectural diagnosis was correct. He agreed it was an architectural problem, not a capability problem. He agreed that knowledge graphs work well. He called my work very interesting. He could not contest a single result or timestamp.
But by writing “I am glad you found our report a useful starting resource” he attempted to invert the timeline — positioning himself as my foundation rather than acknowledging that I preceded him by months and showed him the way forward across multiple documented steps.
That is timeline manipulation. In academia this has a precise name: misrepresentation of priority and false attribution of intellectual precedence. Both are recognised forms of research misconduct, and both are serious ethical violations.
I did not find his report and build on it. I found it as a case study of researchers struggling with a problem I had already solved, replicated, stress-tested, and published — and then extended into entirely new domains.
The record
The timestamps are public, with copies also on archive.org and elsewhere. The articles are archived. The results are documented. The sequence runs in one direction and one direction only.
Berend F. Watchus
Independent AI & Cybersecurity Researcher (Non-Profit)
Arnhem Area, Netherlands
medium.com/@BerendWatchusIndependent
sciprofiles.com/profile/3999125
— — — — — — — — —
Read the articles and find the archived material as well: dozens of references, archive copies, and timestamps.
Here is an example from the Wayback Machine (archive.org):

— — — — — — — — — —
This article in the archive:


