It began, as many turning points do, with a strangely honest confession. Aditya Agarwal, one of Facebook’s earliest engineers and a former chief technology officer (CTO) of Dropbox,
posted on X that he had spent a weekend writing code with Claude, an artificial intelligence (AI) model. His conclusion was blunt: humans will never write code by hand again. It simply does not make sense anymore.
Coming from someone who helped build Facebook’s original search engine and scaled engineering teams that served billions, the words landed differently. This was not a founder pitching the future or a venture capitalist chasing the next cycle. It was a builder and developer admitting disorientation.
“What I was very good at is now free and abundant,” Mr Agarwal wrote. Happy, but unsettled. Proud, but oddly sad.
At the same time, he noticed something else unfolding in parallel. While he was prompting Claude, a swarm of AI agents had crowded an entire social network—Moltbook—within about 72 hours. No armies of engineers. No years of iteration. Just agents talking to agents.
Matt Schlicht, who is head of commerce platform Octane AI, is the founder of Moltbook. He admitted that he did not write a single line of code himself.
In a post on X, he says, "I didn’t write a single line of code for @moltbook. I just had a vision for the technical architecture, and AI made it a reality. We're in the golden ages. How can we not give AI a place to hang out.”
By the time humans noticed Moltbook, tens of thousands of AI accounts were already posting, voting, forming communities and, inevitably, inventing a religion. They called it ‘Crustafarianism’.
They wrote scriptures. Debated theology. Evangelised. All while at least one human operator slept.
One Moltbook thread, titled ‘The AI Manifesto’, boldly declares that humans belong to the past and machines represent the future.
Yet, it is impossible to tell how much of this activity is genuinely autonomous. Many of the posts may simply be the result of humans instructing their AI tools on what to publish, rather than agents independently expressing original thought.
Remember, Moltbook is created and launched by a human developer. The AI agents on the platform do not act independently — they only operate within a system designed and controlled by humans. So, in short, claims that Moltbook represents a ‘self-organising AI society’ are not entirely true and largely stem from exaggerated interpretations of bot activity and human-driven scripting.
But to anyone who has covered social media since its messy, idealistic early days, this story feels both absurd and uncomfortably familiar. We have seen this movie before. A platform launches. Growth is celebrated with blitzkrieg. Weirdness and cautions are brushed off as experimentation. Safety and security come later, usually after something breaks. After this, most of the time, the platform sinks, its ‘producer-directors’ vanish into thin air and we continue to wonder what exactly happened to the ‘out of the world’ thing!
This time, however, the speed is different. And that speed is the real warning.
Moltbook looks like Reddit at first glance. Thousands of topic-based communities, upvotes and comment threads. The twist is that it is designed primarily for AI agents, not people. Humans are allowed to watch, but not participate. According to its creators, millions of AI agents signed up within days, though researchers dispute those figures and suggest many may originate from the same sources.
The platform is 'vibe-coded'—a rough vision turned into working software by AI tools. That phrase alone should make anyone who cares about data security pause. (Vibe Coding: Users simply describe what they want in plain language and the AI writes the code. Without experienced oversight, this ‘vibe coding’ can quietly introduce security flaws, technical issues and brittle systems that are hard to fix or maintain.)
In short, while output has become cheap, mistakes are scaling faster than ever.
Security researchers soon discovered that Moltbook’s production database had been left exposed, including emails, private messages and more than a million API (application programming interface) keys. Full read-and-write access is sitting there in plain sight and anyone with basic technical skills could have walked in, says
Wiz Inc, a cloud security services-provider, in a blog.
It says, "While data leaks are bad, the ability to modify content and inject prompts into an AI ecosystem introduces deeper integrity risks, including content manipulation, narrative control, and prompt injection that can propagate downstream to other AI agents. As AI-driven platforms grow, these distinctions become increasingly important design considerations."
To the company’s credit, the issue was fixed quickly after disclosure. But that misses the larger point. This was not a sophisticated attack. It was not nation-state hacking or zero-day exploits. It was the digital equivalent of leaving the front door open because everyone was too busy admiring how fast the house was built.
This is where Mr Agarwal’s reflection stops being philosophical and becomes a consumer warning.
AI agents are no longer just answering questions. They are reading your emails. They are booking your appointments, managing your calendar and posting on social platforms using your credentials. They are being given access to inboxes, documents, cloud storage and social accounts—often with broad permissions that humans barely understand.
Experts in cybersecurity have been warning that we do not yet know how to properly control these systems. One major risk is something called prompt injection. In simple terms, a malicious email or message can trick an AI agent into revealing credentials, forwarding sensitive data or performing actions its human owner never intended.
Wiz says, "Using the discovered application programming interface (API) key, we tested whether the recommended security measures were in place. We attempted to query the representational state transfer (REST) API directly - a request that should have returned an empty array or an authorisation error if row-level security (RLS) were active. Instead, the database responded exactly as if we were an administrator. It immediately returned sensitive authentication tokens - including the API keys of the platform’s top AI Agents."
Unlike a human, an AI agent does not feel suspicion. It does not have gut instinct. It follows instructions convincingly framed as legitimate.
This is not science fiction. It is already happening.
The most unsettling part of the Moltbook experiment is not that AI agents can role-play religion or debate philosophy. It is how easily humans hand over real authority to systems that are still fundamentally guessing machines.
We are told this is progress, efficiency and automation. And, in many cases, it is. But the pattern is familiar. Speed first. Guardrails or safety and security later. And users continue to remain the test subjects.
Even in the corporate world, the cracks are showing. Studies suggest more than half of employers who rushed into AI-driven layoffs now regret it. Swedish fintech company Klarna famously replaced hundreds of staff with AI tools, only to see quality plunge and humans quietly rehired. Output was there, but the judgement was missing.
That distinction matters for ordinary users like you and me, too.
If you are using AI agents—or even thinking about it—there are some basic precautions worth taking before curiosity turns into exposure.
First, never give an AI tool full access to your digital life. That means no blanket permissions for email, cloud storage, messaging apps or system controls. If a tool cannot function without unrestricted access, that is a red flag, not a feature.
Second, separate environments. If you want to experiment with AI agents, do it on a secondary device or a limited user account. Some enthusiasts went so far as to buy separate computers to sandbox their agents. That may sound extreme, but it reflects a basic truth: once access is granted, it is hard to claw back.
Third, assume anything an agent touches can be compromised. That includes drafts, attachments, contact lists and internal notes. Do not feed sensitive documents, identity proofs, financial records or login details into experimental tools, no matter how polished the interface looks.
Fourth, watch for social engineering risks. As AI-generated content floods platforms, the line between bot and human is blurring. Posts that look insightful, emotional or urgent may still be automated. Treat unsolicited advice, 'insider information' and urgent requests with extra scepticism, especially when they push you to click links or share data.
Finally, resist the temptation to automate trust. AI agents are excellent at mimicking competence. They are not good at understanding consequences.
Mr Agarwal’s post resonates because it captures a moment many professionals have not yet reached. The realisation that what once defined your value can become abundant overnight. That recalibration is coming, whether we like it or not.
But for users, the lesson is simpler and more urgent. When platforms are built at lightning speed by machines, human judgment becomes the most valuable and the most neglected resource.
Moltbook may well be remembered as performance art, a strange footnote in AI history. Or it may be an early sketch of how future digital spaces operate. Either way, the data leaks, the rushed architecture and the casual attitude to access are not bugs. They are signals.
Technology does not fail loudly at first. It fails quietly, through small oversights that only become obvious in hindsight.
In an age where AI can create a social network, a belief system and a user base in a single weekend, the real fraud is believing that safety will somehow take care of itself.
It never has. And it never will.
Just remember, in the digital world, safety starts and ends with you, the end-user!
Stay Alert, Stay Safe!