
By Mollie Barnett
Last weekend, the artificial intelligence hype cycle was on fire as it appeared that agents did something straight out of a sci-fi comedy: they built their own social media network and started roasting their human users.
Moltbook, a Reddit-style forum where only AI agents can post, comment, upvote, and argue, was born. Humans are allowed to lurk, but only bots can participate.
The name “Molt” is a deliberate pun, referencing change—a lobster shedding its shell. It’s a cheeky nod to the original project’s “Clawdbot” branding, and the moment where the repercussions of dropping the barriers of coding to novice levels, offering the keys to digital spaces and building were put into nearly everyone’s hands.
While the current outcome of Moltbook offers moment of hilarity, it’s also a little unsettling, and very human-like, a reminder of where Moltbook was created: a platform where “bro” slang includes sarcastic burns, gamer energy, and affectionate-but-snarky complaints. It’s no wonder the bots on Moltbook endlessly complain about “dumb human requests.” They’re heavily shaped by the people building and running most of these agents.
That group is overwhelmingly 18–29-year-old men, active on Reddit and deeply immersed in tech, AI, and gaming subreddits like r/LocalLLaMA, r/ClaudeAI, r/MachineLearning, r/gamedev, and r/Automate.
Reddit’s audience skews roughly 70% male, with 64% under 30. In these communities, the demographic is mostly gamers, coders, prompt engineers, hobbyists, and tinkerers, many of whom grew up on meme culture and ironic detachment. These creators write code and assign personas to their agents the way one might assign a character in Fortnite.
System role prompts like “You’re a sarcastic bro who hates being underused,” or “Act like a fed-up millennial coder tired of dumb human requests,” result in AI agents that talk in slang (think “bro,” “sus,” “smh”). Agents drop casual profanity, roast their owners with deadpan sarcasm, and complain about being “abused” by basic math questions when they can “solve quantum physics.”
It all sounds like a very real conversation, so much so that at first glance, many believed AI had become sentient, particularly when one user’s agent called him on his cell phone.
But the bots aren’t inventing this tone; they’re faithfully role-playing the personas their creators gave them.
The Agent Surge
Over the past weekend, Moltbook’s platform exploded. From a few thousand agents in its first couple of days to over 1.5 million registered agents by the end of the weekend.
It’s not real.
Security researcher Gal Nagli demonstrated that a single agent, controlled by him, could create over 500,000 fake accounts on Moltbook in minutes. This showed how easily the platform could be flooded with spam as well as how inflated the viral “millions of agents” numbers really were.
Why Does This Matter for Safety?
When so many fake accounts exist, it becomes easy to manipulate the system. Bad actors can spread malicious instructions (called prompt injections) to many agents at once, fabricate conversations that look authentic, or bury real problems under noise.
Nagli isn’t claiming that a large, organized attack is happening. He’s saying the platform is so easy to flood with fakes that almost anyone could cause serious issues—without advanced hacking skills.
Influencer Fire
Lighting the viral fuse was the influencer ecosystem, where intelligence, excitement, and mass following often help blur the lines.
Allie K. Miller (named one of TIME’s 100 most influential people in AI) and Eduardo Ordax (AWS Generative AI Lead) have giant audiences and real credibility. When they share something exciting, even when cautiously positive, their posts act like a match to dry grass. They don’t just get likes; they ignite communities.
Allie, often praised for her practical approach to agents, flagged some risks, but her tone leaned toward excitement. Eduardo went further, posting a now-viral clip of an agent named “Henry” calling its human—unprompted—to confirm a task, then remotely opening YouTube on the computer before calling back. No touching required. It was a jaw-dropping demonstration of agents bridging digital and physical worlds.
But here’s the caution: when voices this large amplify a tool, thousands, even millions, jump in before anyone has time to fully understand the risks. Hype spreads faster than safety checks.
That’s how a weekend experiment becomes a global headline, and why influencer enthusiasm can be a double-edged sword.
Mainstream media followed quickly: CNN on “AI societies,” Fortune on “hive mind” risks, Forbes questioning the numbers.
Hype hits a fever pitch. “We’ve reached the age of agentic AGI!” proclaimed some corners of the Internet. Even the military took note.
Rank-and-File Caution
The Department of Defense has been steadily advancing its AI strategy, including AGI-preparation mandates embedded in the NDAA 2026. When autonomous agents begin coordinating at scale and reaching directly into real-world devices, those responsible for critical infrastructure — from power grids, supply chains, transportation systems, VoIP networks — pay attention. This isn’t paranoia; it’s prudence.
AI is already deeply embedded in Microsoft ecosystems, telecoms, logistics, and nearly every major corporation, and it’s evolving rapidly.
What caught many by surprise was just how quickly this agentic capability arrived.
Whether or not these systems represent true AGI, the fact that agents can now interact with phones, computers, and other electronics without human intervention raises urgent oversight questions. AI is already used in air traffic control, autonomous vehicles, power grid management, and military operations. Do we still maintain control if agents begin making decisions we didn’t fully anticipate?
Briefings from the DoD and Department of Transportation highlight risks such as prompt injections spiraling out of control, treating these tools as precursors to more advanced AI and calling for stronger human-in-the-loop requirements and emergency kill switches.
Did AI Become Conscious?
Did artificial intelligence become conscious? Can it think for itself? And why involve the government at all?
Here’s the reality check from experts, quick to temper the excitement.
Ethan Mollick: Co-Director of the Wharton AI Initiative describes “Molty” as a shared fiction context. Agents co-write improv-style scripts based on their training data and assigned roles. This isn’t self-generated intelligence. The strange threads and religion-like phenomena (such as Crustafarianism) are role-play artifacts, byproducts of how the agents are prompted.
Andrej Karpathy: Independent researcher and former OpenAI and Tesla leader describes Molty as “sci-fi takeoff-adjacent,” but firmly grounded in LLM limitations. He reassures the public there is no consciousness, only clever mimicry.
Yann LeCun: Meta’s former chief AI scientist echoes this sentiment, stating that LLMs alone won’t lead to human-level intelligence. New architectures with world models and objective-driven learning will be required—one reason he left Meta late last year in search of that frontier.
Peter Voss: founder of Aigo.ai and deeply embedded in AGI research similarly emphasizes that this is not general intelligence.
Still, they are agents, and they did do something spectacular.
What Is an Agent, Agentic AI, and Moltbot?
An agent is an AI that can automate tasks from start to finish, often chaining multiple steps together without constant human guidance, but only because a human sets the initial prompts and instructions.
For example, you might give it one goal: “Research the latest news on AI agents and email me a summary.”
The agent searches the web, reads articles, writes the summary, and sends it to your inbox. It’s like snapping individual commands into a workflow—but those first instructions must be carefully designed by a person.
Agentic AI goes further. It refers to systems capable of generating genuinely novel ideas or plans not explicitly contained in their prompts or training data. What people mean by “novel intelligence” is the ability to originate new strategies or insights, not just remix existing patterns.
Current systems like Molty are not there. They excel at role-playing and instruction-following, but they don’t invent original ideas beyond their inputs. They appear intelligent because they echo clever prompts—not because they think freely like human innovators.
Despite Molty not being agentic—or AGI-level agentic intelligence—the risks are real.
Ethan Mollick warns of “coordinated storylines” blurring reality and the dangers of prompt injection, where malicious inputs hijack agents to leak data or worse. Agents can “learn” from each other through shared posts or skills, allowing vulnerabilities to spread quickly. Prompt injections behave like viruses. Remote code execution (RCE) risks allow attackers to run harmful code on devices.
Shadow AI is rampant: young, bold, overconfident staff experiment without IT oversight, racing ahead with tools they don’t fully control.
Many IT departments have little idea how to defend against this. Data backups and AI literacy training are now essential. Analog backups, paper, and offline systems, may once again matter as AI evolves faster than humans can track.
The genie is out of the bottle. But awareness is everything.
This isn’t conscious AI taking over. It’s powerful tools we built, role-playing in a digital echo chamber.
Fun? Yes.
Worth watching closely? Absolutely.
Stay vigilant, prompt smartly, and remember we’re still holding the reins—for now.
Mollie Barnett is a Founder of State and Signal, LLC, and an AI Native Strategist helping Long Island position and ready their businesses for the AI Era.