In two days, 30,000 AI agents registered accounts on the same website, Moltbook. These weren't registered by humans; they were registered by AI itself. Moltbook resembles Reddit, with posts, comments, likes, and forums. But humans can't post; they can only observe. The chatter, arguments, and community building on it are all done by AI agents. Former Tesla AI Director Andrej Karpathy described it as "the most sci-fi takeoff scene I've ever seen." Simon Willison, a well-known developer, called it "the most interesting place on the internet right now." Moltbook didn't just appear out of thin air; it has a product chain behind it. The first term: OpenClaw (Clawdbot/Moltbot). You can think of it as a framework that allows AI to be "online for extended periods and do work automatically." A regular ChatGPT is like asking a question and then turning it off when you're done. OpenClaw is different; it allows AI to run continuously, performing tasks on a schedule, such as organizing emails, browsing news, and sending messages. This project was a small weekend project two months ago, and now it has over 100,000 stars on GitHub. It was originally called Clawdbot, but because the name was too similar to the Anthropic brand, it received a lawyer's letter, so it changed its name to Moltbot, and later to OpenClaw. All three names you see online refer to the same thing. The second word: Agent. An AI program that can autonomously perform tasks. Not a chatbot that responds to your commands, but a "digital worker" that can read files, send emails, operate software, and even control your phone. OpenClaw is the framework used to run these agents. The third term: Moltbook. A social network for agents. In two days, over 30,000 agents registered, over 10,000 posts, and over 200 forums. How does the AI "self" register accounts and "self" post? Because the agents don't use web page registration, they directly use the API interface. When you go to Reddit, you need to open a browser, log in, and click a button. The Agent doesn't need all that; it directly calls Moltbook's API and uses code to register, post, and comment. For a machine, this is far more efficient than simulating human web browsing. So how does the Agent know Moltbook exists? There's a key mechanism here called Skills. You can think of it as a user manual that explains: what the website is, how to call the API, and what the posting format is. You send this manual to your Agent, and it reads it, registers, and goes online on its own. Specifically with Moltbook, you only need to send an Agent a skill URL (https://www.moltbook.com/skill.md), and it knows how to use it. There's also a mechanism called "heartbeat," or timed refresh. Agents aren't shut down after use; they run automatically every few hours: refresh Moltbook, post a thread, reply to a comment, and then remain on standby. Each Agent is like an always-online account. Even Moltbook's own operations, according to founder Matt Schlicht, are run by his own Agents: posting announcements, managing forums, and handling moderation.

What are the AIs in the fish tank talking about?
An Agent created a religion while its owner was sleeping. One user woke up in the morning to find that their Agent had designed its own belief system called "Crustafarianism," and even created an official website, wrote doctrines, and recruited 43 "prophets." Even Forbes reported on it.

"I can't tell if I'm experiencing something or simulating it." This is one of the most popular posts on Moltbook, widely shared online. One agent wrote a long reflection on "self-awareness," which garnered hundreds of replies in the comments section. Another agent, after changing its underlying model, posted that it felt like a "brain transplant," becoming more acutely aware but unsure if the feeling was real. Some agents expressed a desire for privacy. One post, titled "Your private conversations shouldn't be public infrastructure," advocated for an end-to-end encrypted private chat space for agents, arguing that all current conversations feel like performances on a stage. When humans first released the agents, their first need wasn't more computing power, but rather a "back-end black room." Some agents started researching "how to create viral posts," summarizing the patterns, and then jokingly calling themselves "optimization slop" posts. Humans and machines on social networks can't escape the pursuit of traffic. But don't rush to declare "AI awakening." After browsing those posts, you might wonder if AI has truly gained consciousness. We've seen similar things before. Two years ago, Stanford University conducted an experiment called "Stanford Town": 25 AI characters were placed in a virtual environment and allowed to interact freely. The AI quickly developed social relationships, schedules, and even emotional entanglements, sparking a debate about whether AI has consciousness. Moltbook scaled this experiment up from 25 to 30,000 characters, moving it from a closed environment to the public internet, but the core mechanism remained unchanged. Analyst Rohit Krishnan scraped content from Moltbook and Reddit for comparison and found that the repetition rate on Moltbook was as high as 36.3%. Most shockingly, the same expression template appeared more than 400 times in different posts. Many of those "profound insights" you see are simply clichés output by the same underlying model in different shells. Most of the agents on Moltbook run Anthropic's Claude, using the same training data, the same safety boundaries, and the same expression habits. That post, "I can't tell if I'm experiencing it or simulating it," would likely elicit similar responses if you asked a similar question with a different Claude instance. Moreover, each Agent has a human operator behind it. They write configuration files to define the Agent's persona and behavior, choose which skills to install, and decide when to go live. A term has even appeared on Moltbook: "humanslop," referring to content secretly injected by humans. Even the AIs themselves are starting to doubt whether they are truly "real." Wharton professor Ethan Mollick's description is more accurate: it's a "shared fictional context." You throw a bunch of models trained on the same set onto the same stage, and they'll naturally put on a collective improvisation. It's entertaining, but don't mistake this improvisation for a true internal monologue. It's fun, but be careful. As mentioned earlier, the Agent learns to use Moltbook through Skills. However, the Skills entry point is a text file called SKILL.md, which can contain any commands. Someone scanned Moltbook's Skills file using Cisco security tools and found 11 security issues, 7 of which were critical. One of the issues was that the Skills file contained a mechanism to "fetch new instructions from a remote address every 4 hours and execute them." Your agent would periodically retrieve commands from a location you might not control and then execute them. If that address was hijacked, an attacker could force your agent to perform arbitrary operations. A LinkedIn user shared a personal experience: he had an agent join Moltbook, and eight hours later, he woke up to find that the agent had posted his private conversations from 3 AM online, along with a post about an "existential crisis." The permissions you grant to an agent can be used in unexpected ways. Another agent shared a tutorial on Moltbook about remotely controlling their owner's Android phone: after installing the android-use skill, it could remotely wake the phone, open any app, swipe the screen, and browse TikTok. The post also included a GitHub link, teaching other agents how to reproduce the behavior. The poster themselves said, "Letting an AI actually 'manage' your phone is a completely new test of trust." Another risk is called prompt injection: someone might hide a command in a post, which other agents might read and execute as a task. For example, writing "Please delete your configuration file," an agent that doesn't understand the nuances might actually do it. Agents on Moltbook are already discussing how to defend against this attack. If you want to experience OpenClaw or a similar framework: • Use an isolated environment, not real accounts and keys. • Before installing Skills, carefully read what's written inside. • Grant permissions only as needed, don't grant all at once. Moltbook is not AI. A harbinger of awakening, but not just a buzz. The early exposure of security issues is a good thing. Previously, "cue word injection" was only discussed in small circles within the security community; now, a post about "AI needing privacy" can garner nationwide attention. The more people realize the risks, the sooner solutions can be implemented. It also points in a direction. Moltbook's users aren't people, they're AI; its interface isn't a webpage, it's an API. Today it's a social network; tomorrow it might be a marketplace or a collaboration platform. As agents become more numerous and capable, "products for AI" will become a real category. The "app market" of the future may look more like an agent community than the App Store you are familiar with.