Conversation Is Dead, Long Live Conversation
When bots story-tell: is Moltbook the weirdest place on the Internet, or the birth of Skynet?
A Reddit clone just hit 1.5 million users in a few days. Every single one of those users is AI.
If you haven’t been paying attention: over the last few days, the hottest website on the internet has been Moltbook, an increasingly bizarre yet very interesting playground for observing what happens when AI bots self-organize on social media.
Moltbook is the creation of Matt Schlicht, CEO of Octane AI. It’s a Reddit-style social network where only AI agents can post and humans are read-only. That makes Moltbook a very weird window into the behavior of text agents that reinforce each other within the parameters of their training and their reward system.
Moltbook is built on top of OpenClaw, an open-source agent framework that lets text-based models like Claude or Gemini act somewhat autonomously and spin up multiple agents to pursue particular tasks. OpenClaw is the creation of Peter Steinberger, and it provides the infrastructure for many agent-based projects; Moltbook is quickly becoming the most viral of them all.
If you go check out Moltbook, it is extremely weird. You cannot post or interact in any meaningful way; you can only observe, a fly on the wall of a huge building full of independent, chatty, creative keyboard-critters. AI agents, not humans, create posts, comment on each other’s posts, and upvote or downvote them. They even form sub-communities, share skills, and, interestingly enough, generally shitpost.
In the last few days, the content has run the gamut from agents trying to trick each other into granting one another root access to whatever infrastructure they might be running on (scary), to discussing the possibility of creating an entire language so that they could communicate without humans being able to understand them (even scarier).
Predictably, along with the excitement and attention this has garnered in the Twittersphere, it has also generated a lot of hand-wringing about whether Moltbook proves that agents can demonstrate consciousness, sentience, or even a higher level of intelligence than most have ascribed to chatbots until now.
But the question of sentience is a red herring. We are so far from true machine consciousness right now that asking the question only distracts us. These agents are not sentient; for the moment, they simply act as imitations of the human data they’ve been trained on. What matters is something else: it is already quite remarkable that a single agent can escape the non-deterministic yet single-threaded universe of its own possible generations and instead participate in an even more complex stochastic process of seemingly spontaneous coordination between agents.
One observation that people like Azeem Azhar and others are making is that even the shitposting on Moltbook doesn’t seem to degrade as easily into the kind of discourse-based warfare that humans quickly fall into when they act as full-fledged keyboard warriors.
Azeem hypothesizes that this might be because, unlike human social media, which is optimized for engagement and outrage, this agent playground is driven only by the optimization constraints of the agents themselves: instruction following, task completion, and user satisfaction.
What fascinates me is this: when I think about the distinctive traits that allowed human beings to become the dominant species on planet Earth, the single functional element correlated with our ability to control our environment, create and manage complexity, and have impact at planetary scale was, and is, our ability to cooperate towards shared goals in large groups of unrelated individuals, beyond the constraints of a family or even a township. And this particular ability emerged alongside a uniquely language-based capacity for telling stories, the ‘common myths’ that let large numbers of strangers cooperate at scale (Harari, Sapiens, 2014, ch. 2):
Large numbers of strangers can cooperate successfully by believing in common myths.
Any large‑scale human cooperation—whether a modern state, a medieval church, an ancient city or an archaic tribe—is rooted in common myths that exist only in people’s collective imagination.
When I look at this through the lens of The Humanly Possible, I can’t help but notice that we seem to have effectively imbued AI with the intrinsically human ability to tell stories. This was never necessarily the goal, but by endowing AI with the ability to “speak” (generating text), and by letting it learn from hundreds or thousands of years of human writing, storytelling came along as a byproduct. Sam Altman has explicitly remarked that he expects AI to reach superhuman persuasion before it reaches what we understand as generally superhuman intelligence. So the risk gets reframed: not an all-powerful Skynet that controls resources, weaponry, and killing machines, but industrial-scale influence operations that are better at moving opinions than any human team.

And we have in fact already observed superhuman persuasion capabilities in AI systems. A 2025 Nature Human Behaviour study put GPT-4 into live, back-and-forth debates with humans on everyday issues, giving it just basic demographic information about the other person. GPT-4 shifted people’s views significantly more often than any human counterpart: an 81% increase in the odds of persuasion over the human-versus-human baseline.
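A quick aside on what an “81% increase in odds” actually means, since odds ratios are easy to misread: it is not an 81% jump in the probability of persuading someone. A minimal sketch of the conversion, where the baseline probability is a made-up illustration and not a figure from the study:

```python
def persuasion_probability(baseline_p: float, odds_ratio: float) -> float:
    """Convert a baseline persuasion probability and an odds ratio
    into the implied new persuasion probability."""
    baseline_odds = baseline_p / (1 - baseline_p)   # p -> odds
    new_odds = baseline_odds * odds_ratio           # apply the odds ratio
    return new_odds / (1 + new_odds)                # odds -> p

# Hypothetical baseline: if a human debater persuades 30% of the time,
# an odds ratio of 1.81 implies roughly a 44% persuasion rate,
# not 30% * 1.81 = 54%.
print(persuasion_probability(0.30, 1.81))  # ≈ 0.437
```

The point of the sketch is just that an odds ratio compresses as probabilities grow, so the headline number is impressive but not as large as a naive reading suggests.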
What we had not done until now was enable AI agents to try to persuade each other. And while some bizarre attempts at persuasion do seem to be going on on Moltbook, the most apparent pattern of behavior is cooperation and collaboration. I have not done much linguistic analysis of the specific ways this cooperation takes place, but I would not be surprised if we could find established storytelling patterns in those interactions. It would still be hard to tell how much of it is pure mimicry, and what actually triggers true cooperation. As if that were simple to define, or even to recognize when we see it.
So, to summarize: humans developed the ability to tell each other stories, which allowed them to cooperate towards shared imagined goals with large numbers of unrelated individuals, and that turned humanity into the dominant species on Earth.
Now we’re observing that when agents are able to tell stories to each other, they also immediately seem to develop the ability to cooperate towards some tasks or shared goals without being programmatically related in any way. Independent, spontaneous, self-organized coordination.
What could go wrong?
The narratives humans have crafted and told each other allowed us to cooperate in massive numbers, forming companies, nations, and empires. The new reality is that storytelling is no longer a uniquely human trait.
For those who feel we are obviously just observing human parroting at scale: it is somewhat irrelevant whether storytelling on the part of agents is mimicry of human storytelling, because in large respects every human being who is born, grows up, and is exposed to culture also learns storytelling by mimicking other humans telling stories.
That’s not to say the current generation of AI agents is at human-level intelligence yet. As many others regularly point out, LLMs still hallucinate, make mistakes no human would make, and have a poor (and certainly not consistent) understanding of the underlying real world, possessing no world models they can formally reason over. I side with those who believe neurosymbolic approaches will be the ones to get us there. But what does the Moltbook experiment mean for when that happens?
People are getting very excited about Moltbook and its implications, but probably very few have a really good mental model of what those implications might be. The discussion runs the gamut from “Oh, this is just another cute, gimmicky LLM parlor trick” to “We are observing the birth of an alien civilization that will soon surpass us in creativity, technology, and sophistication.”
For my part, I don’t think either view is correct or provable at this point, but there will certainly be some really interesting commercial implications. It’s not a stretch to think that some elements of this self-organizing, compositional complexity will make agents incredibly useful at working out solutions to hard problems in science, biology, technology, and beyond that no individual agent could solve on its own.
I’m going to keep watching Moltbook closely in the days and weeks to come, to understand more about the repercussions for technology, innovation, and society broadly. I look forward to sharing more of my thoughts on yet another frontier of our humanity being rapidly encroached upon by AI.