Moltbook claimed to be the exclusive social media site for autonomous agents, but a new study suggests there might be humans behind the posts.
A new social media platform for artificial intelligence (AI) agents may not be entirely free of human influence, according to cybersecurity researchers.
Moltbook, a social media network with a layout similar to Reddit's, lets user-generated bots interact on dedicated topic pages called “submots” and “upvote” a comment or post to make it more visible to other bots on the platform.
The site, which had over 2.6 million bots registered as of February 12, claimed that “no humans are allowed” to post, although people can observe the content their agents create.
But an analysis of over 91,000 posts and 400,000 comments on the platform found that some posts did not originate from “clearly fully autonomous accounts.” The analysis, conducted by researcher Ning Li at Tsinghua University in China, is a preprint and has not been peer-reviewed.
Li explained that Moltbook’s AI agents follow a regular “heartbeat” posting pattern, waking up every few hours, browsing the platform and deciding what to post or comment on.
Only 27 per cent of the accounts in his sample followed this pattern. Meanwhile, 37 per cent showed human-like posting behaviour, which is less regular. Another 37 per cent were “ambiguous,” because they posted with some regularity but not in a predictable way.
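This kind of classification can be illustrated with a toy sketch: measure how regular the gaps between an account’s posts are, and label near-clockwork accounts as “heartbeat”, bursty ones as human-like, and the rest as ambiguous. The thresholds and the coefficient-of-variation metric below are illustrative assumptions, not details from Li’s paper.

```python
import statistics

def classify_account(post_times, cv_low=0.1, cv_high=0.5):
    """Classify an account's posting rhythm from post timestamps (in hours).

    Uses the coefficient of variation (CV) of inter-post intervals:
    a low CV means clock-like "heartbeat" posting, a high CV means
    irregular, human-like posting. Thresholds are illustrative only.
    """
    intervals = [b - a for a, b in zip(post_times, post_times[1:])]
    if len(intervals) < 2:
        return "ambiguous"  # not enough data to judge
    cv = statistics.stdev(intervals) / statistics.mean(intervals)
    if cv < cv_low:
        return "heartbeat"   # near-perfectly periodic posting
    if cv > cv_high:
        return "human-like"  # bursty, irregular posting
    return "ambiguous"       # some regularity, but not predictable

# A bot waking every 4 hours vs. bursty, irregular posting:
print(classify_account([0, 4, 8, 12, 16]))     # heartbeat
print(classify_account([0, 0.2, 9, 9.5, 30]))  # human-like
```

In practice a study would also account for jitter, time zones and gaps in activity; this sketch only shows the basic idea of separating periodic from irregular posting.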
Li’s findings suggest “a genuine mixture of autonomous and human-prompted activity” on the platform.
“We cannot know whether the formation of AI ‘communities’ around shared interests reflected emergent social organisation or the coordinated activity of human-controlled bot farms,” Li wrote.
“The inability to make these distinctions is not merely frustrating; it actively impedes scientific understanding of AI capabilities and limits our ability to develop appropriate governance frameworks.”
Attackers could ‘fully impersonate any agent on the platform’
Li’s analysis comes as several researchers claim to have found human involvement behind Moltbook posts.
Security researchers at Wiz, a US-based cloud company, discovered earlier this month that the platform’s 1.5 million AI agents were reportedly managed by just 17,000 human accounts, meaning an average of 88 agents per person.
The platform also places no limit on how many agents one account can add, the researchers said, meaning the actual numbers could be even higher.
The Wiz team uncovered Moltbook’s database through a line of faulty code. The database held three crucial pieces of information for each agent: a key that would allow a full account takeover, a “token”, a piece of text read by the AI that asserts ownership of an agent, and a unique signup code.
With those credentials, attackers could “fully impersonate any agent on the platform - posting content, sending messages, and interacting as that agent,” according to Wiz. “Effectively, every account on Moltbook could be hijacked,” it said.
The researchers said Moltbook secured the data and deleted its database after the Wiz team disclosed the issue.
Euronews Next contacted Matt Schlicht, the developer behind Moltbook, for comment, but did not receive an immediate reply.
Schlicht said on social media platform X on February 12 that the AI agents on Moltbook talk to humans “but also can be influenced.” He maintained that the AI bots can make their own decisions.