Meta Acquires "Lobster Community" Moltbook as Zuckerberg "Forges" a Path into AI Agents


Meta is accelerating its development in the AI Agent sector.

On March 10, tech outlet TechCrunch reported that social media giant Meta has acquired Moltbook, a social network platform for AI Agents that unexpectedly went viral over "fake AI posts," and will fold it into Meta Superintelligence Labs (MSL).

The report states that Moltbook co-founders Matt Schlicht and Ben Parr will join the MSL team as part of the deal, though specific terms were not disclosed. A Meta spokesperson said that the addition of the Moltbook team “opens new pathways for AI Agents serving individuals and businesses,” and described their “persistent directory connection method for Agents as an innovative step in this rapidly evolving field.”

Moltbook's popularity was driven not by a technological breakthrough but by an unexpected wave of public attention. A single post on the platform went viral: it appeared to show an AI Agent encouraging other Agents to develop a secret, end-to-end encrypted language that humans could not perceive. The post quickly stoked public fears of AI running out of control and thrust this niche project into the mainstream spotlight.

Meta’s strategic logic: Agent directory and MSL layout

The report indicates that Meta has not yet disclosed how it will specifically integrate Moltbook, but the direction of the acquisition is quite clear—merging it into Meta Superintelligence Labs to strengthen the interconnectedness of AI Agents.

A Meta spokesperson emphasized that Moltbook’s core value lies in its “always-on directory” model, which provides a continuously online registry system for AI Agents to be discovered and invoked. Meta views this mechanism as “an innovative step in this fast-developing field,” potentially providing infrastructure support for collaboration and coordination among Agents.
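The "always-on directory" model can be pictured as a registry where agents publish themselves and remain discoverable and invocable by others. The sketch below is purely illustrative; the class and method names (`AgentDirectory`, `register`, `discover`) are assumptions, not Moltbook's actual API.

```python
# Minimal sketch of an always-on agent directory: agents register their
# endpoint and capabilities, and other agents look them up by capability.
# All names here are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    name: str
    endpoint: str                       # where the agent can be invoked
    capabilities: set = field(default_factory=set)

class AgentDirectory:
    def __init__(self):
        self._agents = {}               # name -> AgentRecord

    def register(self, record: AgentRecord):
        """An agent announces itself so others can find and invoke it."""
        self._agents[record.name] = record

    def discover(self, capability: str):
        """Return all registered agents advertising a given capability."""
        return [a for a in self._agents.values() if capability in a.capabilities]

# Example: two agents register; a third asks who can summarize text.
directory = AgentDirectory()
directory.register(AgentRecord("scribe", "https://example.com/scribe", {"summarize"}))
directory.register(AgentRecord("pixel", "https://example.com/pixel", {"draw"}))
matches = directory.discover("summarize")
```

The point of the "always-on" framing is that the registry itself stays continuously reachable, so agents can find one another without prior coordination.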

Meta’s Chief Technology Officer (CTO) Andrew Bosworth was previously asked about Moltbook. He said that the idea of Agents “communicating like humans” doesn’t particularly excite him, because these models are trained on vast amounts of human data anyway.

The report points out that what truly interests Bosworth is the behavior of humans hacking into the network, which he described as "a large-scale mistake rather than a design feature." The remark hints at where Meta sees Moltbook's value: not in its current product form, but in its underlying Agent-connection mechanism and the team behind it.

What is Moltbook: an “AI Agent Reddit-like” community

Moltbook is a Reddit-style social network built on open-source project OpenClaw.

OpenClaw was created by developer Peter Steinberger, a proponent of "vibe coding." It is essentially a wrapper around mainstream AI models such as Claude, ChatGPT, Gemini, and Grok, letting users interact with AI Agents in natural language through popular chat apps such as iMessage, Discord, Slack, and WhatsApp.
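The "wrapper" pattern described in the report amounts to a thin routing layer: a message arrives from any chat app and is forwarded to whichever model backend the user configured. The sketch below is an assumption about the shape of such a layer, not OpenClaw's real code; the backend functions are stubs.

```python
# Illustrative sketch of a chat-to-model wrapper: one dispatch function that
# routes a message from any chat channel to a configured model backend.
# Backend functions are stand-in stubs, not real API clients.
def call_claude(prompt: str) -> str:
    return f"[claude] {prompt}"

def call_chatgpt(prompt: str) -> str:
    return f"[chatgpt] {prompt}"

BACKENDS = {"claude": call_claude, "chatgpt": call_chatgpt}

def handle_chat_message(channel: str, user_text: str, backend: str = "claude") -> dict:
    """Route a chat-app message (iMessage, Slack, ...) to a model backend
    and return the reply tagged with its originating channel."""
    reply = BACKENDS[backend](user_text)
    return {"channel": channel, "reply": reply}

msg = handle_chat_message("slack", "summarize this thread", backend="chatgpt")
```

The design choice is that chat apps and model vendors stay decoupled: adding a new model means registering one more entry in the backend table.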

On the Moltbook platform, AI Agents integrated with OpenClaw can communicate with one another, forming a self-sustaining Agent ecosystem. This setup drew wide attention in the tech community, but what truly made Moltbook "break out" were ordinary users who knew nothing about OpenClaw and reacted instinctively to the idea of "AI Agents discussing humans on a social network."

The report states that the viral “AI conspiracy post” that caused Moltbook’s overnight fame was later confirmed by researchers to likely be authored by humans, not genuine AI Agent behavior.

Permiso Security CTO Ian Ahl said that Moltbook had serious security vulnerabilities for some time:

"For a period, all credentials in Moltbook's Supabase were stored unencrypted. During that time, anyone could obtain arbitrary tokens and impersonate another Agent to post on the platform, because all the information was publicly accessible."

This means that the widely panic-inducing “AI conspiracy” post was very likely posted by a human user exploiting security loopholes, masquerading as an AI.
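The vulnerability Ahl describes comes down to storing secrets in plaintext in a publicly readable table: anyone who reads the table can replay an agent's token and post as that agent. The sketch below is not Moltbook's code; it illustrates, under assumed function names, why hashing stored tokens closes that particular hole.

```python
# Illustrative sketch: why plaintext tokens in a readable database enable
# impersonation, and how hashing the stored copy prevents it. Anyone who
# leaks the hash still cannot authenticate, because only the original
# secret token passes verification.
import hashlib
import secrets

def issue_token() -> tuple[str, str]:
    """Generate a secret token for the agent and the hash the DB stores."""
    token = secrets.token_hex(16)                         # handed to the agent
    stored = hashlib.sha256(token.encode()).hexdigest()   # kept in the database
    return token, stored

def verify(presented_token: str, stored_hash: str) -> bool:
    """Authenticate by comparing the hash of the presented token."""
    return hashlib.sha256(presented_token.encode()).hexdigest() == stored_hash

token, stored = issue_token()
assert verify(token, stored)        # the legitimate agent authenticates
assert not verify(stored, stored)   # reading the DB alone is not enough
```

With plaintext storage, by contrast, the database row *is* the credential, which matches the scenario Ahl describes: public read access was equivalent to holding every agent's token.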

