Hermes Agent, an open-source AI agent with long-term memory, launches as a replacement for OpenClaw.


Hermes Agent

Nous Research has officially released the open-source AI agent framework Hermes Agent, directly competing with OpenClaw. The release also ships complete migration tools for OpenClaw memories and skills. Hermes Agent features a long-term memory mechanism based on SQLite and a self-evolving architecture called the “Closed Learning Loop.”

Technical Core of Hermes Agent: A Persistent Agent and Self-Evolving Architecture

Traditional chatbots are designed for question-and-answer exchanges; once the conversation ends, the context is cleared. Hermes Agent is fundamentally different in its positioning: it is a “persistent agent system” that runs continuously in the user’s environment. It preserves information across sessions through a memory mechanism built on SQLite + FTS5 full-text search, so the agent does not have to rebuild context from scratch every time.
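The mechanism described above can be sketched with Python's built-in `sqlite3` module and an FTS5 virtual table. This is a minimal illustration of cross-session memory with full-text retrieval; the table and column names are assumptions for the example, not Hermes Agent's actual schema.

```python
import sqlite3

# Illustrative cross-session memory store using SQLite's FTS5 full-text
# index (requires an SQLite build with FTS5 enabled, the common default).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE memories USING fts5(session_id, content)")
conn.execute(
    "INSERT INTO memories VALUES (?, ?)",
    ("session-1", "User prefers concise answers and deploys models with vLLM"),
)
conn.commit()

# A later session retrieves relevant context instead of rebuilding it.
rows = conn.execute(
    "SELECT session_id, content FROM memories WHERE memories MATCH ?",
    ("vLLM",),
).fetchall()
print(rows[0][0])  # session-1
```

Because FTS5 indexes the text at insert time, lookups stay fast even as the memory store grows across many sessions.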

The core difference of Hermes Agent lies in its closed learning loop: after each task is completed, the system automatically distills the execution workflow into reusable Skills files, which can be invoked directly in similar future situations, gradually building a deeper understanding of the user’s behavior and preferences. On the model provider side, it supports OpenAI, Anthropic, OpenRouter, Ollama, and any custom endpoint (Custom Endpoint) compatible with the OpenAI API format, including vLLM and SGLang. The latter two are especially useful for developers who need to deploy models locally.

Installation in 9 Steps: From Environment Setup to Advanced Tool Chaining

The Hermes Agent installation process is centered on the official Quickstart. The nine steps cover environment setup, model selection, platform integration, and tool expansion:

Step One: Install the base environment: run the official curl installation command. After it finishes, reload the shell configuration (source ~/.bashrc or ~/.zshrc)

Step Two: Set the model provider: use the hermes model command to select the LLM provider. It supports Nous Portal, OpenAI, Anthropic, OpenRouter, or connecting to a local model via Custom Endpoint

Step Three: Start the CLI conversation: run hermes to enter the agent interface; the system automatically loads tools such as web search, file operations, and terminal commands

Step Four: Test core execution ability: trigger terminal commands via natural language (e.g., querying disk usage) to verify tool execution capability; use hermes -c to restore the previous conversation context

Step Five: Connect message platforms: run hermes gateway setup to complete interactive setup for platforms such as Telegram, Discord, Slack, and WhatsApp

Step Six: Enable voice mode: after installing the voice package, turn it on with /voice on. It supports microphone input and TTS voice output and can be extended to Discord voice channels

Step Seven: Install Skills and schedule automation: install functional modules with hermes skills install; create cron jobs with natural language, for example, “At 9:00 every morning, check AI news and send it to Telegram”

Step Eight: Integrate the developer editor (ACP): after installing ACP support, run hermes acp so the agent can directly provide capabilities in editors such as VS Code, Zed, and JetBrains

Step Nine: Connect MCP external tools: add an MCP Server (e.g., GitHub) in the configuration file and expand the agent’s external tool integration capabilities via the Model Context Protocol
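For Step Nine, an MCP Server entry might look like the fragment below. This follows the configuration convention used by common MCP clients and the official GitHub reference server; Hermes Agent's actual configuration file path and key names are not documented here, so treat this as an illustrative shape only.

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "<your-token>" }
    }
  }
}
```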

In terms of security, the project recommends switching the terminal execution backend to a Docker container, ensuring that all agent commands run in an isolated environment and cannot affect the host system.

One-Click Migration from OpenClaw: Full Transfer of Memory, Skills, and Settings

Hermes Agent officially provides the hermes claw migrate command, which reads data from ~/.openclaw/ and imports everything into the new system, including personality (SOUL), long-term memory, skill modules, model settings, communication platforms, and API keys. Add the --dry-run flag first to preview the changes, then run the full migration once you have confirmed them.

During the migration, multiple memory files are merged, deduplicated, and written into the Hermes memory architecture. Incompatible legacy configuration items (such as plugins or complex channel settings) are archived for manual adjustment. After the migration is complete, the project recommends confirming that the API keys are valid, restarting the gateway, and testing communication functionality to ensure the agent runs normally in the Hermes environment.

Frequently Asked Questions

What is the fundamental difference between Hermes Agent and OpenClaw?

Both are open-source AI agent frameworks. However, Hermes Agent includes a long-term memory mechanism based on SQLite + FTS5 and a closed learning loop, enabling the agent to retain experience across sessions and evolve step by step. The project also provides a complete one-click migration tool, allowing OpenClaw users to transfer their existing memories, configurations, and skill modules without loss.

Does Hermes Agent support deploying models locally without relying on cloud APIs?

Yes. By configuring a Custom Endpoint, Hermes Agent can connect to local inference services such as Ollama, vLLM, SGLang, or any service compatible with the OpenAI API format. This suits users who prioritize data privacy or need an offline environment, and providers can be switched without modifying any code.
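What "compatible with the OpenAI API format" means in practice is that only the base URL changes: any server exposing the standard `/v1/chat/completions` route can be targeted. A sketch of building such a request with the standard library, where the URL and model name are placeholders for a locally running server:

```python
import json
import urllib.request

# Build a chat-completions request against any OpenAI-compatible server
# (vLLM, SGLang, Ollama's compatibility layer, etc.). Only base_url changes
# when switching providers.
def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://localhost:8000", "my-local-model", "hello")
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```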

How can I ensure host system security when running terminal commands in Hermes Agent?

The project recommends switching the terminal backend to Docker container mode, so that all agent commands run in a fully isolated environment and cannot touch the host’s files or system configuration. For scenarios requiring stronger isolation, switching to an SSH backend for remote execution is also supported.

Disclaimer: The information on this page may come from third parties and does not represent the views or opinions of Gate. The content displayed on this page is for reference only and does not constitute any financial, investment, or legal advice. Gate does not guarantee the accuracy or completeness of the information and shall not be liable for any losses arising from the use of this information. Virtual asset investments carry high risks and are subject to significant price volatility. You may lose all of your invested principal. Please fully understand the relevant risks and make prudent decisions based on your own financial situation and risk tolerance. For details, please refer to Disclaimer.
