Hermes Agent installs with a single command and runs on Linux, macOS, or WSL2. But installation is the easy part — the difference between "it runs" and "it actually improves over time" is in the configuration. Most users who dismiss Hermes as underwhelming never enabled the features that make it different.

This guide covers everything from first install to your first automated task, with the configuration steps most tutorials skip.

Key Takeaway

Installation takes 2 minutes. Configuration takes 15. The critical steps most people miss: enabling persistent_memory and skill_generation in the config. Without these, Hermes behaves like any other single-session agent.

What Do You Need Before Installing?

Almost nothing. Hermes's installer handles dependencies automatically. You need:

| Requirement | Details |
| --- | --- |
| Operating system | Linux, macOS, or Windows with WSL2 |
| LLM API key | At least one: Anthropic (Claude), OpenAI (GPT), Google (Gemini), or OpenRouter |
| Hardware | Any modern machine; for always-on use, a $5-10/month VPS works fine |
| Terminal access | Basic command-line familiarity |

How Do You Install Hermes Agent?

Step 1: Run the installer.

curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash

This downloads and sets up everything — Node.js, Python dependencies, SQLite, and the Hermes runtime. Takes 1-3 minutes depending on your connection.

Step 2: Configure your LLM provider.

After installation, Hermes asks which LLM provider to use. The most common options:

📋 PROVIDER CONFIG EXAMPLES

# For Claude (best quality, higher cost)
provider: anthropic
model: claude-sonnet-4-20250514
api_key: sk-ant-...

# For GPT (good balance)
provider: openai
model: gpt-5.4
api_key: sk-...

# For budget setups (free via OpenRouter)
provider: openrouter
model: qwen/qwen-3.5
api_key: sk-or-...

Community consensus on models: GPT 5.4 with thinking mode on medium+ is the most popular daily driver. Qwen 3.5 on OpenRouter is free and capable enough for routine automation. Claude Opus produces the best quality but costs significantly more — and Anthropic has been restricting heavy third-party usage.

Step 3: Enable the features that matter.

This is where most tutorials fail you. Open your Hermes config file and enable these two settings:

persistent_memory: true
skill_generation: true

Without these, Hermes forgets everything between sessions and never creates reusable skills. It becomes just another chatbot wrapper. With them enabled, every complex task (5+ tool calls) automatically creates a skill file, and all conversations are searchable across sessions.
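Before starting a session, it is worth confirming both flags are actually set. The sketch below is a minimal check, not part of Hermes itself: the config path is a placeholder, and it writes a sample file only so the grep has something to read. Point `cfg` at your real Hermes config instead.

```shell
# Placeholder path; point this at your actual Hermes config file.
cfg=./hermes-config.yaml

# Sample config, written here only so the check below has input.
cat > "$cfg" <<'EOF'
persistent_memory: true
skill_generation: true
EOF

# Report whether each flag is enabled.
for key in persistent_memory skill_generation; do
  grep -q "^${key}: true" "$cfg" && echo "${key}: enabled" || echo "${key}: NOT enabled"
done
```

If either line reports NOT enabled, fix the config before judging the agent's long-term behavior.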


How Do You Run Your First Task?

Start Hermes in your terminal:

hermes

You're now in an interactive session. Try these starter tasks to verify everything works:

Test basic conversation: "What can you help me with?" — verifies the LLM connection works.

Test web search: "Search for the latest news about Nous Research and summarize it in 3 bullets." — verifies tool calling works.

Test skill creation: "Research the top 5 AI agent frameworks in 2026, compare their features, and write a summary report." — this should trigger 5+ tool calls, which means Hermes will automatically create a skill file when it's done. Check your skills directory to verify the file was created.

Test memory: Close the session, start a new one, and ask "What did we discuss last time?" If persistent memory is working, Hermes will recall the previous conversation.
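To make the skill-creation test concrete, here is one way to verify a new file appeared. The directory name and the file contents below are assumptions for illustration (the block fabricates a stand-in skill file so it runs on its own); check where your install actually stores skills and list that directory instead.

```shell
# Assumed location; your install may store skills elsewhere.
skills_dir=./skills
mkdir -p "$skills_dir"

# Simulated skill file, standing in for what the agent would generate.
cat > "$skills_dir/research-report.md" <<'EOF'
# Skill: research-report
Recorded steps for researching and summarizing a topic.
EOF

# List the newest entries; a fresh file should appear after the test task.
ls -t "$skills_dir" | head -n 3
```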

How Do You Connect Messaging Platforms?

Hermes supports Discord, Telegram, Slack, Microsoft Teams, and more. Each platform requires creating a bot token and adding it to the config. Here's Discord as an example:

1. Create a Discord bot at discord.com/developers/applications

2. Copy the bot token

3. Add to your Hermes config:

gateways:
  discord:
    enabled: true
    token: YOUR_DISCORD_BOT_TOKEN

4. Restart Hermes. Your bot appears in your Discord server, ready to respond.

The same pattern applies to Telegram (BotFather token), Slack (app OAuth token), and other platforms. Hermes v0.10.0 added Microsoft Teams as a plugin-shipped platform and supports 18+ messaging platforms total.
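If the Discord example maps onto the other platforms as described, a Telegram entry would look like the sketch below. The key names mirror the Discord snippet and are an assumption, not a confirmed schema; check your version's gateway reference for exact names.

```yaml
gateways:
  telegram:
    enabled: true
    token: YOUR_BOTFATHER_TOKEN  # issued by @BotFather when you create the bot
```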

How Do You Set Up Profiles for Multiple Use Cases?

Hermes v0.6.0 introduced profiles — isolated instances with separate configuration, memory, skills, and gateway connections. This lets you run a "work" profile with Slack integration and a "personal" profile with Telegram, each with their own memory and learned skills.

# Create a new profile
hermes profile create work

# Switch to it
hermes profile use work

# Each profile has independent memory, skills, and config
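The isolation can be pictured as separate state directories per profile. The layout below is an assumption for illustration only, not the documented on-disk format; it just makes the "independent memory, skills, and config" idea concrete.

```shell
# Hypothetical per-profile layout, for illustration only.
base=./hermes-profiles
for p in work personal; do
  mkdir -p "$base/$p/skills"
  touch "$base/$p/config.yaml" "$base/$p/memory.sqlite"
done

# Each profile carries its own config, memory DB, and skills directory.
find "$base" -mindepth 2 | sort
```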

What Are the Most Common Setup Mistakes?

| Mistake | Symptom | Fix |
| --- | --- | --- |
| persistent_memory not enabled | Agent forgets everything between sessions | Set persistent_memory: true in config |
| skill_generation not enabled | No skill files created after complex tasks | Set skill_generation: true in config |
| Using low effort model settings | Output quality feels like a downgrade | Set effort to high or xhigh |
| Running on a public VPS without hardening | Security exposure | Enable container isolation, read-only root |
| Not setting up checkpoint/rollback | Can't recover from agent mistakes | Enable filesystem checkpoints in config |
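Pulling those fixes together, a hardened config might look like the sketch below. The checkpoint and sandbox key names are hypothetical, based on the settings this guide references; consult your installed version's config reference for the exact spelling.

```yaml
# Hypothetical key names for the fixes above; verify against your version's docs.
persistent_memory: true
skill_generation: true
checkpoints:
  enabled: true          # filesystem checkpoint/rollback
sandbox:
  container_isolation: true
  read_only_root: true
```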

For a broader understanding of how AI agents work and where Hermes fits in the ecosystem, see our complete guide. And for better prompts when interacting with any AI agent, try the free Prompt Optimizer.


Frequently Asked Questions

How long does setup take?

Installation is 2 minutes. Basic LLM configuration is 5 minutes. Enabling memory, skills, and messaging integrations takes another 10-15 minutes. Total: under 30 minutes to a fully functional agent.

Can I run Hermes Agent on a Raspberry Pi?

Technically yes (it runs on Linux), but performance will be limited. A $5-10/month VPS from DigitalOcean or Hetzner provides better performance and always-on capability.

What happens if I change LLM providers later?

Switching is a config change — no code modifications needed. Your memory, skills, and conversation history are preserved. Only the LLM responses will change in quality/style based on the new model.
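For example, moving from Claude to a free OpenRouter model would only touch the provider block. The model and key placeholders below reuse the values from the provider examples earlier in this guide.

```yaml
# Before: Anthropic
provider: anthropic
model: claude-sonnet-4-20250514
api_key: sk-ant-...

# After: OpenRouter (only these three lines change)
provider: openrouter
model: qwen/qwen-3.5
api_key: sk-or-...
```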

Do I need Docker?

No. The standard installation doesn't require Docker. Docker is optional and recommended only for production deployments where you want container isolation for security.

Disclosure: Some links in this article are affiliate links. We only recommend tools we've personally tested and use regularly. See our full disclosure policy.