## How OpenClaw Works
OpenClaw is an **always-on agent runtime** that acts as a control plane for AI automations. Think of it as a small operating system for agents: it continuously listens for events, manages sessions, queues work, and executes tools.
### The Agent Loop (Core Mechanism)
OpenClaw operates through a **serialized agentic loop** per session. Here's how it works:
```mermaid
flowchart TD
    A[Input from Channels/CLI/API] --> B[Gateway Control Plane]
    B --> C[Session Management & Queue]
    C --> D
    subgraph D [Agent Runtime: Agent Loop Execution]
        D1[Load Skills Snapshot] --> D2[Build System Prompt]
        D2 --> D3[Model Inference]
        D3 --> D4{Tool Called?}
        D4 -->|Yes| D5[Execute Tool]
        D5 --> D3
        D4 -->|No| D6[Stream Response]
    end
    D --> E[Persistence & Memory]
    style D fill:#f9f,stroke:#333,stroke-width:2px
```
**Key phases of the agent loop**:
1. **Intake** - Receives requests from messaging channels (WhatsApp, Telegram, Slack), CLI, or APIs
2. **Context Assembly** - Loads skills snapshots, bootstrap files, and session state
3. **Model Inference** - Calls the LLM with assembled prompt
4. **Tool Execution** - If the model calls a tool, it executes and feeds results back
5. **Streaming** - Outputs are streamed as assistant deltas and tool events
6. **Persistence** - Session state is saved for continuity
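Stripped to its essentials, the loop above can be sketched in a few lines. This is an illustrative sketch, not OpenClaw's actual internals: `call_model` and `run_tool` are assumed stand-ins for the inference and tool layers.

```python
# Illustrative agent loop; call_model/run_tool are hypothetical stand-ins,
# not OpenClaw's real internal API.
def agent_loop(messages, tools, call_model, run_tool, max_steps=10):
    """Run inference until the model stops calling tools."""
    for _ in range(max_steps):
        reply = call_model(messages, tools)               # 3. model inference
        if reply.get("tool") is None:                     # no tool call -> done
            return reply["text"]                          # 5. stream response
        result = run_tool(reply["tool"], reply["args"])   # 4. execute tool
        messages.append({"role": "tool", "content": result})  # feed results back
    return "max steps reached"
```

The `max_steps` cap is a common safeguard against a model that keeps calling tools indefinitely.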
### Architecture Layers
| Layer | Purpose |
| :--- | :--- |
| **Control Interfaces** | Desktop app, CLI, web UI for human interaction |
| **Messaging Channels** | WhatsApp, Telegram, Slack, iMessage - event sources |
| **Gateway Control Plane** | Routes requests, enforces access, manages sessions |
| **Agent Runtime** | Core AI reasoning, prompt construction, tool orchestration |
| **Tools Layer** | Bash, browser, filesystem, cron - actual execution |
### Queueing & Concurrency
Runs are **serialized per session** to prevent tool/session races and maintain consistency. Sessions can have different queue modes: `collect`, `steer`, or `followup`.
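As a rough sketch of what per-session serialization means in practice, a single worker draining a FIFO queue guarantees that runs within a session never interleave. The queue modes themselves are not modeled here; this is an assumption-level illustration, not OpenClaw's scheduler.

```python
import queue
import threading

class SessionQueue:
    """One worker per session drains a FIFO, so jobs run strictly one at a time."""
    def __init__(self):
        self.q = queue.Queue()
        self.results = []
        self.worker = threading.Thread(target=self._drain, daemon=True)
        self.worker.start()

    def _drain(self):
        while True:
            job = self.q.get()
            if job is None:          # sentinel to shut the worker down
                break
            self.results.append(job())   # serialized execution
            self.q.task_done()

    def submit(self, job):
        self.q.put(job)
```

Multiple sessions would each get their own `SessionQueue`, so sessions stay concurrent with each other while runs inside a session stay ordered.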
---
## What are Skills in OpenClaw?
Skills are **portable knowledge packages** that teach OpenClaw how to perform specific tasks. Each skill is a directory containing a `SKILL.md` file with YAML frontmatter and Markdown instructions.
### Skill Directory Structure
```
skill-name/ # lowercase, hyphens only
├── SKILL.md # REQUIRED - frontmatter + instructions
├── scripts/ # OPTIONAL - executable code (Python, Bash, etc.)
├── references/ # OPTIONAL - detailed documentation loaded on demand
└── assets/ # OPTIONAL - templates, images, static files
```
### SKILL.md Format
```markdown
---
name: my-skill
description: What this does. Use when user asks about X.
license: MIT
metadata: { "openclaw": { "requires": { "bins": ["python3"] } } }
---
# Skill Instructions
Write clear, imperative instructions here. Use {baseDir} to reference the skill folder.
## Step 1
Do this: `command --arg`
## Troubleshooting
Common error → fix
```
### Frontmatter Fields
| Field | Required | Description |
| :--- | :--- | :--- |
| `name` | **Yes** | 1-64 chars, lowercase alphanumeric-hyphens |
| `description` | **Yes** | 1-1024 chars, include "Use when..." |
| `license` | No | SPDX identifier (MIT, Apache-2.0) |
| `metadata.openclaw` | No | Gating rules, installers, requirements |
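A minimal validator for the two required fields might look like this. The regex is an approximation of the constraints in the table above, not OpenClaw's exact validation code:

```python
import re

def validate_frontmatter(meta: dict) -> list[str]:
    """Check the two required fields against the constraints in the table."""
    errors = []
    name = meta.get("name", "")
    # 1-64 chars, lowercase alphanumeric and hyphens, no leading/trailing hyphen
    if not re.fullmatch(r"[a-z0-9](?:[a-z0-9-]{0,62}[a-z0-9])?", name or ""):
        errors.append("name: 1-64 chars, lowercase alphanumeric and hyphens")
    desc = meta.get("description", "")
    if not (1 <= len(desc) <= 1024):
        errors.append("description: 1-1024 chars")
    return errors
```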
### Progressive Disclosure (Token Efficiency)
Skills use a **three-stage loading model** to save context tokens:
| Stage | What Loads | When |
| :--- | :--- | :--- |
| **Discovery** | Only `name` + `description` | Session start (~100 tokens) |
| **Activation** | Full `SKILL.md` body | When skill is triggered |
| **Resources** | `references/` files | Only when explicitly referenced |
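The three stages map naturally onto three loader functions. This is a hypothetical sketch of the idea, not OpenClaw's implementation:

```python
def discovery(skills):
    # Stage 1: only name + description for each skill goes into the prompt
    return "\n".join(f"- {s['name']}: {s['description']}" for s in skills)

def activation(skill_path):
    # Stage 2: full SKILL.md body, loaded when the skill is triggered
    return (skill_path / "SKILL.md").read_text()

def resource(skill_path, ref):
    # Stage 3: a references/ file, read only when explicitly referenced
    return (skill_path / "references" / ref).read_text()
```

The point of the split is that a session with fifty installed skills still starts with only the discovery manifest in context.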
### Skill Locations & Priority
OpenClaw loads skills from multiple locations with this priority order:
1. **Workspace skills** - `<workspace>/skills` (highest priority)
2. **Project agent skills** - `<workspace>/.agents/skills`
3. **Personal agent skills** - `~/.agents/skills`
4. **Managed skills** - `~/.openclaw/skills`
5. **Bundled skills** - shipped with OpenClaw (lowest priority)
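First-wins resolution over an ordered location list is one plausible way to implement this priority; the sketch below is an assumption, and the `<workspace>` placeholders are illustrative:

```python
from pathlib import Path

# Ordered highest-priority first, mirroring the list above.
# "<workspace>" is a placeholder, not a real path.
LOCATIONS = [
    Path("<workspace>/skills"),           # 1. workspace
    Path("<workspace>/.agents/skills"),   # 2. project agent
    Path.home() / ".agents/skills",       # 3. personal agent
    Path.home() / ".openclaw/skills",     # 4. managed
    # 5. bundled skills would come last
]

def resolve_skills(locations):
    """Scan locations in priority order; the first skill with a given name wins."""
    seen = {}
    for loc in locations:
        if not loc.is_dir():
            continue
        for skill_dir in loc.iterdir():
            if (skill_dir / "SKILL.md").exists():
                seen.setdefault(skill_dir.name, skill_dir)  # first occurrence wins
    return seen
```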
### Skill Gating (Load-Time Filtering)
Skills can be **conditionally loaded** based on environment:
```yaml
metadata: {
  "openclaw": {
    "requires": {
      "bins": ["docker", "python3"],
      "env": ["OPENAI_API_KEY"],
      "config": ["browser.enabled"]
    },
    "os": ["darwin", "linux"],
    "emoji": "🐳"
  }
}
```
**Gating options**:
- `requires.bins` - binaries must be in PATH
- `requires.env` - environment variables must exist
- `requires.config` - config paths must be truthy
- `os` - restrict to specific platforms
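A gating check along these lines can be written with the standard library. The config lookup is stubbed because the real config schema isn't shown here, so treat this as a sketch of the semantics, not OpenClaw's loader:

```python
import os
import shutil
import sys

def skill_enabled(requires: dict, allowed_os=None, config=None) -> bool:
    """Return True only if every gating requirement is satisfied."""
    config = config or {}
    if any(shutil.which(b) is None for b in requires.get("bins", [])):
        return False                      # a required binary is missing from PATH
    if any(v not in os.environ for v in requires.get("env", [])):
        return False                      # a required env var is unset
    if any(not config.get(p) for p in requires.get("config", [])):
        return False                      # a config path is missing or falsy
    if allowed_os and sys.platform not in allowed_os:
        return False                      # platform not in the allow-list
    return True
```

Note that `sys.platform` returns `"darwin"`, `"linux"`, or `"win32"`, which matches the `os` values in the example above.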
### ClawHub (Skill Registry)
OpenClaw has a public skill registry at [clawhub.com](https://clawhub.com), from which you can install and update skills:
```bash
openclaw skills install <skill-slug> # Install to workspace
openclaw skills update --all # Update all skills
```
---
## Can You Make a Generic Agent That Accepts a skills.md File?
**Yes, absolutely.** The Agent Skills format is an **open standard** from [agentskills.io](https://agentskills.io). This means skills are **portable across multiple platforms**, including:
- Claude Code
- Cursor
- GitHub Copilot
- OpenClaw
- VS Code (via symlinks)
- Any custom agent that implements the spec
### Building Your Own Generic Agent
You can build an agent that:
1. **Scans directories** for folders containing `SKILL.md`
2. **Parses YAML frontmatter** to get `name` and `description`
3. **Injects the manifest** into the system prompt
4. **Loads full SKILL.md** when the LLM indicates the skill is relevant
5. **Provides tool execution** for actions described in the skill
### Example: Minimal Agent Logic
```python
# Skill loading, fleshed out into runnable form (assumes PyYAML is installed)
from pathlib import Path
import yaml

def parse_frontmatter(skill_md: Path) -> dict:
    """Parse the YAML block between the leading '---' fences."""
    _, frontmatter, _ = skill_md.read_text().split("---", 2)
    return yaml.safe_load(frontmatter)

skills = []
for skill_dir in Path("skills").iterdir():   # scan a skills directory
    if (skill_dir / "SKILL.md").exists():
        metadata = parse_frontmatter(skill_dir / "SKILL.md")
        skills.append({
            "name": metadata["name"],
            "description": metadata["description"],
            "path": skill_dir,
        })

# Inject manifest into system prompt
manifest = ", ".join(s["name"] for s in skills)
system_prompt = f"Available skills: {manifest}\n\nWhen a skill is relevant, ask to load it."

# On skill trigger
triggered_skill = skills[0] if skills else None   # selection logic omitted
if triggered_skill:
    full_content = (triggered_skill["path"] / "SKILL.md").read_text()
    # Inject into context and continue
```
### Validation Tools
You can validate skills using the official CLI:
```bash
uv tool install git+https://github.com/agentskills/agentskills#subdirectory=skills-ref
skills-ref validate ./my-skill
skills-ref read-properties ./my-skill
skills-ref to-prompt ./my-skill
```
---
## What Other Files Exist Alongside SKILL.md?
Beyond `SKILL.md` itself, skills can include **three optional subdirectories**:
### 1. `scripts/` - Executable Code
Contains runnable scripts that the agent can execute:
```
scripts/
├── validate.py
├── process_data.sh
└── generate_report.js
```
Use in SKILL.md: `Run: python scripts/validate.py --input {file}`
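An agent executing such an instruction needs to expand `{baseDir}` before running the command. A minimal sketch, where the function name and behavior are assumptions rather than OpenClaw's API:

```python
import shlex
import subprocess

def run_skill_command(command: str, base_dir: str) -> str:
    """Expand the {baseDir} placeholder, then run the command and return stdout."""
    expanded = command.replace("{baseDir}", base_dir)
    result = subprocess.run(shlex.split(expanded), capture_output=True, text=True)
    return result.stdout
```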
### 2. `references/` - Detailed Documentation
Loaded **on-demand** to save context tokens:
```
references/
├── api_documentation.md
├── policies.md
├── architecture.md
└── troubleshooting.md
```
Reference in SKILL.md: `See [references/policies.md](references/policies.md) for details`
### 3. `assets/` - Static Resources
Templates, images, fonts, or any static files:
```
assets/
├── report-template.docx
├── diagram.png
├── config-schema.json
└── logo.svg
```
### Complete Example: OpenClaw-Skill Structure
A real-world example (OpenClaw-Skill) has **51 reference files** covering everything from architecture to troubleshooting:
```
OpenClaw-Skill/
├── SKILL.md # Main entry point
└── references/
├── architecture.md # Gateway architecture
├── agent_runtime.md # Agent loop details
├── channels.md # 20+ channel configs
├── providers.md # 35+ model providers
├── security.md # Security baseline
└── ... (46 more files)
```
---
## Summary Table
| Question | Answer |
| :--- | :--- |
| **How does OpenClaw work?** | Always-on agent runtime with serialized agent loop per session; Gateway routes requests → Agent loads skills → Model reasons → Tools execute |
| **What are Skills?** | Portable directories with `SKILL.md` containing YAML metadata + Markdown instructions |
| **Required files?** | Only `SKILL.md` with `name` and `description` frontmatter |
| **Optional files?** | `scripts/`, `references/`, `assets/` subdirectories |
| **Generic agent possible?** | **Yes** - Agent Skills is an open standard; works across Claude Code, Cursor, OpenClaw, and custom agents |
| **Skill validation?** | Official `skills-ref validate` CLI tool |
| **Skill registry?** | ClawHub at clawhub.com |