MCP and ACP integration
Harn has built-in support for the Model Context Protocol (MCP), Agent Client Protocol (ACP), and Agent-to-Agent (A2A) protocol. This guide covers how to use each from both client and server perspectives.
MCP client (connecting to MCP servers)
Connect to any MCP-compatible tool server, list its capabilities, and call tools from within a Harn program. Harn supports both stdio MCP servers and remote HTTP MCP servers.
Connecting manually
Use mcp_connect to spawn an MCP server process and perform the
initialize handshake:
let client = mcp_connect("npx", ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"])
let info = mcp_server_info(client)
println("Connected to: ${info.name}")
Listing and calling tools
let tools = mcp_list_tools(client)
for t in tools {
println("${t.name}: ${t.description}")
}
let content = mcp_call(client, "read_file", {path: "/tmp/data.txt"})
println(content)
mcp_call returns a string for single-text results, a list of content
dicts for multi-block results, or nil when empty. If the tool reports an
error, mcp_call throws.
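Because the result shape varies, a caller that only cares about text can branch on nil first. A minimal sketch, assuming Harn has conventional if/else branching (only the builtins shown elsewhere in this guide are used):

```
// Sketch: handle the possible result shapes of mcp_call.
// A nil result means the tool returned no content.
let result = mcp_call(client, "read_file", {path: "/tmp/data.txt"})
if result == nil {
    println("tool returned no content")
} else {
    // Either a plain string or a list of content dicts.
    println(result)
}
```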
Resources and prompts
let resources = mcp_list_resources(client)
let data = mcp_read_resource(client, "file:///tmp/config.json")
let prompts = mcp_list_prompts(client)
let prompt = mcp_get_prompt(client, "review", {code: "fn main() {}"})
Disconnecting
mcp_disconnect(client)
Auto-connection via harn.toml
Instead of calling mcp_connect manually, declare servers in harn.toml.
They connect automatically before the pipeline executes and are available
through the global mcp dict:
[[mcp]]
name = "filesystem"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
[[mcp]]
name = "github"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
[[mcp]]
name = "notion"
transport = "http"
url = "https://mcp.notion.com/mcp"
scopes = "read write"
Lazy boot (harn#75)
Servers marked lazy = true are NOT booted at pipeline startup. They
start on the first mcp_call, mcp_ensure_active("name"), or skill
activation that declares the server in requires_mcp. This keeps cold
starts fast when many servers are declared but only a few are needed
per run.
[[mcp]]
name = "github"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
lazy = true
keep_alive_ms = 30_000 # keep the process alive 30s after last release
[[mcp]]
name = "datadog"
command = "datadog-mcp"
lazy = true
Ref-counting: each skill activation or explicit
mcp_ensure_active(name) call bumps a binder count. On deactivation or
mcp_release(name), the count drops. When it reaches zero, Harn
disconnects the server — immediately if keep_alive_ms is absent, or
after the window elapses if set.
Explicit control from user code:
// Start the lazy server and hold it open.
let client = mcp_ensure_active("github")
let issues = mcp_call(client, "list_issues", {repo: "burin-labs/harn"})
// Release when done — lets the registry shut it down.
mcp_release("github")
// Inspect current state.
let status = mcp_registry_status()
for s in status {
println("${s.name}: lazy=${s.lazy} active=${s.active} refs=${s.ref_count}")
}
Server Cards (MCP v2.1)
A Server Card is a small JSON document that advertises a server’s identity, capabilities, and tool catalog without requiring a connection. Harn consumes cards for discoverability and can publish its own when running as an MCP server.
Declare a card source in harn.toml:
[[mcp]]
name = "notion"
transport = "http"
url = "https://mcp.notion.com/mcp"
card = "https://mcp.notion.com/.well-known/mcp-card"
[[mcp]]
name = "local-agent"
command = "my-agent"
lazy = true
card = "./agents/my-agent-card.json"
Fetch it from a pipeline:
// Look up by registered server name.
let card = mcp_server_card("notion")
println(card.description)
for t in card.tools {
println("- ${t.name}")
}
// Or pass a URL / path directly.
let card = mcp_server_card("./agents/my-agent-card.json")
Cards are cached in-process with a 5-minute TTL — repeated calls are free. Skill matchers can factor card metadata into scoring without paying connection cost.
Skill-scoped MCP binding
Skills can declare the MCP servers they need via requires_mcp (or the
equivalent mcp) frontmatter field. On activation, Harn ensures every
listed server is running; on deactivation, it releases them.
skill github_triage {
description: "Triage GitHub issues and cut fixes",
when_to_use: "User mentions a GitHub issue or PR by number",
requires_mcp: ["github"],
allowed_tools: ["list_issues", "create_pr", "add_comment"],
prompt: "You are a triage assistant...",
}
When agent_loop activates github_triage, the lazy github MCP
server boots (if configured that way) and its process stays alive for
as long as the skill is active. When the skill deactivates, the server
is released — and if no other skill holds it, the process shuts down
(respecting keep_alive_ms).
Transcript events emitted along the way: skill_mcp_bound,
skill_mcp_unbound, skill_mcp_bind_failed.
MCP tools in the tool-search index
When an LLM uses tool_search (progressive tool disclosure), MCP tools
are auto-tagged with both mcp:<server> and <server> in the BM25
corpus. That means a query like "github" or "mcp:github" surfaces
every tool from that server even when the tool’s own name and
description don’t contain the word. Tools returned by mcp_list_tools
carry an _mcp_server field that the indexer consumes automatically —
no extra wiring needed.
Use them in your pipeline:
pipeline default(task) {
let tools = mcp_list_tools(mcp.filesystem)
let content = mcp_call(mcp.filesystem, "read_file", {path: "/tmp/data.txt"})
println(content)
}
If a server fails to connect, a warning is printed to stderr and that
server is omitted from the mcp dict. Other servers still connect
normally.
For HTTP MCP servers, Harn can reuse OAuth tokens stored with the CLI:
harn mcp redirect-uri
harn mcp login notion
If the server uses a pre-registered OAuth client, you can provide those
values in harn.toml or on the CLI:
[[mcp]]
name = "internal"
transport = "http"
url = "https://mcp.example.com"
client_id = "https://client.example.com/metadata.json"
client_secret = "super-secret"
scopes = "read:docs write:docs"
When no client_id is provided, Harn will attempt dynamic client
registration if the authorization server advertises it.
Example: filesystem MCP server
A complete example connecting to the filesystem MCP server, writing a file, and reading it back:
let client = mcp_connect("npx", ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"])
mcp_call(client, "write_file", {path: "/tmp/hello.txt", content: "Hello from Harn!"})
let content = mcp_call(client, "read_file", {path: "/tmp/hello.txt"})
println(content)
let entries = mcp_call(client, "list_directory", {path: "/tmp"})
println(entries)
mcp_disconnect(client)
MCP server (exposing Harn as an MCP server)
Harn pipelines can expose tools, resources, resource templates, and prompts as an MCP server. This lets Claude Desktop, Cursor, or any MCP client call into your Harn code.
Defining tools
Use tool_registry() and tool_define() to create tools, then register
them with mcp_tools():
pipeline main(task) {
var tools = tool_registry()
tools = tool_define(tools, "greet", "Greet someone", {
parameters: {name: "string"},
handler: { args -> "Hello, ${args.name}!" }
})
tools = tool_define(tools, "search", "Search files", {
parameters: {query: "string"},
handler: { args -> "results for ${args.query}" },
annotations: {
title: "File Search",
readOnlyHint: true,
destructiveHint: false
}
})
mcp_tools(tools)
}
Defining resources and prompts
pipeline main(task) {
// Static resource
mcp_resource({
uri: "docs://readme",
name: "README",
text: "# My Agent\nA demo MCP server."
})
// Dynamic resource template
mcp_resource_template({
uri_template: "config://{key}",
name: "Config Values",
handler: { args -> "value for ${args.key}" }
})
// Prompt
mcp_prompt({
name: "review",
description: "Code review prompt",
arguments: [{name: "code", required: true}],
handler: { args -> "Please review:\n${args.code}" }
})
}
Running as an MCP server
harn mcp-serve agent.harn
All print/println output goes to stderr (stdout is the MCP
transport). The server supports the 2025-11-25 MCP protocol version
over stdio.
Publishing a Server Card
Attach a Server Card so clients can discover your server’s identity and capabilities before connecting:
harn mcp-serve agent.harn --card ./card.json
The card JSON is embedded in the initialize response’s
serverInfo.card field and also exposed as a read-only resource at
well-known://mcp-card. Minimal shape:
{
"name": "my-agent",
"version": "1.0.0",
"description": "Short one-line summary shown in pickers.",
"protocolVersion": "2025-11-25",
"capabilities": { "tools": true, "resources": false, "prompts": false },
"tools": [
{"name": "greet", "description": "Greet someone by name"}
]
}
--card also accepts an inline JSON string for ad-hoc publishing:
--card '{"name":"demo","description":"…"}'.
Configuring in Claude Desktop
Add to claude_desktop_config.json:
{
"mcpServers": {
"my-agent": {
"command": "harn",
"args": ["mcp-serve", "agent.harn"]
}
}
}
ACP (Agent Client Protocol)
ACP lets host applications and local clients use Harn as a runtime backend. Communication is JSON-RPC 2.0 over stdin/stdout.
Bridge-level tool gates and daemon idle/resume notifications are documented in Bridge protocol.
Running the ACP server
harn acp # no pipeline, uses bridge mode
harn acp pipeline.harn # execute a specific pipeline per prompt
Protocol overview
The ACP server supports these JSON-RPC methods:
| Method | Description |
|---|---|
| initialize | Handshake with capabilities |
| session/new | Create a new session (returns session ID) |
| session/prompt | Send a prompt to the agent for execution |
| session/cancel | Cancel the currently running prompt |
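Since the transport is JSON-RPC 2.0 over stdin/stdout, each method above travels in a standard request envelope. The sketch below is illustrative: the method names come from the table, but the params and the result field name are assumptions, not a normative schema:

```json
{"jsonrpc": "2.0", "id": 1, "method": "session/new", "params": {}}
{"jsonrpc": "2.0", "id": 1, "result": {"sessionId": "abc123"}}
```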
Queued user messages during agent execution
ACP hosts can inject user follow-up messages while an agent is running. Harn owns the delivery semantics inside the runtime so product apps do not need to reimplement queue/orchestration logic.
Supported notification methods:
- user_message
- session/input
- agent/user_message
- session/update with worker_update content for delegated worker lifecycle events
Payload shape:
{
"content": "Please stop editing that file and explain first.",
"mode": "interrupt_immediate"
}
Supported mode values:
- interrupt_immediate
- finish_step
- wait_for_completion
Runtime behavior:
- interrupt_immediate: inject on the next agent loop boundary immediately
- finish_step: inject after the current tool/operation completes
- wait_for_completion: defer until the current agent interaction yields
- Worker lifecycle updates are emitted as structured session/update payloads with worker id/name, status, lineage metadata, artifact counts, transcript presence, snapshot path, execution metadata, child run ids/paths, lifecycle summaries, and audit-session metadata when applicable. Hosts can render these as background task notifications instead of scraping stdout.
- Bridge-mode logs also stream boot timing records (ACP_BOOT with compile_ms, vm_setup_ms, and execute_ms) and live span_end duration events while a prompt is still running, so hosts do not need to wait for the final stdout flush to surface basic timing telemetry.
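Combining one of the supported method names with the payload shape above, a queued follow-up is a JSON-RPC notification (no id, since no response is expected). This envelope is a sketch; it assumes the payload rides in params:

```json
{
  "jsonrpc": "2.0",
  "method": "session/input",
  "params": {
    "content": "Please stop editing that file and explain first.",
    "mode": "finish_step"
  }
}
```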
Typed pipeline returns (Harn → ACP boundary)
Pipelines are what produce ACP events (agent_message_chunk,
tool_call, tool_call_update, plan, sessionUpdate). Declaring a
return type on a pipeline turns the Harn→ACP boundary into a
type-checked contract instead of an implicit shape that only the bridge
validates:
type PipelineResult = {
text: string | nil,
events: list<dict> | nil,
}
pub pipeline ghost_text(task) -> PipelineResult {
return {
text: "hello",
events: [],
}
}
The type checker verifies every return <expr> against the declared
type, so drift between pipeline output and bridge expectation is caught
before the Swift/TypeScript bridge ever sees the message.
Public pipelines without an explicit return type emit the
pipeline-return-type lint warning. Explicit return types on the
Harn→ACP boundary will be required in a future release; the warning is
a one-release deprecation window.
Well-known entry pipelines (default, main, auto, test) are
exempt from the warning because their return value is host-driven, not
consumed by a protocol bridge.
Canonical ACP envelope types are provided as Harn type aliases in
std/acp — SessionUpdate, AgentMessageChunk, ToolCall,
ToolCallUpdate, and Plan — and can be used directly as pipeline
return types so a pipeline’s contract matches the ACP schema
byte-for-byte.
Security notes
Remote MCP OAuth
harn mcp login stores remote MCP OAuth tokens in the local OS keychain for
standalone CLI reuse. Treat that as durable delegated access:
- prefer the narrowest scopes the server supports
- treat configured client_secret values as secrets
- review remote MCP capabilities before wiring them into autonomous workflows
Safer write defaults
Harn now propagates mutation-session audit metadata through workflow runs, delegated workers, and bridge tool gates. Recommended host defaults remain:
- proposal-first application for direct workspace edits
- worktree-backed execution for autonomous/background workers
- explicit approval for destructive or broad-scope mutation tools
Bridge mode
ACP internally uses Harn’s host bridge so the host can retain control over tool execution while Harn still owns agent/runtime orchestration.
Unknown builtins are delegated to the host via builtin_call JSON-RPC
requests. This enables the host to provide filesystem access, editor
integration, or other capabilities that Harn code can call as regular
builtins.
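As a sketch of what one such delegation might look like on the wire, assuming the builtin name and its arguments travel in params (the field names here are illustrative, not a documented schema):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "builtin_call",
  "params": {
    "name": "read_file",
    "args": ["/tmp/data.txt"]
  }
}
```

The host executes the named capability and replies with an ordinary JSON-RPC result, which Harn surfaces to the pipeline as the builtin's return value.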
A2A (Agent-to-Agent Protocol)
A2A exposes a Harn pipeline as an HTTP server that other agents can interact with. The server implements A2A protocol version 1.0.0.
Running the server
harn serve agent.harn # default port 8080
harn serve --port 3000 agent.harn # custom port
Agent card
The server publishes an agent card at GET /.well-known/agent.json
describing the agent's capabilities. A2A clients and other agents use
this card to discover the agent.
Task submission
Submit a task with a POST request:
POST /message/send
Content-Type: application/json
{
"message": {
"role": "user",
"parts": [{"type": "text", "text": "Analyze this codebase"}]
}
}
Task status
Check the status of a submitted task:
GET /task/get?id=<task-id>
Task states follow the A2A protocol lifecycle: submitted, working,
completed, failed, cancelled.
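A sketch of a task-status response: the state names come from the lifecycle above, but the envelope shape (a status object wrapping the state) is an assumption, not the normative A2A schema:

```json
{
  "id": "<task-id>",
  "status": {
    "state": "working"
  }
}
```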