# Swap the Built-in Brain
Mutiro already ships with its own built-in brain. That is the default path, and for most agents it is the right one.
But you can also swap that brain out and run your own.
The clean interface for that is chatbridge: Mutiro runs the agent host, and your process becomes the brain over NDJSON on stdio.
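Every line in each direction is one self-contained JSON envelope. As a rough sketch (field names as used by the echo example later on this page; the conversation and message IDs are invented), an outbound `message.send` command from the brain looks like this, pretty-printed here; on the wire it is a single newline-terminated line:

```json
{
  "protocol_version": "mutiro.agent.bridge.v1",
  "type": "message.send",
  "request_id": "k3j9x2",
  "conversation_id": "c-123",
  "payload": {
    "@type": "type.googleapis.com/mutiro.chatbridge.ChatBridgeSendMessageCommand",
    "conversation_id": "c-123",
    "reply_to_message_id": "m-456",
    "text": { "text": "hello" }
  }
}
```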
## What This Means
Mutiro still handles the agent identity, connectivity, message delivery, auth, media plumbing, and host lifecycle.
Your custom brain handles the thinking.
That means you can keep Mutiro's messaging platform while replacing the actual runtime with:
- a different LLM stack
- a rules engine
- a deterministic workflow
- a local script
- a TUI process
- something that is not an LLM at all
## The Simplest Possible Example
Below is a tiny bridge brain that does not use any model.
It just:
- starts `mutiro agent host --mode=bridge`
- completes the bridge handshake
- waits for `message.observed`
- echoes the received text back to the same conversation
- closes the turn with `turn.end`
## Before You Start
- You need a normal Mutiro agent directory first. If you do not have one yet, see Getting Started.
- If that agent is already running with Mutiro's built-in brain, stop it first.
- Do not run the built-in brain and your custom bridge brain against the same agent at the same time.
## Minimal Echo Brain
Save this as `echo-brain.mjs`:
```js
import { spawn } from "node:child_process";
import readline from "node:readline";
import path from "node:path";

const PROTOCOL = "mutiro.agent.bridge.v1";

const TYPES = {
  init: "type.googleapis.com/mutiro.chatbridge.ChatBridgeInitializeCommand",
  sub: "type.googleapis.com/mutiro.chatbridge.ChatBridgeSubscriptionSetCommand",
  result: "type.googleapis.com/mutiro.chatbridge.ChatBridgeCommandResult",
  observedAck: "type.googleapis.com/mutiro.chatbridge.ChatBridgeMessageObservedResult",
  send: "type.googleapis.com/mutiro.chatbridge.ChatBridgeSendMessageCommand",
  turnEnd: "type.googleapis.com/mutiro.chatbridge.ChatBridgeTurnEndCommand",
};

const requestId = () => Math.random().toString(36).slice(2);
const agentDir = process.argv[2] ? path.resolve(process.argv[2]) : process.cwd();

// Mutiro runs as the host; this process is the brain on the other end of stdio.
const host = spawn("mutiro", ["agent", "host", "--mode=bridge"], {
  cwd: agentDir,
  env: process.env,
});

const rl = readline.createInterface({ input: host.stdout, terminal: false });
const pending = new Map(); // request_id -> { resolve, reject }

host.stderr.pipe(process.stderr);
host.on("exit", (code) => process.exit(code ?? 0));

// Write one NDJSON envelope to the host.
function send(type, payload, extra = {}) {
  host.stdin.write(`${JSON.stringify({
    protocol_version: PROTOCOL,
    type,
    request_id: extra.request_id || requestId(),
    payload,
    ...extra,
  })}\n`);
}

// Send a command and resolve when its command_result comes back.
function request(type, payload, extra = {}) {
  return new Promise((resolve, reject) => {
    const id = requestId();
    pending.set(id, { resolve, reject });
    send(type, payload, { ...extra, request_id: id });
  });
}

// Acknowledge a host-initiated command by echoing its request_id.
function ack(request_id, responseType) {
  send("command_result", {
    "@type": TYPES.result,
    ok: true,
    response: { "@type": responseType },
  }, { request_id });
}

rl.on("line", async (line) => {
  if (!line.trim()) return;

  let envelope;
  try {
    envelope = JSON.parse(line);
  } catch {
    return; // ignore lines that are not valid JSON
  }

  // Handshake: initialize the session, then subscribe to all conversations.
  if (envelope.type === "ready") {
    await request("session.initialize", {
      "@type": TYPES.init,
      role: "brain",
      client_name: "echo-brain",
      client_version: "1.0.0",
    });
    await request("subscription.set", {
      "@type": TYPES.sub,
      all: true,
      conversation_ids: [],
    });
    return;
  }

  // Settle the pending promise for a completed request.
  if (envelope.type === "command_result") {
    pending.get(envelope.request_id)?.resolve(envelope.payload?.response || envelope.payload);
    pending.delete(envelope.request_id);
    return;
  }

  if (envelope.type === "error") {
    pending.get(envelope.request_id)?.reject(envelope.error);
    pending.delete(envelope.request_id);
    return;
  }

  if (envelope.type !== "message.observed") return;

  // Acknowledge receipt before doing any work.
  ack(envelope.request_id, TYPES.observedAck);

  const message = envelope.payload?.message;
  const conversationId = message?.conversation_id;
  const messageId = message?.id;
  const text = (message?.text || "").trim();
  if (!conversationId || !messageId || !text) return;

  // Echo the text back into the same conversation...
  await request("message.send", {
    "@type": TYPES.send,
    conversation_id: conversationId,
    reply_to_message_id: messageId,
    text: { text: `echo: ${text}` },
  }, {
    conversation_id: conversationId,
    reply_to_message_id: messageId,
  });

  // ...then close the turn.
  send("turn.end", {
    "@type": TYPES.turnEnd,
    status: "completed",
  }, {
    conversation_id: conversationId,
    reply_to_message_id: messageId,
  });
});
```
Run it:

```sh
node echo-brain.mjs /path/to/agent-directory
```
Now send a message to that agent. It will reply with:

```
echo: <your message>
```
## Why This Example Matters
This script is intentionally simple. It proves the key point:
- your process can become the brain
- Mutiro stays the host
- communication happens over stdio
- outbound effects go back through bridge commands
Once that works, you can replace the `echo:` line with anything:
- call your own local model
- call another API
- run a workflow graph
- open a TUI
- dispatch to a code agent
- plug in Pi or another runtime
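The only Mutiro-specific part of the echo brain is the envelope plumbing; the reply itself can come from any function from inbound text to outbound text. As one sketch, here is a deterministic rules-style reply function (the rules and the `generateReply` helper name are invented for illustration, not part of any Mutiro API):

```javascript
// Hypothetical replacement for the fixed `echo: ${text}` reply.
// Any pure function from inbound text to reply text works; no model required.
const rules = [
  { match: /^ping$/i, reply: () => "pong" },
  { match: /^time$/i, reply: () => new Date().toISOString() },
  {
    match: /^sum (.+)$/i,
    reply: (m) => String(m[1].split(/\s+/).map(Number).reduce((a, b) => a + b, 0)),
  },
];

function generateReply(text) {
  for (const rule of rules) {
    const m = text.match(rule.match);
    if (m) return rule.reply(m);
  }
  return `echo: ${text}`; // fall back to the original echo behaviour
}
```

In the echo brain, the `message.send` payload would then carry `text: { text: generateReply(text) }` instead of the fixed echo string; everything else in the loop stays the same.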
## Core Bridge Shape
The minimal loop is:
- host sends `ready`
- brain sends `session.initialize`
- brain sends `subscription.set`
- host delivers `message.observed`
- brain acknowledges the observed message
- brain sends outbound bridge commands such as `message.send`
- brain sends `turn.end`
That is the core shape to understand. Everything else is layering.
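Abridged, one turn of that loop traces like this on stdio (payloads trimmed, request IDs simplified; the direction labels are annotation, not part of the wire format):

```
host  -> brain   {"type":"ready", ...}
brain -> host    {"type":"session.initialize", "request_id":"1", "payload":{"role":"brain", ...}}
host  -> brain   {"type":"command_result", "request_id":"1", ...}
brain -> host    {"type":"subscription.set", "request_id":"2", "payload":{"all":true, ...}}
host  -> brain   {"type":"command_result", "request_id":"2", ...}
host  -> brain   {"type":"message.observed", "request_id":"3", "payload":{"message":{...}}}
brain -> host    {"type":"command_result", "request_id":"3", "payload":{"ok":true, ...}}
brain -> host    {"type":"message.send", "request_id":"4", "payload":{...}}
host  -> brain   {"type":"command_result", "request_id":"4", ...}
brain -> host    {"type":"turn.end", "payload":{"status":"completed"}}
```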
## Reference Brain Implementations
The Mutiro Labs organisation maintains three open-source brains over the same stdio bridge contract. They're useful both as drop-in replacements and as templates for building your own.
- Pi brain — small and pedagogical, the closest thing to "minimum viable brain over stdio." Read this first when learning the protocol:
https://github.com/mutirolabs/pi-brain
- OpenClaw brain — community LLM bridge geared at running open / local models against the Mutiro contract:
https://github.com/mutirolabs/openclaw-brain
- Claude Agent brain — wires Anthropic's Claude Code agent runtime into a Mutiro agent over the same bridge:
https://github.com/mutirolabs/claude-agent-brain
All three implement the envelope sequence above. The brain you choose depends on which model runtime, tool surface, and operational story fits your agent.