MCP Server
Query and manage Kitaru executions, artifacts, and memory through Model Context Protocol tools
Kitaru ships an MCP server so assistants can query and manage executions with structured tool calls instead of parsing CLI text output.
Install MCP support
uv add kitaru --extra mcp
pip install "kitaru[mcp]"

If you also want agents to start and stop the local Kitaru server, install the
local extra too:

uv add kitaru --extra mcp --extra local
pip install "kitaru[mcp,local]"

Start the server

kitaru-mcp

The server uses stdio transport by default.
Configure in Claude Code
kitaru-mcp has to resolve to the Python environment where you installed
kitaru[mcp]. Claude Code inherits the PATH of the shell that launched it,
not whatever virtualenv you activate later — so either activate your venv
before starting Claude, or point command at the absolute path to
kitaru-mcp inside that venv (e.g. /path/to/project/.venv/bin/kitaru-mcp).
The absolute-path form is the most reliable.
Option 1: project .mcp.json
Add this to .mcp.json in your project root (committed to the repo, so the
whole team picks it up):
{
"mcpServers": {
"kitaru": {
"command": "kitaru-mcp",
"args": []
}
}
}

Or, using an absolute venv path:
{
"mcpServers": {
"kitaru": {
"command": "/absolute/path/to/.venv/bin/kitaru-mcp",
"args": []
}
}
}

Option 2: claude mcp add CLI
Claude Code can register the server for you. Scope controls where the registration lives:
# Just you, just this project (default scope: local)
claude mcp add kitaru -- kitaru-mcp
# Shared with the team via .mcp.json in this repo
claude mcp add -s project kitaru -- kitaru-mcp
# Available in every project on your machine
claude mcp add -s user kitaru -- kitaru-mcp

Verify with claude mcp list. If kitaru-mcp isn't on PATH, pass the
absolute venv path instead:

claude mcp add -s project kitaru -- /absolute/path/to/.venv/bin/kitaru-mcp

You can also just ask Claude: "add the Kitaru MCP server to this project" —
it will run claude mcp add for you.
Tool set
Execution tools:
- kitaru_executions_list
- kitaru_executions_get
- kitaru_executions_latest
- get_execution_logs
- kitaru_executions_run
- kitaru_executions_cancel
- kitaru_executions_input
- kitaru_executions_retry
- kitaru_executions_replay
Artifact tools:
- kitaru_artifacts_list
- kitaru_artifacts_get
Memory tools:
- kitaru_memory_list
- kitaru_memory_get
- kitaru_memory_set
- kitaru_memory_delete
- kitaru_memory_history
- kitaru_memory_compact
- kitaru_memory_purge
- kitaru_memory_purge_scope
- kitaru_memory_compaction_log
Connection tools:
- kitaru_start_local_server
- kitaru_stop_local_server
- kitaru_status
- kitaru_stacks_list
- manage_stack
Copy-paste prompts
Use prompts like these in an MCP-capable assistant after you configure the Kitaru MCP server.
Read-only status check:
Check my Kitaru status and list the five latest executions. Summarize anything waiting for input.

Start and watch a flow:

Run `examples/basic_flow/first_working_flow.py:research_agent` with topic="durable execution", then watch the execution until it finishes.

Resolve a waiting execution safely:

Find executions waiting for input. If exactly one is waiting, show me the question and ask me for the value before calling the input tool.

Plan and run a replay:

Replay the latest failed execution from the checkpoint before the failing one. Explain the replay plan before running it.

Inspect results from a completed execution:

Get the latest completed execution and show me its response artifacts.

Manage a local stack:

Create a local Kitaru stack named local-dev if it does not already exist, then show me the current Kitaru status.

Starting executions with kitaru_executions_run
The kitaru_executions_run tool requires a target string in the format:
<module_or_file>:<flow_name>

The left side can be an importable module path or a .py filesystem path.
The right side is the flow attribute name in that module.
Examples:
examples/basic_flow/first_working_flow.py:research_agent
./examples/basic_flow/first_working_flow.py:research_agent

Pass flow inputs as args (a JSON object) and optionally specify a stack:
{
"target": "my_app.flows:research_flow",
"args": {"topic": "durable execution"},
"stack": "prod-k8s"
}

When stack is provided, the tool passes it to .run(stack=...) so the
execution targets that stack.
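The target format can be checked before calling the tool. The following is a minimal sketch that assumes only the <module_or_file>:<flow_name> contract above; parse_target is a hypothetical helper, not a Kitaru API:

```python
def parse_target(target: str) -> tuple[str, str]:
    """Split a run target into (module_or_file, flow_name).

    Illustrative only: splits on the LAST colon, so the flow name
    never contains one, and both halves must be non-empty.
    """
    module_or_file, sep, flow_name = target.rpartition(":")
    if not sep or not module_or_file or not flow_name:
        raise ValueError(
            f"expected '<module_or_file>:<flow_name>', got {target!r}"
        )
    return module_or_file, flow_name


# Both documented target styles parse the same way:
parse_target("my_app.flows:research_flow")
parse_target("examples/basic_flow/first_working_flow.py:research_agent")
```

Splitting on the last colon keeps the left side free to be either a dotted module path or a filesystem path.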
Example query flow
- Call kitaru_executions_list(status="waiting")
- Ask the user to confirm an action for a pending wait
- Call kitaru_executions_input(exec_id=..., wait=..., value=...) (MCP requires explicit wait; CLI auto-detects)
- Re-check state via kitaru_executions_get(exec_id)
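As a concrete illustration of the input step, a kitaru_executions_input call body might look like the following. All three values are hypothetical placeholders; the real exec_id and wait identifiers come from the waiting execution returned by the list call:

```json
{
  "exec_id": "exec-123",
  "wait": "confirm_publish",
  "value": "yes, proceed"
}
```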
To provision or clean up a local stack, use manage_stack(action="create", name="local-dev")
or manage_stack(action="delete", name="local-dev", force=True).
Memory tools
The memory tools give assistants direct structured access to Kitaru's durable key-value memory store.
- scope and scope_type are required on every scoped memory tool
- version is available only on kitaru_memory_get
Typical memory query/update flow:
- kitaru_memory_list(scope="repo_docs", scope_type="namespace")
- kitaru_memory_get(key="style/release_notes", scope="repo_docs", scope_type="namespace")
- kitaru_memory_set(key="style/release_notes", value={"tone": "concise"}, scope="repo_docs", scope_type="namespace")
- kitaru_memory_history(key="style/release_notes", scope="repo_docs", scope_type="namespace")
Use these tools when an assistant needs durable shared state without parsing CLI output or inventing its own scratchpad format.
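Because every scoped memory tool takes the same two required fields, an assistant can build call arguments from one base dict. This is a hypothetical client-side sketch (scoped_args is not part of Kitaru; only the scope and scope_type fields come from the docs above):

```python
from typing import Any


def scoped_args(scope: str, scope_type: str, **extra: Any) -> dict[str, Any]:
    """Merge the two required scope fields with per-call arguments."""
    return {"scope": scope, "scope_type": scope_type, **extra}


base = {"scope": "repo_docs", "scope_type": "namespace"}

list_call = scoped_args(**base)
get_call = scoped_args(**base, key="style/release_notes")
set_call = scoped_args(**base, key="style/release_notes",
                       value={"tone": "concise"})
```

Centralizing the scope fields keeps a multi-step memory sequence from drifting between scopes mid-conversation.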
Memory maintenance tools
The maintenance tools let assistants manage memory growth:
- kitaru_memory_compact — summarize memory values with an LLM and write the result. Use key for the default single-key current-value workflow, key plus source_mode="history" to summarize one key's full non-deleted history, or keys (a list) with target_key for multi-key merging. Source entries are not deleted.
- kitaru_memory_purge — physically delete old versions of one key. Set keep to retain the newest N versions, or omit it to delete everything.
- kitaru_memory_purge_scope — purge old versions across all keys in a scope. Set include_deleted to also remove tombstoned keys entirely.
- kitaru_memory_compaction_log — read the audit trail of all compact and purge operations for one scope (newest first).
Recommended maintenance sequence:
- kitaru_memory_compact(scope="repo_docs", scope_type="namespace", key="notes/preferences")
- Inspect the new summary if needed.
- kitaru_memory_purge(key="notes/preferences", scope="repo_docs", scope_type="namespace", keep=1)
- kitaru_memory_compaction_log(scope="repo_docs", scope_type="namespace")
For a complete memory walkthrough including seeding, flow usage, and
cross-surface inspection, see examples/memory/flow_with_memory.py and
its demo playbook
for detailed MCP tool-call sequences.
Authentication and context
The MCP server reuses the same config/auth context as kitaru CLI and SDK.
If you want MCP tools to target a local server, start one first with bare
kitaru login or via kitaru_start_local_server(...). If you want MCP tools
to target a deployed Kitaru server, connect first with kitaru login <server>
before starting kitaru-mcp, or set KITARU_* connection variables in the MCP
server environment. If you can run kitaru status, MCP tools use that same
connection.
Replay behavior
kitaru_executions_replay starts a new execution and returns:
- available: true
- operation: "replay"
- the serialized replayed execution payload
Use from_ for checkpoint selection, optional flow_inputs for flow
parameter overrides, and optional overrides for checkpoint.* settings.
Replay does not support wait.* overrides. If the replayed execution reaches a
wait, resolve it through the normal input flow afterward.
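As an illustration, a replay request combining these options might look like the following. All values are hypothetical placeholders, and the checkpoint.* key is invented for the example; only the from_, flow_inputs, and overrides field names come from the text above (the call also needs to identify which execution to replay):

```json
{
  "from_": "checkpoint-before-failure",
  "flow_inputs": {"topic": "durable execution"},
  "overrides": {"checkpoint.max_retries": 3}
}
```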
MCP currently exposes kitaru_executions_input but not a separate resume tool.
If your backend requires an explicit resume step after input resolution, use the
CLI or SDK resume(...) surface.