
MCP Server

Query and manage Kitaru executions, artifacts, and memory through Model Context Protocol tools

Kitaru ships an MCP server so assistants can query and manage executions with structured tool calls instead of parsing CLI text output.

Install MCP support

# With uv
uv add kitaru --extra mcp

# Or with pip
pip install "kitaru[mcp]"

If you also want agents to start and stop the local Kitaru server, install the local extra too:

# With uv
uv add kitaru --extra mcp --extra local

# Or with pip
pip install "kitaru[mcp,local]"

Start the server

kitaru-mcp

The server uses stdio transport by default.

Configure in Claude Code

The kitaru-mcp command must resolve to the Python environment where you installed kitaru[mcp]. Claude Code inherits the PATH of the shell that launched it, not any virtualenv you activate later. Either activate your venv before starting Claude, or set command to the absolute path of kitaru-mcp inside that venv (e.g. /path/to/project/.venv/bin/kitaru-mcp). The absolute path is the most reliable option.
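To find that absolute path, one option is to ask the interpreter you installed into. A minimal sketch, assuming kitaru[mcp] is installed in the currently active environment:

```python
import shutil
import sys
from pathlib import Path

# Ask for kitaru-mcp's location: first via PATH, then next to the
# interpreter itself (a venv's bin/ on POSIX, Scripts/ on Windows).
path = shutil.which("kitaru-mcp")
if path is None:
    candidate = Path(sys.executable).with_name("kitaru-mcp")
    path = str(candidate) if candidate.exists() else None

print(path or "kitaru-mcp not found; activate the venv that has kitaru[mcp]")
```

Whatever this prints is what belongs in the command field of your MCP config.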

Option 1: project .mcp.json

Add this to .mcp.json in your project root (committed to the repo, so the whole team picks it up):

{
  "mcpServers": {
    "kitaru": {
      "command": "kitaru-mcp",
      "args": []
    }
  }
}

Or, using an absolute venv path:

{
  "mcpServers": {
    "kitaru": {
      "command": "/absolute/path/to/.venv/bin/kitaru-mcp",
      "args": []
    }
  }
}

Option 2: claude mcp add CLI

Claude Code can register the server for you. Scope controls where the registration lives:

# Just you, just this project (default scope: local)
claude mcp add kitaru -- kitaru-mcp

# Shared with the team via .mcp.json in this repo
claude mcp add -s project kitaru -- kitaru-mcp

# Available in every project on your machine
claude mcp add -s user kitaru -- kitaru-mcp

Verify with claude mcp list. If kitaru-mcp isn't on PATH, pass the absolute venv path instead:

claude mcp add -s project kitaru -- /absolute/path/to/.venv/bin/kitaru-mcp

You can also just ask Claude: "add the Kitaru MCP server to this project" — it will run claude mcp add for you.

Tool set

Execution tools:

  • kitaru_executions_list
  • kitaru_executions_get
  • kitaru_executions_latest
  • get_execution_logs
  • kitaru_executions_run
  • kitaru_executions_cancel
  • kitaru_executions_input
  • kitaru_executions_retry
  • kitaru_executions_replay

Artifact tools:

  • kitaru_artifacts_list
  • kitaru_artifacts_get

Memory tools:

  • kitaru_memory_list
  • kitaru_memory_get
  • kitaru_memory_set
  • kitaru_memory_delete
  • kitaru_memory_history
  • kitaru_memory_compact
  • kitaru_memory_purge
  • kitaru_memory_purge_scope
  • kitaru_memory_compaction_log

Connection tools:

  • kitaru_start_local_server
  • kitaru_stop_local_server
  • kitaru_status
  • kitaru_stacks_list
  • manage_stack

Copy-paste prompts

Use prompts like these in an MCP-capable assistant after you configure the Kitaru MCP server.

Read-only status check:

Check my Kitaru status and list the five latest executions. Summarize anything waiting for input.

Start and watch a flow:

Run `examples/basic_flow/first_working_flow.py:research_agent` with topic="durable execution", then watch the execution until it finishes.

Resolve a waiting execution safely:

Find executions waiting for input. If exactly one is waiting, show me the question and ask me for the value before calling the input tool.

Plan and run a replay:

Replay the latest failed execution from the checkpoint before the failing one. Explain the replay plan before running it.

Inspect results from a completed execution:

Get the latest completed execution and show me its response artifacts.

Manage a local stack:

Create a local Kitaru stack named local-dev if it does not already exist, then show me the current Kitaru status.

Starting executions with kitaru_executions_run

The kitaru_executions_run tool requires a target string in the format:

<module_or_file>:<flow_name>

The left side can be an importable module path or a .py filesystem path. The right side is the flow attribute name in that module.

Examples:

examples/basic_flow/first_working_flow.py:research_agent
./examples/basic_flow/first_working_flow.py:research_agent
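Since the flow name follows the last colon, a target can be split with a right-partition. A minimal sketch of the convention (parse_target is a hypothetical helper for illustration, not part of the Kitaru API):

```python
def parse_target(target: str) -> tuple[str, str, bool]:
    """Split a run target into (source, flow_name, is_file).

    Hypothetical helper mirroring the <module_or_file>:<flow_name>
    convention described above.
    """
    # rpartition keeps any earlier colons (e.g. Windows drive letters)
    # on the source side.
    source, _, flow_name = target.rpartition(":")
    if not source or not flow_name:
        raise ValueError(f"expected '<module_or_file>:<flow_name>', got {target!r}")
    is_file = source.endswith(".py")
    return source, flow_name, is_file

print(parse_target("examples/basic_flow/first_working_flow.py:research_agent"))
print(parse_target("my_app.flows:research_flow"))
```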

Pass flow inputs as args (a JSON object) and optionally specify a stack:

{
  "target": "my_app.flows:research_flow",
  "args": {"topic": "durable execution"},
  "stack": "prod-k8s"
}

When stack is provided, the tool passes it to .run(stack=...) so the execution targets that stack.

Example query flow

  1. Call kitaru_executions_list(status="waiting")
  2. Ask the user to confirm an action for a pending wait
  3. Call kitaru_executions_input(exec_id=..., wait=..., value=...) (MCP requires explicit wait; CLI auto-detects)
  4. Re-check state via kitaru_executions_get(exec_id)

To provision or clean up a local stack, use manage_stack(action="create", name="local-dev") or manage_stack(action="delete", name="local-dev", force=True).

Memory tools

The memory tools give assistants direct structured access to Kitaru's durable key-value memory store.

  • scope and scope_type are required on every scoped memory tool
  • version is available only on kitaru_memory_get

Typical memory query/update flow:

  1. kitaru_memory_list(scope="repo_docs", scope_type="namespace")
  2. kitaru_memory_get(key="style/release_notes", scope="repo_docs", scope_type="namespace")
  3. kitaru_memory_set(key="style/release_notes", value={"tone": "concise"}, scope="repo_docs", scope_type="namespace")
  4. kitaru_memory_history(key="style/release_notes", scope="repo_docs", scope_type="namespace")
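As a mental model of the scope/scope_type/version semantics above, here is a toy in-memory store. It is not Kitaru's backend, only an illustration of how scoped keys, newest-wins reads, and version history relate:

```python
from collections import defaultdict

class ScopedMemory:
    """Toy model of a scoped, versioned key-value store (illustrative only)."""

    def __init__(self):
        # (scope_type, scope, key) -> list of versions, oldest first
        self._store = defaultdict(list)

    def set(self, key, value, scope, scope_type):
        self._store[(scope_type, scope, key)].append(value)

    def get(self, key, scope, scope_type, version=None):
        versions = self._store.get((scope_type, scope, key), [])
        if not versions:
            raise KeyError(key)
        # No version -> newest wins; version is 1-based for illustration.
        return versions[-1] if version is None else versions[version - 1]

    def history(self, key, scope, scope_type):
        return list(self._store.get((scope_type, scope, key), []))

    def list(self, scope, scope_type):
        return sorted(key for (st, s, key) in self._store
                      if (st, s) == (scope_type, scope))

mem = ScopedMemory()
mem.set("style/release_notes", {"tone": "formal"},
        scope="repo_docs", scope_type="namespace")
mem.set("style/release_notes", {"tone": "concise"},
        scope="repo_docs", scope_type="namespace")
print(mem.get("style/release_notes", scope="repo_docs", scope_type="namespace"))
# -> {'tone': 'concise'}
```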

Use these tools when an assistant needs durable shared state without parsing CLI output or inventing its own scratchpad format.

Memory maintenance tools

The maintenance tools let assistants manage memory growth:

  • kitaru_memory_compact — summarize memory values with an LLM and write the result. Use key for the default single-key current-value workflow, key plus source_mode="history" to summarize one key's full non-deleted history, or keys (a list) with target_key for multi-key merging. Source entries are not deleted.
  • kitaru_memory_purge — physically delete old versions of one key. Set keep to retain the newest N versions, or omit it to delete everything.
  • kitaru_memory_purge_scope — purge old versions across all keys in a scope. Set include_deleted to also remove tombstoned keys entirely.
  • kitaru_memory_compaction_log — read the audit trail of all compact and purge operations for one scope (newest first).

Recommended maintenance sequence:

  1. kitaru_memory_compact(scope="repo_docs", scope_type="namespace", key="notes/preferences")
  2. Inspect the new summary if needed.
  3. kitaru_memory_purge(key="notes/preferences", scope="repo_docs", scope_type="namespace", keep=1)
  4. kitaru_memory_compaction_log(scope="repo_docs", scope_type="namespace")

For a complete memory walkthrough including seeding, flow usage, and cross-surface inspection, see examples/memory/flow_with_memory.py and its demo playbook for detailed MCP tool-call sequences.

Authentication and context

The MCP server reuses the same config/auth context as kitaru CLI and SDK. If you want MCP tools to target a local server, start one first with bare kitaru login or via kitaru_start_local_server(...). If you want MCP tools to target a deployed Kitaru server, connect first with kitaru login <server> before starting kitaru-mcp, or set KITARU_* connection variables in the MCP server environment. If you can run kitaru status, MCP tools use that same connection.

Replay behavior

kitaru_executions_replay starts a new execution and returns:

  • available: true
  • operation: "replay"
  • the serialized replayed execution payload

Use from_ for checkpoint selection, optional flow_inputs for flow parameter overrides, and optional overrides for checkpoint.* overrides.

Replay does not support wait.* overrides. If the replayed execution reaches a wait, resolve it through the normal input flow afterward.
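The replay parameters above can be sketched as a request builder that enforces the wait.* restriction. This is an illustrative shape with hypothetical IDs, not the Kitaru schema:

```python
def build_replay_request(exec_id, from_=None, flow_inputs=None, overrides=None):
    """Assemble kitaru_executions_replay arguments (illustrative shape).

    Mirrors the parameters described above and rejects wait.* overrides,
    which replay does not support.
    """
    if overrides and any(k.startswith("wait.") for k in overrides):
        raise ValueError("replay does not support wait.* overrides; "
                         "resolve waits through the input flow instead")
    request = {"exec_id": exec_id}
    if from_ is not None:
        request["from_"] = from_            # checkpoint selection
    if flow_inputs is not None:
        request["flow_inputs"] = flow_inputs  # flow parameter overrides
    if overrides is not None:
        request["overrides"] = overrides      # checkpoint.* overrides
    return request

# Hypothetical exec and checkpoint identifiers for illustration.
print(build_replay_request("exec-123", from_="checkpoint-4"))
```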

MCP currently exposes kitaru_executions_input but not a separate resume tool. If your backend requires an explicit resume step after input resolution, use the CLI or SDK resume(...) surface.
