# Overview

Durable execution and memory for long-running agents.
Kitaru is an open-source runtime and orchestration layer for long-running Python agents. It keeps agent workflows persistent, replayable, observable, and stateful without requiring you to learn a graph DSL or change your Python control flow.
## Create a durable agent
```python
import kitaru
from kitaru import checkpoint, flow

@checkpoint
def research(topic: str) -> str:
    return kitaru.llm(f"Summarize {topic} in two sentences.")

@checkpoint
def draft_report(summary: str) -> str:
    return kitaru.llm(f"Write a short report based on: {summary}")

@flow
def research_agent(topic: str) -> str:
    summary = research(topic)
    return draft_report(summary)

if __name__ == "__main__":
    research_agent.run(topic="Why do AI agents need durable execution?")
```

Each `@checkpoint` is a durable unit of work: its output is persisted automatically. If the flow fails at `draft_report`, replaying it skips `research` and reuses its recorded result. `kitaru.llm()` tracks model calls with prompt, response, usage, and cost capture built in.
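The replay behavior described above can be sketched in plain Python as a decorator that persists each step's result and serves it back on re-runs. This is an illustration of the mechanism only, not Kitaru's implementation; the store file, `calls` counter, and function bodies are invented for the sketch.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical on-disk result store; Kitaru's real backend is richer.
STORE = Path(tempfile.gettempdir()) / "demo_checkpoints.json"
STORE.unlink(missing_ok=True)  # start this demo from a clean slate

def checkpoint(fn):
    """Persist fn's result keyed by name and args; reuse it on replay."""
    def wrapper(*args):
        store = json.loads(STORE.read_text()) if STORE.exists() else {}
        key = f"{fn.__name__}:{args!r}"
        if key in store:  # replay path: skip already-completed work
            return store[key]
        result = fn(*args)
        store[key] = result
        STORE.write_text(json.dumps(store))
        return result
    return wrapper

calls = {"research": 0}

@checkpoint
def research(topic):
    calls["research"] += 1  # counts actual executions, not replays
    return f"summary of {topic}"

research("agents")  # executes and persists the result
research("agents")  # replay: served from the store, fn does not run again
assert calls["research"] == 1
```

The key point is that replay is driven by recorded results rather than by re-executing code, which is what makes skipping expensive completed work safe.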
See the Quickstart to install and run this yourself.
## What your agent can do with Kitaru
These are the shipped primitives Kitaru adds to ordinary Python agent code — no rewrites required.
- **Durable execution**: Wrap steps in `@checkpoint` and your agent picks up where it left off without re-running expensive work
- **Replay from failure**: Re-run only the failed part of a flow by replaying from a checkpoint instead of starting from scratch
- **Wait and resume**: Add `kitaru.wait()` and let agents pause for a human, another system, or later input while compute is released
- **Durable memory**: `kitaru.memory` stores scoped, versioned key-value state you can seed, inspect, compact, and reuse across executions
- **Execution management**: `KitaruClient` lets you inspect, replay, retry, resume, and cancel executions from code or the CLI
- **Tracked LLM calls**: Use `kitaru.llm()` and every call gets automatic secret resolution, prompt/response capture, and cost tracking
- **Persistent data**: `kitaru.save()` / `kitaru.load()` let agents store and retrieve files, objects, and results across executions
- **Structured observability**: `kitaru.log()` attaches key-value metadata to any checkpoint or flow for debugging and the UI
- **Runtime configuration**: `kitaru.configure()` sets your model, log store, and stack defaults in one call
- **Framework and infrastructure portability**: Keep your Python control flow, use your preferred framework, and run locally or on remote stacks across Kubernetes, Vertex, SageMaker, and AzureML
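The durable-memory primitive in the list above can be illustrated with a minimal scoped, versioned key-value store in plain Python, where every write appends a new version and earlier versions stay inspectable. This is a sketch of the idea under stated assumptions, not `kitaru.memory`'s actual API; the class name, file layout, and method signatures are invented.

```python
import json
import tempfile
from pathlib import Path

class VersionedMemory:
    """Minimal scoped, versioned key-value store persisted to disk."""

    def __init__(self, scope, root):
        self.path = Path(root) / f"{scope}.json"  # one file per scope

    def _load(self):
        return json.loads(self.path.read_text()) if self.path.exists() else {}

    def set(self, key, value):
        data = self._load()
        versions = data.setdefault(key, [])
        versions.append(value)  # writes append; nothing is overwritten
        self.path.write_text(json.dumps(data))
        return len(versions)    # version number assigned to this write

    def get(self, key, version=None):
        versions = self._load()[key]
        return versions[-1] if version is None else versions[version - 1]

mem = VersionedMemory("demo-agent", tempfile.mkdtemp())
mem.set("plan", "draft outline")
mem.set("plan", "final outline")
assert mem.get("plan") == "final outline"             # latest version
assert mem.get("plan", version=1) == "draft outline"  # history survives
```

Because state lives outside the process, a later execution can construct the same scope and pick up exactly where a previous one left off, which is the property the bullet list calls "reuse across executions."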
## Next Steps
- **Installation**: Install Kitaru with uv or pip
- **Quickstart**: Run a tiny flow end to end
- **Examples**: Browse runnable workflows grouped by goal
- **Core Concepts**: Understand flows, checkpoints, and the execution model for long-running agents
- **Execution Management**: Inspect runs, replay, retry, resume, and fetch logs
- **Memory**: Seed and inspect durable, versioned key-value state across Python, CLI, and MCP
- **Wait, Input, and Resume**: Pause flows for external input and continue the same execution
- **Tracked LLM Calls**: Use `kitaru.llm()` with aliases, secrets, and captured artifacts
- **Secrets + Model Registration**: Store provider credentials, register a model alias, and use `kitaru.llm()`
- **Configuration**: Set runtime defaults and understand override precedence
- **Stacks**: Create, inspect, switch, and clean up local and remote stacks across Kubernetes, Vertex, SageMaker, and AzureML
- **MCP Server**: Query and manage executions via MCP tools
- **Claude Code Skills**: Install the Kitaru scoping and authoring skills
- **CLI Reference**: Browse the generated command reference
- **Blog**: Read essays on durable execution, long-running agents, and Kitaru's design