llm
LLM call primitive for tracked model interactions.
kitaru.llm() wraps one provider SDK completion call with Kitaru tracking.
Built-in runtime support covers openai/*, anthropic/*, ollama/*,
and openrouter/* models. Ollama and OpenRouter use the OpenAI-compatible
API and require the openai package (pip install kitaru[openai]).
llm(prompt, *, model=None, system=None, temperature=None, max_tokens=None, name=None) -> str
Make a tracked LLM call.
Parameters
prompt (str | list[dict[str, Any]]): User prompt text or a chat-style message list.
model (str | None = None): Model alias or provider/model identifier (e.g. openai/gpt-5-nano).
system (str | None = None): Optional system prompt.
temperature (float | None = None): Optional sampling temperature.
max_tokens (int | None = None): Optional maximum response tokens.
name (str | None = None): Optional display name for this call.
Returns
str: The model response text.