Python SDK

Official Python SDK for Converra — automate AI agent improvement.

The Python SDK has feature parity with the Node.js SDK: the same wrap() pattern, the same multi-agent tracing, and the same A/B testing.

Installation

```bash
pip install converra
```

Quick Start

```python
from converra import Converra
from openai import OpenAI

converra = Converra(api_key="sk_live_...").init()
client = converra.wrap(OpenAI())

# Use client normally — conversations captured automatically
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
```

OpenAI

```python
from converra import Converra
from openai import OpenAI

converra = Converra().init()  # Reads CONVERRA_API_KEY from env
client = converra.wrap(OpenAI())

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful support agent."},
        {"role": "user", "content": "I need help with my order"},
    ],
)
```

Async

```python
from openai import AsyncOpenAI

async_client = converra.wrap(AsyncOpenAI())

response = await async_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
```

Anthropic

```python
from anthropic import Anthropic

client = converra.wrap(Anthropic())

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    system="You are a helpful assistant.",
    messages=[{"role": "user", "content": "Hello"}],
    max_tokens=100,
)
```

Async

```python
from anthropic import AsyncAnthropic

async_client = converra.wrap(AsyncAnthropic())
```

LangChain

Use the callback handler for LangChain integration:

```python
from converra.instrumentors import ConverraCallbackHandler

handler = ConverraCallbackHandler(converra.create_interceptor())

# Pass to any LangChain component
chain.invoke({"input": "Hello"}, config={"callbacks": [handler]})
```

Multi-Agent Tracing

Link related LLM calls into one trace using a context manager:

```python
with converra.trace("session-123") as t:
    # All LLM calls inside are linked
    r1 = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": "Classify this request..."}],
    )
    r2 = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": "You are a billing agent..."}],
    )
```

Traces are propagated with contextvars, so they work correctly across async boundaries.
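The propagation mechanism can be seen without the SDK at all. Below is a minimal stand-alone sketch of how a value set via contextvars flows into concurrently running tasks; the variable and function names are illustrative, not Converra APIs:

```python
import asyncio
import contextvars

# A context variable standing in for the SDK's internal trace state.
current_trace = contextvars.ContextVar("current_trace", default=None)

async def llm_call(results):
    # Each call sees the trace id of the context it was started in.
    results.append(current_trace.get())

async def main():
    results = []
    token = current_trace.set("session-123")  # what trace(...) would do on enter
    try:
        # Tasks spawned here copy the current context, trace id included.
        await asyncio.gather(llm_call(results), llm_call(results))
    finally:
        current_trace.reset(token)  # what trace(...) would do on exit
    return results

print(asyncio.run(main()))  # ['session-123', 'session-123']
```

Because each task copies the context at creation time, concurrent calls in different traces cannot leak into each other.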

Explicit Prompt ID

For dynamic prompts (e.g., with RAG context), pass an explicit prompt ID:

```python
client = converra.wrap(OpenAI(), prompt_id="support-agent")
```

Configuration

```python
converra = Converra(
    api_key="sk_live_...",                      # Required (or CONVERRA_API_KEY env var)
    base_url="https://app.converra.ai/api/v1",  # Optional
    timeout=30.0,                               # Optional, seconds
    cache_ttl=300.0,                            # Optional, prompt cache TTL in seconds
    batch_size=10,                              # Optional, batch size for transport
    flush_interval=5.0,                         # Optional, flush interval in seconds
)
```
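batch_size and flush_interval describe a common buffering pattern: events accumulate until either the batch fills or the interval elapses, then go out in one request. A rough stand-alone sketch of that behaviour, assuming nothing about the SDK's actual internals (class and method names here are illustrative):

```python
import time

class BatchBuffer:
    """Buffers events; flushes when the batch fills or the interval elapses."""

    def __init__(self, batch_size=10, flush_interval=5.0, send=print):
        self.batch_size = batch_size
        self.flush_interval = flush_interval
        self.send = send                      # transport callable
        self.buffer = []
        self.last_flush = time.monotonic()

    def add(self, event):
        self.buffer.append(event)
        elapsed = time.monotonic() - self.last_flush
        if len(self.buffer) >= self.batch_size or elapsed >= self.flush_interval:
            self.flush()

    def flush(self):
        # Ship whatever is buffered, even a partial batch.
        if self.buffer:
            self.send(self.buffer)
            self.buffer = []
        self.last_flush = time.monotonic()
```

In this model, a final flush of any partially filled batch is what a shutdown call would perform (see Graceful Shutdown below).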

Environment Variables

| Variable | Description | Default |
| --- | --- | --- |
| `CONVERRA_API_KEY` | API key | (required) |
| `CONVERRA_BASE_URL` | API base URL | `https://api.converra.io/v1` |
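Since the constructor also accepts these values directly, explicit arguments presumably take precedence over the environment. That resolution can be sketched as follows (resolve_api_key is a hypothetical helper, not part of the SDK):

```python
import os

def resolve_api_key(explicit=None):
    # Hypothetical: an explicit api_key argument wins; otherwise fall back
    # to the CONVERRA_API_KEY environment variable.
    return explicit if explicit is not None else os.environ.get("CONVERRA_API_KEY")
```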

Graceful Shutdown

Flush pending data before process exit:

```python
converra.shutdown()
```

Or use atexit:

```python
import atexit
atexit.register(converra.shutdown)
```
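If in doubt whether the hook fires, atexit ordering is easy to verify in isolation, with no SDK involved: registered handlers run at interpreter exit, after the main program finishes. Here a lambda stands in for converra.shutdown:

```python
import subprocess
import sys

# Run a throwaway interpreter that registers an exit handler; its output
# appears only after the main work has printed.
snippet = (
    "import atexit\n"
    "atexit.register(lambda: print('flushed'))\n"
    "print('work done')\n"
)
result = subprocess.run(
    [sys.executable, "-c", snippet], capture_output=True, text=True
)
print(result.stdout)  # 'work done', then 'flushed'
```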

Requirements

Next Steps