# Python SDK

Official Python SDK for Converra — automate AI agent improvement.

Feature parity with the Node.js SDK: the same `wrap()` pattern, the same multi-agent tracing, the same A/B testing.
## Installation

```bash
pip install converra
```

## Quick Start
```python
from converra import Converra
from openai import OpenAI

converra = Converra(api_key="sk_live_...").init()
client = converra.wrap(OpenAI())

# Use the client normally — conversations are captured automatically
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
```

## OpenAI
```python
from converra import Converra
from openai import OpenAI

converra = Converra().init()  # Reads CONVERRA_API_KEY from env
client = converra.wrap(OpenAI())

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful support agent."},
        {"role": "user", "content": "I need help with my order"},
    ],
)
```

### Async
```python
from openai import AsyncOpenAI

async_client = converra.wrap(AsyncOpenAI())

response = await async_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
```

## Anthropic
```python
from anthropic import Anthropic

client = converra.wrap(Anthropic())

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    system="You are a helpful assistant.",
    messages=[{"role": "user", "content": "Hello"}],
    max_tokens=100,
)
```

### Async
```python
from anthropic import AsyncAnthropic

async_client = converra.wrap(AsyncAnthropic())
```

## LangChain
Use the callback handler for LangChain integration:
```python
from converra.instrumentors import ConverraCallbackHandler

handler = ConverraCallbackHandler(converra.create_interceptor())

# Pass the handler to any LangChain component
chain.invoke({"input": "Hello"}, config={"callbacks": [handler]})
```

## Multi-Agent Tracing
Link related LLM calls into one trace using a context manager:
```python
with converra.trace("session-123") as t:
    # All LLM calls inside the block are linked
    r1 = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": "Classify this request..."}],
    )
    r2 = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": "You are a billing agent..."}],
    )
```

Traces use `contextvars`, so they work across async boundaries.
## Explicit Prompt ID
For dynamic prompts (e.g., with RAG context), pass an explicit prompt ID:
```python
client = converra.wrap(OpenAI(), prompt_id="support-agent")
```

## Configuration
```python
converra = Converra(
    api_key="sk_live_...",                      # Required (or CONVERRA_API_KEY env var)
    base_url="https://app.converra.ai/api/v1",  # Optional
    timeout=30.0,                               # Optional, request timeout in seconds
    cache_ttl=300.0,                            # Optional, prompt cache TTL in seconds
    batch_size=10,                              # Optional, batch size for transport
    flush_interval=5.0,                         # Optional, flush interval in seconds
)
```

## Environment Variables
| Variable | Description | Default |
|---|---|---|
| `CONVERRA_API_KEY` | API key | — |
| `CONVERRA_BASE_URL` | API base URL | `https://api.converra.io/v1` |
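The `batch_size` and `flush_interval` settings above suggest a buffered transport that ships events in batches and flushes on a timer. The sketch below is a plain-Python illustration of that pattern, not Converra internals; it also shows why an explicit flush before exit matters, which the next section covers.

```python
import threading

class BatchedTransport:
    """Illustrative buffered sender: flushes when a batch fills or a timer fires."""

    def __init__(self, send, batch_size=10, flush_interval=5.0):
        self._send = send                # callable that ships a list of events
        self._batch_size = batch_size
        self._flush_interval = flush_interval
        self._buffer = []
        self._lock = threading.Lock()
        self._timer = None

    def enqueue(self, event):
        with self._lock:
            self._buffer.append(event)
            full = len(self._buffer) >= self._batch_size
            if self._timer is None:
                # Arm a one-shot timer so small trickles still get sent
                self._timer = threading.Timer(self._flush_interval, self.flush)
                self._timer.daemon = True
                self._timer.start()
        if full:
            self.flush()

    def flush(self):
        with self._lock:
            batch, self._buffer = self._buffer, []
            if self._timer is not None:
                self._timer.cancel()
                self._timer = None
        if batch:
            self._send(batch)

sent = []
transport = BatchedTransport(sent.append, batch_size=3, flush_interval=60.0)
for i in range(4):
    transport.enqueue(i)
# Three events filled a batch and were sent; the fourth is still buffered
transport.flush()  # what a shutdown hook would do before process exit
```

Without that final flush, the last buffered event would be lost if the process exited before the timer fired.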
## Graceful Shutdown

Flush pending data before process exit:

```python
converra.shutdown()
```

Or use `atexit`:
```python
import atexit

atexit.register(converra.shutdown)
```

## Requirements

- Python 3.9+
- A Converra API key (get one here)
## Next Steps

- Node.js SDK — the Node.js equivalent
- Integration Guide — detailed patterns and framework examples
- API Reference — full method documentation
