# Integration Guide

Step-by-step guide to integrating the Converra SDK into your application.
## Option 1: Wrap Your LLM Client (Recommended)
The simplest integration — wrap your LLM client and all conversations are captured automatically:
```typescript
import { Converra } from 'converra';
import OpenAI from 'openai';

const converra = new Converra({ apiKey: process.env.CONVERRA_API_KEY });
const openai = converra.wrap(new OpenAI());

// Use openai normally — conversations captured, A/B testing enabled
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a helpful support agent.' },
    { role: 'user', content: userMessage }
  ]
});
```

Converra identifies your agent by hashing the system prompt content. For dynamic prompts (e.g., with RAG context), pass an explicit prompt ID:
```typescript
const openai = converra.wrap(new OpenAI(), { promptId: 'support-agent' });
```

## Option 2: Auto-Instrumentation (Zero Code)
No wrapping needed — every OpenAI and Anthropic client is patched automatically:
```bash
# Via Node.js flag
CONVERRA_API_KEY=sk_live_... node --import converra/auto server.js
```

Or add one import at the top of your entrypoint:
```typescript
// server.ts (first line)
import 'converra/auto';

// Later in your code — all clients are already instrumented
import OpenAI from 'openai';

const openai = new OpenAI(); // Already wrapped
```

Auto-instrumentation is controlled via environment variables:
| Variable | Description | Default |
|---|---|---|
| `CONVERRA_API_KEY` | API key (required) | — |
| `CONVERRA_ENABLED` | Set to `false` to disable | `true` |
| `CONVERRA_PROMPT_ID` | Explicit prompt ID for agent matching | — |
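For example, the kill switch lets you ship the `--import converra/auto` flag everywhere and toggle capture per environment without touching code (the key value below is a placeholder):

```shell
# Run with auto-instrumentation loaded but capture disabled,
# e.g. in a staging environment
CONVERRA_API_KEY=sk_live_placeholder \
CONVERRA_ENABLED=false \
CONVERRA_PROMPT_ID=support-agent \
node --import converra/auto server.js
```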
## Option 3: Vercel AI SDK Middleware
The standalone `@converra/ai-sdk-middleware` package captures all `generateText()` and `streamText()` calls with zero risk — it never modifies your LLM requests or responses:
```bash
npm install @converra/ai-sdk-middleware
```

```typescript
import { createConverraMiddleware } from '@converra/ai-sdk-middleware';
import { wrapLanguageModel, generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const model = wrapLanguageModel({
  model: openai('gpt-4o'),
  middleware: createConverraMiddleware({
    apiKey: process.env.CONVERRA_API_KEY!,
    promptName: 'My Support Agent',
  }),
});

// Use normally — traces are captured automatically
const result = await generateText({ model, prompt: 'Help me reset my password' });
```

Configuration options:
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | required | Your Converra API key |
| `promptName` | `string` | `'Untitled Prompt'` | Agent name (auto-created in Converra if it doesn't exist) |
| `batchSize` | `number` | `10` | Traces buffered before flushing |
| `flushInterval` | `number` | `30000` | Milliseconds between automatic flushes |
| `enabled` | `boolean` | `true` | Kill switch to disable capture |
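As a sketch of how these options combine, a low-traffic service might trade batching for freshness (the option values here are illustrative, not recommendations):

```typescript
import { createConverraMiddleware } from '@converra/ai-sdk-middleware';

// Flush eagerly so traces show up promptly on a low-traffic service.
const middleware = createConverraMiddleware({
  apiKey: process.env.CONVERRA_API_KEY!,
  promptName: 'Support Agent (staging)',
  batchSize: 1,           // send each trace as soon as it's captured
  flushInterval: 5000,    // back-stop timer in milliseconds
  enabled: process.env.NODE_ENV !== 'test', // disable capture in CI runs
});
```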
If you're already using the full `converra` SDK and want variant swapping (A/B testing on prompts), use the built-in middleware instead:
```typescript
import { Converra } from 'converra';
import { createConverraMiddleware } from 'converra/ai-sdk';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const converra = new Converra({ apiKey: process.env.CONVERRA_API_KEY });
const middleware = createConverraMiddleware(converra.createInterceptor());

const result = await streamText({
  model: openai('gpt-4o'),
  messages,
  experimental_middleware: middleware,
});
```

## Multi-Agent Tracing
For applications with multiple agents, use `trace()` to link related LLM calls:
```typescript
const result = await converra.trace('customer-session-456').run(async () => {
  // All LLM calls inside are linked into one trace
  const classification = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'system', content: 'Classify this request...' }, /* ...user messages */],
  });
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'system', content: 'You are a billing agent...' }, /* ...user messages */],
  });
  return response;
});
// Agent boundaries auto-detected by system prompt changes
```

Traces use `AsyncLocalStorage` — all LLM calls within the callback are automatically linked, even across async boundaries.
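The propagation mechanism can be seen in isolation with Node's own `AsyncLocalStorage`, independent of Converra (the `fakeLlmCall` helper below is a stand-in, not part of the SDK):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

const traceStore = new AsyncLocalStorage<string>();

// Stand-in for an instrumented LLM call: it reads the current
// trace ID from ambient context rather than taking it as an argument.
async function fakeLlmCall(label: string): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, 10)); // cross an async boundary
  return `${label} -> trace ${traceStore.getStore()}`;
}

async function main() {
  const results = await traceStore.run('customer-session-456', () =>
    // Even parallel calls started inside run() see the same trace ID.
    Promise.all([fakeLlmCall('classify'), fakeLlmCall('respond')]),
  );
  console.log(results);
}

main();
```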
## Management API (Manual Integration)
For cases where you want explicit control over prompt fetching and conversation logging:
```typescript
import { Converra } from 'converra';
import OpenAI from 'openai';

const converra = new Converra({ apiKey: process.env.CONVERRA_API_KEY });
const openai = new OpenAI();

// 1. Fetch your prompt (cached for 5 minutes by default)
const prompt = await converra.prompts.get('prompt_123');

// 2. Use the prompt in your AI application
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: prompt.content },
    { role: 'user', content: userMessage }
  ]
});

// 3. Log the conversation for insights
await converra.conversations.create({
  promptId: 'prompt_123',
  content: `User: ${userMessage}\nAssistant: ${response.choices[0].message.content}`,
  status: 'completed'
});
```

## Webhook-Based Cache Invalidation
For instant prompt updates when optimizations complete:
```typescript
import { Converra, createWebhookHandler } from 'converra';

const converra = new Converra({
  apiKey: process.env.CONVERRA_API_KEY,
  cache: { strategy: 'memory', ttl: 5 * 60 * 1000 }
});

const webhookHandler = createWebhookHandler({
  secret: process.env.CONVERRA_WEBHOOK_SECRET,
  onPromptUpdated: (data) => {
    converra.cache.invalidate(data.promptId);
  },
  onOptimizationCompleted: (data) => {
    if (data.results.winningVariantId) {
      console.log(`${data.results.improvementPercentage}% improvement`);
    }
  },
});

// Express
app.post('/webhooks/converra', express.raw({ type: 'application/json' }), async (req, res) => {
  const result = await webhookHandler(req.body, req.headers['x-converra-signature']);
  res.status(result.success ? 200 : 400).json(result);
});
```

## Framework Examples
### Next.js App Router
```typescript
// lib/converra.ts
import { Converra } from 'converra';

export const converra = new Converra({
  apiKey: process.env.CONVERRA_API_KEY!,
});

// app/api/chat/route.ts
import { converra } from '@/lib/converra';
import OpenAI from 'openai';

const openai = converra.wrap(new OpenAI());

export async function POST(req: Request) {
  const { messages } = await req.json();
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages,
  });
  return Response.json(response);
}
```

### Express.js
```typescript
import express from 'express';
import { Converra } from 'converra';
import OpenAI from 'openai';

const app = express();
app.use(express.json()); // needed so req.body.messages is parsed

const converra = new Converra({ apiKey: process.env.CONVERRA_API_KEY! });
const openai = converra.wrap(new OpenAI());

app.post('/chat', async (req, res) => {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: req.body.messages,
  });
  res.json(response);
});
```

## Graceful Shutdown
Flush pending data before process exit:
```typescript
process.on('SIGTERM', async () => {
  await converra.shutdown();
  process.exit(0);
});
```

Auto-instrumentation (`converra/auto`) handles shutdown automatically.
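If your platform enforces a hard termination deadline (e.g. a Kubernetes grace period), you can bound the final flush; the 5-second budget below is an assumption, not an SDK default:

```typescript
process.on('SIGTERM', async () => {
  // Give the final flush a bounded window, then exit regardless.
  const deadline = new Promise<void>((resolve) => setTimeout(resolve, 5000));
  await Promise.race([converra.shutdown(), deadline]);
  process.exit(0);
});
```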
