# Node.js SDK
Official Node.js SDK for Converra — automate AI agent improvement.
## Installation

```bash
npm install converra
```

## Quick Start — Wrap Your LLM Client
The fastest way to integrate. Wrap your LLM client with one line — all conversations are captured automatically, and A/B testing works transparently.
### OpenAI

```typescript
import { Converra } from 'converra';
import OpenAI from 'openai';

const converra = new Converra({ apiKey: process.env.CONVERRA_API_KEY });
const openai = converra.wrap(new OpenAI());

// Use openai normally — all calls are intercepted and captured
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }],
});
```

### Anthropic
```typescript
import Anthropic from '@anthropic-ai/sdk';

const anthropic = converra.wrap(new Anthropic());

const response = await anthropic.messages.create({
  model: 'claude-sonnet-4-20250514',
  system: 'You are a helpful assistant.',
  messages: [{ role: 'user', content: 'Hello' }],
  max_tokens: 100,
});
```

### Vercel AI SDK
```bash
npm install @converra/ai-sdk-middleware
```

```typescript
import { createConverraMiddleware } from '@converra/ai-sdk-middleware';
import { wrapLanguageModel, streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const model = wrapLanguageModel({
  model: openai('gpt-4o'),
  middleware: createConverraMiddleware({
    apiKey: process.env.CONVERRA_API_KEY!,
    promptName: 'My Support Agent',
  }),
});

const result = await streamText({ model, messages });
```

## Zero-Code Auto-Instrumentation
No code changes needed — every OpenAI and Anthropic call is captured automatically:
```bash
# Via Node.js flag (zero imports)
CONVERRA_API_KEY=sk_live_... node --import converra/auto server.js
```

Or add one import at the top of your entrypoint:
```typescript
import 'converra/auto';
// That's it — all LLM clients are patched automatically
```

## Multi-Agent Tracing
Link related LLM calls into one trace for multi-agent analysis:
```typescript
const result = await converra.trace('session-123').run(async () => {
  const r1 = await openai.chat.completions.create({...}); // orchestrator
  const r2 = await openai.chat.completions.create({...}); // sub-agent
  return r2;
});
// Agent boundaries are auto-detected by system prompt changes
```

## Features
- LLM Client Wrapping — one-line integration for OpenAI, Anthropic, and the Vercel AI SDK
- Auto-Instrumentation — zero-code capture via `converra/auto`
- Multi-Agent Tracing — link related LLM calls with AsyncLocalStorage
- A/B Variant Swapping — transparent variant injection for optimization testing
- Prompt Management — fetch, create, and update prompts with built-in caching
- Conversation Logging — manual conversation capture for insights
- Webhook Handling — type-safe handlers with signature verification
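The webhook-handling feature above mentions signature verification. As a rough sketch of the underlying idea — assuming an HMAC-SHA256 scheme over the raw request body, which is a common pattern but not confirmed by these docs (see the Webhooks page for Converra's actual header names and algorithm) — verification looks like this:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Illustrative only: assumes the signature is a hex-encoded
// HMAC-SHA256 of the raw payload, keyed by your webhook secret.
// Converra's real scheme may differ — consult the Webhooks docs.
export function verifyWebhookSignature(
  payload: string,
  signature: string,
  secret: string,
): boolean {
  const expected = createHmac('sha256', secret).update(payload).digest('hex');
  const a = Buffer.from(signature, 'hex');
  const b = Buffer.from(expected, 'hex');
  // Constant-time comparison to avoid timing side channels
  return a.length === b.length && timingSafeEqual(a, b);
}
```

The SDK's built-in handlers do this for you; the sketch only shows why you should verify against the raw body (before any JSON parsing) and compare in constant time.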
## Configuration
```typescript
const converra = new Converra({
  apiKey: 'sk_live_...', // Required
  baseUrl: 'https://api.converra.io/v1', // Optional
  timeout: 30000, // Optional, request timeout in ms
  cache: {
    strategy: 'memory', // Optional: 'memory' or 'none'
    ttl: 300000, // Optional: cache TTL in ms (default 5 min)
  },
});
```

### Environment Variables (for auto-instrumentation)
| Variable | Description | Default |
|---|---|---|
| `CONVERRA_API_KEY` | API key (required) | — |
| `CONVERRA_BASE_URL` | API base URL | `https://api.converra.io/v1` |
| `CONVERRA_ENABLED` | Enable/disable capture | `true` |
| `CONVERRA_PROMPT_ID` | Explicit prompt matching | — |
| `CONVERRA_SESSION_ID` | Session grouping | — |
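As a mental model for the `cache` option in the Configuration section — `strategy: 'memory'` with a `ttl` — an in-memory TTL cache can be sketched as below. This is illustrative only, not the SDK's internal implementation; the injectable clock is an assumption added here to make the sketch testable:

```typescript
// Minimal sketch of a 'memory' cache strategy with TTL-based expiry.
// Not Converra's actual implementation — for intuition only.
export class MemoryCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(
    private ttlMs: number = 300_000, // default 5 minutes, matching the config above
    private now: () => number = Date.now, // injectable clock (hypothetical, for testing)
  ) {}

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (entry === undefined) return undefined;
    if (this.now() > entry.expiresAt) {
      this.store.delete(key); // lazily evict expired entries on read
      return undefined;
    }
    return entry.value;
  }
}
```

With `strategy: 'none'`, every prompt fetch hits the API; with `'memory'`, repeated fetches within the TTL are served locally and refreshed after expiry.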
## Requirements
- Node.js 18+
- A Converra API key (get one here)
## Next Steps
- Integration Guide — Detailed patterns and framework examples
- Python SDK — Python equivalent with feature parity
- API Reference — Full method documentation
- Webhooks — Real-time event notifications
