Capture, Optimize, and Deploy — From Your Stack
Start with zero code changes — a CLI flag or an API key in your observability platform. Graduate to one-line SDK wrapping for the full closed loop.
Diagnose
Connectors, Paste / Upload
Auto-insights, failure patterns, and step-level diagnosis from your existing data.
Diagnose + Fix + Test
MCP, REST API
Interactive optimization, A/B simulation, and variant comparison from your IDE or backend.
Diagnose + Fix + Deploy
SDK
Autonomous closed loop — variants generated, tested, and served at runtime. No redeployment.
SDK — The Full Loop
Full Closed Loop
One import to capture, optimize, and deploy — without changing how you call your LLM.
Capture
Auto-instruments every LLM call. Prompts detected by content hash — no manual registration needed.
Optimize
Converra generates variants and tests them in simulation. You review or auto-approve.
Deploy
Winning variants served transparently at runtime. Deterministic session bucketing ensures consistency.
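Deterministic session bucketing can be sketched as hashing the session ID into a variant index. This is an illustrative sketch, not the SDK's actual internals; the function name is ours.

```typescript
import { createHash } from "node:crypto";

// Illustrative sketch of deterministic session bucketing (not the SDK's
// real implementation): the same session ID always maps to the same
// variant, so a user never flips between prompts mid-conversation.
function bucketSession(sessionId: string, variantCount: number): number {
  const digest = createHash("sha256").update(sessionId).digest();
  // Use the first 4 bytes of the hash as an unsigned integer, then
  // reduce modulo the number of variants under test.
  return digest.readUInt32BE(0) % variantCount;
}

// Same input, same bucket — every time, on every machine.
const a = bucketSession("session-42", 3);
const b = bucketSession("session-42", 3);
console.log(a === b); // true
```

Because the assignment is a pure function of the session ID, no coordination or shared state is needed across servers.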
npm install converra # TypeScript
pip install converra # Python

# .env
CONVERRA_API_KEY=sk_...
// One line at app startup — patches all LLM clients
import 'converra/auto';
// Or with Node.js flag (zero code changes):
// node --import converra/auto server.js
Why the SDK?
- Transparent A/B testing — SDK swaps system prompts during active optimization with deterministic session bucketing.
- Auto-prompt detection — content hash matching means zero manual prompt registration.
- Multi-agent tracing with automatic agent boundary detection.
- Fail-safe: if Converra is down, your agent is completely unaffected.
- Your API keys pass through to the provider — never stored by Converra.
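Content-hash prompt detection can be sketched as fingerprinting the normalized prompt text. This is a hedged sketch of the idea, with names of our own choosing, not the SDK's internals.

```typescript
import { createHash } from "node:crypto";

// Illustrative sketch of content-hash prompt matching: a prompt is
// identified by a hash of its normalized text, so the same prompt is
// recognized wherever it appears — no manual registration.
function promptFingerprint(content: string): string {
  // Normalize whitespace so trivial formatting changes don't create
  // a "new" prompt identity.
  const normalized = content.trim().replace(/\s+/g, " ");
  return createHash("sha256").update(normalized).digest("hex");
}

const original = "You are a helpful customer support agent.";
const reformatted = "You are a helpful\n  customer support agent.";
console.log(promptFingerprint(original) === promptFingerprint(reformatted)); // true
```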
Data Connectors
Already tracing LLM runs? Converra syncs your data, generates insights automatically, and surfaces the failures worth fixing.
LangSmith
by LangChain
Continuous sync of traces and feedback scores. Converra auto-generates insights, detects failure patterns, and identifies which agent step breaks.
Langfuse
Open source LLM observability
Continuous sync from Langfuse Cloud (EU/US regions). Feedback scores pulled asynchronously. Same auto-diagnosis and failure detection.
OTel / Axiom
OpenTelemetry gen_ai.* spans
Sync gen_ai spans from Axiom. Same incremental import, auto-insights, and agent topology detection as other connectors.
How it works
Connect API key
Select workspace & project
Configure sync (hourly → daily)
Review diagnosed failures & fixes
Multi-agent traces are grouped into agent systems with path visualization and weakest-link scoring.
Connectors diagnose and surface insights. To deploy fixes automatically, pair them with the SDK.
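Weakest-link scoring over a multi-agent path can be sketched as taking the lowest-scoring step in the trace. The field names below are ours, not Converra's data model.

```typescript
// Hypothetical sketch of weakest-link scoring for a multi-agent path:
// each agent step carries a quality score, and the step most worth
// fixing is the one with the lowest score.
interface AgentStep {
  agent: string;
  score: number; // 0..1 quality score for this step
}

function weakestLink(path: AgentStep[]): AgentStep {
  return path.reduce((worst, step) => (step.score < worst.score ? step : worst));
}

const trace: AgentStep[] = [
  { agent: "router", score: 0.95 },
  { agent: "retriever", score: 0.61 }, // the step worth fixing first
  { agent: "responder", score: 0.88 },
];
console.log(weakestLink(trace).agent); // "retriever"
```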
GitHub Integration
Deploy
Optimization winners delivered as pull requests — with metrics, evidence, and a one-click merge path. Like Dependabot, but for your AI agents.
Auto-PR on Completion
When an optimization finds a winner, Converra opens a PR in your repo with the updated prompt file, metrics table, and evidence summary.
Check Runs & Labels
Each PR includes a GitHub check run with pass/fail status and labels like converra:optimization for easy filtering.
Merge-Back Sync
When you merge, Converra updates the prompt status automatically. New PRs supersede stale ones for the same prompt.
Setup
Connect GitHub — install the Converra GitHub App from the Integrations page. Select which repos to grant access to.
Map prompts to files — Converra auto-detects prompt files in your repo, or you map them manually in the prompt settings.
Run optimization — when a winner is found, a PR appears in your repo with the diff, metrics, and evidence.
What's in the PR
- Metrics table — head-to-head lift, goal achievement, sentiment, and clarity scores with deltas vs baseline.
- Evidence summary — number of simulations, persona coverage, and statistical confidence.
- Check run — GitHub check with pass/fail tied to the optimization outcome.
- Labels — converra:optimization and converra:auto-pr for filtering.
- Auto-superseding — if a newer optimization completes for the same prompt, the old PR is closed and replaced.
Manual PR creation (API)
curl -X POST https://converra.ai/api/v1/optimizations/{id}/github-pr \
-H "Authorization: Bearer sk_your_api_key" \
-H "Content-Type: application/json" \
-d '{
"owner": "your-org",
"repo": "your-repo",
"filePath": "prompts/support-agent.txt",
"baseBranch": "main"
}'
Returns the PR URL and number. Also available via the SDK: converra.optimizations.createGitHubPR(id, options).
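The same call can be made from TypeScript with fetch. The URL and body fields come from the curl example above; the helper function itself is a convenience sketch of ours.

```typescript
// Builds the request for the manual PR endpoint shown above. Only the
// URL and body fields are from the documented example; the helper name
// and shape are illustrative.
interface GitHubPROptions {
  owner: string;
  repo: string;
  filePath: string;
  baseBranch: string;
}

function buildGitHubPRRequest(optimizationId: string, apiKey: string, opts: GitHubPROptions) {
  return {
    url: `https://converra.ai/api/v1/optimizations/${optimizationId}/github-pr`,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(opts),
    },
  };
}

// Usage sketch:
// const { url, init } = buildGitHubPRRequest("opt_123", apiKey, {
//   owner: "your-org", repo: "your-repo",
//   filePath: "prompts/support-agent.txt", baseBranch: "main",
// });
// const pr = await fetch(url, init).then((r) => r.json());
```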
MCP Server
Interactive optimization from your AI assistant. Works with Cursor, Claude Code, Windsurf, and any MCP-compatible client.
Great for exploring Converra's capabilities before committing to SDK integration.
Copy this to your AI assistant:
“Install the Converra MCP server, then help me upload my prompts and optimize them.”
After signing up, you'll get an API key to include in the prompt.
Easy Data Import
(no code required)
See Converra's value in 5 minutes. Upload existing conversations, get insights immediately — no integration required.
Paste Data
On the Integrations page, paste chat transcripts or logs directly into the app.
Converra parses prompts and conversations and prepares them for optimization.
You get immediate visibility into performance issues and opportunities.
Upload File
Upload .txt, .csv, .xlsx, or .zip files (up to 10MB each).
See how many prompts and conversations were processed.
Click straight into the resulting prompt records and insights.
SDK Reference (TypeScript + Python)
Use the official TypeScript or Python SDK from your backend or worker processes. Both are published and feature-equivalent.
npm install converra # TypeScript
pip install converra # Python

import { Converra } from "converra";
const converra = new Converra({
apiKey: process.env.CONVERRA_API_KEY!,
// baseUrl: "https://api.converra.io/v1", // optional override
// timeout: 30000, // ms, optional
});
// Prompts
const { data: prompts } = await converra.prompts.list();
const prompt = await converra.prompts.get("prompt_123");
const newPrompt = await converra.prompts.create({
name: "Customer Support Agent",
content: "You are a helpful customer support agent...",
objective: "Resolve customer issues efficiently while maintaining satisfaction",
tags: ["support", "production"],
});
await converra.prompts.update(newPrompt.id, { status: "active" });
// Optimizations
const optimization = await converra.optimizations.trigger({
promptId: newPrompt.id,
mode: "exploratory",
variantCount: 3,
});
const results = await converra.optimizations.getVariants(optimization.id);
await converra.optimizations.applyVariant(optimization.id);
// Conversations & insights
await converra.conversations.create({
promptId: newPrompt.id,
content: "...",
status: "completed",
});
const promptInsights = await converra.insights.forPrompt(newPrompt.id, { days: 30 });
Great for
- AI app developers embedding optimization into existing services.
- Platform teams who prefer typed, versioned integration with their infra.
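Optimizations run asynchronously, so a common pattern is to poll until one finishes before fetching variants. This is a generic helper of our own; the status values are illustrative, so check the API reference for the exact ones.

```typescript
// Generic poll helper for waiting on a long-running job. The status
// fetcher is injected, so it works with converra.optimizations.get or
// any other status source. The "completed"/"failed" values are
// assumptions, not confirmed API statuses.
async function pollUntilDone<T extends { status: string }>(
  fetchStatus: () => Promise<T>,
  { intervalMs = 5000, maxAttempts = 60 } = {}
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await fetchStatus();
    if (result.status === "completed" || result.status === "failed") return result;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for optimization to finish");
}

// Usage sketch:
// const done = await pollUntilDone(() => converra.optimizations.get(optimization.id));
// const variants = await converra.optimizations.getVariants(done.id);
```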
SDK surface area
Converra's SDK is intentionally small and predictable:
Prompts
- prompts.list({ page, perPage, status })
- prompts.get(promptId)
- prompts.create({ name, content, tags, objective, ... })
- prompts.update(promptId, { content, status, ... })
- prompts.delete(promptId)
Optimizations
- optimizations.trigger(input: TriggerOptimizationInput)
- optimizations.get(id)
- optimizations.list({ promptId, status })
- optimizations.stop(id, reason?)
- optimizations.getVariants(id)
- optimizations.applyVariant(id, variantId?)
- optimizations.getStreamUrl(id)
Conversations & insights
- conversations.create({ promptId, content, ... })
- conversations.get(id)
- conversations.list({ promptId, status, ... })
- conversations.getInsights(id)
- insights.forPrompt(promptId, { days })
- insights.overall({ days })
Personas & Webhooks
- personas.list({ tags, search, page, perPage })
- personas.create({ name, description, tags })
- webhooks.list({ isActive, page, perPage })
- webhooks.create({ url, events, description? })
- webhooks.delete(id)
Everything is fully typed with exported TypeScript interfaces.
Error handling
import { Converra, ConverraError } from "converra";
try {
await converra.prompts.get("nonexistent_prompt");
} catch (error) {
if (error instanceof ConverraError) {
console.error(`${error.code} (${error.statusCode}): ${error.message}`);
console.error("Details:", error.details);
// error.code examples: TIMEOUT, NETWORK_ERROR, NOT_FOUND, UNAUTHORIZED, VALIDATION_ERROR
} else {
throw error;
}
}
REST / JSON-RPC for any stack
Use JSON-RPC over HTTPS to call the same tools Converra exposes to MCP clients, from any language and any stack.
curl -X POST https://converra.ai/api/mcp \
-H "Authorization: Bearer sk_your_api_key" \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "trigger_optimization",
"arguments": {
"promptId": "prompt_123",
"variantCount": 3,
"mode": "exploratory",
"intent": {
"targetImprovements": ["clarity", "task completion"],
"variationDegree": "moderate"
}
}
}
}'
Head-to-head testing is also available:
curl -X POST https://converra.ai/api/mcp \
-H "Authorization: Bearer sk_your_api_key" \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "simulate_ab_test",
"arguments": {
"baselinePrompt": "You are a helpful assistant.",
"variantPrompt": "You are an expert who gives precise, actionable answers.",
"personaCount": 3,
"scenarioCount": 5
}
}
}'
Perfect for polyglot environments and custom infra.
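The JSON-RPC envelope is the same for every tool, so the two curl examples above reduce to one builder. The envelope shape (jsonrpc, method "tools/call", params) is taken from those examples; the helper name is ours.

```typescript
// Builds the JSON-RPC 2.0 envelope used by the curl examples above.
// Every Converra tool call uses the same shape; only the tool name
// and its arguments change.
function buildToolCall(id: number, name: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// Usage sketch:
// await fetch("https://converra.ai/api/mcp", {
//   method: "POST",
//   headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
//   body: JSON.stringify(buildToolCall(1, "trigger_optimization", { promptId: "prompt_123" })),
// });
```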
Available tools
Prompts
list_prompts, get_prompt_status, create_prompt, update_prompt
Optimization
trigger_optimization, get_optimization_details, apply_variant, stop_optimization
Simulation
analyze_prompt, simulate_prompt, simulate_ab_test, regression_test, list_personas
Account / insights
get_usage, get_insights, create_webhook, get_stream_url
Requirements
- Node.js 18+ or Python 3.9+ for the SDK.
- A Converra API key (generated from the Integrations page in the app).
- For MCP / JSON-RPC: any environment that can make HTTPS requests.