# Converra - Complete Documentation

> Turn agent traces into tested fixes

## What is Converra?

Converra finds the failing step in your AI agent, generates a fix, and validates it with simulation—before anything touches production. It pairs real-time performance tracking with intelligent optimization so prompts improve continuously without engineering upkeep.

The platform:

- Analyzes real production conversations to identify improvement opportunities
- Automatically generates candidate prompt variants based on objectives
- Tests variants through simulated conversations with diverse personas and scenarios
- Measures performance with multi-metric evaluation
- Recommends winners with statistical confidence
- Deploys improvements with one-click approval

## Core Capabilities

### 1. Prompt Analysis

Analyze any prompt to identify structural weaknesses, clarity issues, and improvement opportunities without running simulations.

**What you get:**

- Weakness identification with severity levels
- Strength recognition
- Metrics: clarity, specificity, context richness, potential effectiveness
- Prioritized improvement recommendations

### 2. Simulation Testing

Run prompts against auto-generated or custom personas and scenarios to evaluate real-world performance.

**Features:**

- Auto-generated diverse personas based on prompt context
- Scenario generation covering common and edge cases
- Multi-turn conversation simulation
- Quality scoring on multiple dimensions
- Strengths/weaknesses analysis per conversation

### 3. Head-to-Head Comparison

A/B test two prompt variants under identical conditions to measure lift.

**What you get:**

- Same personas and scenarios for both variants
- Head-to-head pair comparison (see Winner Selection Rules below)
- Lift calculation on all metrics
- Evidence level assessment (insufficient/low/medium/high)
- Clear recommendation: baseline, variant, or inconclusive

### 4. Automated Optimization

Trigger full optimization runs that generate variants, run simulations, and select winners.

**Process:**

1. Analyze the baseline prompt
2. Generate N variants based on intent/goals
3. Simulate all variants against diverse test conditions
4. Aggregate metrics with head-to-head comparison
5. Select a winner based on positive lift above the baseline
6. Present results for approval

### 5. Production Deployment

Apply winning variants to replace the original prompt content, with a full audit trail.

---

## Integration Methods

### Method 1: MCP Server (Recommended for AI Assistants)

Native integration with Claude, Cursor, Windsurf, and any MCP-compatible AI assistant.

**Setup:**

```bash
claude mcp add converra -- npx -y mcp-remote@latest https://converra.ai/api/mcp \
  --header "Authorization:Bearer YOUR_API_KEY"
```

**Available Tools (30+):**

**Prompts:**

- `list_prompts` - List all prompts for the customer
- `get_prompt_status` - Get prompt details, metrics, and active optimization
- `create_prompt` - Create a new prompt
- `update_prompt` - Update prompt content or settings

**Optimization:**

- `trigger_optimization` - Start an optimization with intent and variant count
- `get_optimization_details` - Check progress, variants, and activities
- `stop_optimization` - Stop a running optimization
- `apply_variant` - Deploy the winning variant to production
- `get_variant_details` - Compare all variants with metrics

**Simulation (no optimization required):**

- `analyze_prompt` - Get improvement recommendations
- `simulate_prompt` - Run simulated conversations
- `simulate_ab_test` - A/B simulation test comparing two prompts
- `regression_test` - Test a variant against golden scenarios

**Personas:**

- `list_personas` - View available test personas
- `create_persona` - Create a custom persona for testing

**Conversations:**

- `list_conversations` - List logged conversations
- `get_conversation` - Get conversation details with insights
- `create_conversation` - Log a production conversation for analysis. IMPORTANT: Content must be actual user/AI dialogue (e.g., `"User: Hi\nAI: Hello!"`), NOT system prompts or instructions.

**Insights:**

- `get_insights` - Aggregated performance insights for a prompt

**Account:**

- `get_account` - Organization details and subscription
- `get_usage` - Token usage, optimization counts, limits
- `get_settings` - Account preferences
- `update_settings` - Modify optimization defaults

**Webhooks:**

- `create_webhook` - Subscribe to events
- `list_webhooks` - View configured webhooks
- `delete_webhook` - Remove a webhook

### Method 2: Node.js SDK

```bash
npm install converra
```

**Basic Usage:**

```typescript
import { Converra } from "converra";

const converra = new Converra({
  apiKey: process.env.CONVERRA_API_KEY,
});

// List prompts
const prompts = await converra.prompts.list();

// Create a prompt
const prompt = await converra.prompts.create({
  name: "Customer Support Agent",
  content: "You are a helpful customer support agent...",
  tags: ["support", "production"],
});

// Trigger optimization
const optimization = await converra.optimizations.trigger({
  promptId: prompt.id,
  mode: "exploratory",
  variantCount: 3,
  intent: {
    targetImprovements: ["task completion", "clarity"],
    variationDegree: "moderate",
  },
});

// Check progress
const details = await converra.optimizations.get(optimization.id);

// Apply winner
await converra.optimizations.applyVariant(optimization.id);
```

### Method 3: REST API (JSON-RPC 2.0)

**Endpoint:** `POST https://converra.ai/api/mcp`

**Headers:**

```
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json
```

**Request Format:**

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "tool_name",
    "arguments": { ... }
  }
}
```

**Example - Trigger Optimization:**

```bash
curl -X POST https://converra.ai/api/mcp \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "trigger_optimization",
      "arguments": {
        "promptId": "prompt_123",
        "variantCount": 3,
        "mode": "exploratory",
        "intent": {
          "targetImprovements": ["clarity", "task completion"],
          "variationDegree": "moderate"
        }
      }
    }
  }'
```

**Example - Analyze Prompt:**

```bash
curl -X POST https://converra.ai/api/mcp \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "analyze_prompt",
      "arguments": {
        "prompt": "You are a helpful assistant..."
      }
    }
  }'
```

**Example - A/B Simulation Test:**

```bash
curl -X POST https://converra.ai/api/mcp \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "simulate_ab_test",
      "arguments": {
        "baselinePrompt": "You are a helpful assistant...",
        "variantPrompt": "You are an expert assistant who...",
        "personaCount": 3,
        "scenarioCount": 3
      }
    }
  }'
```

---

## Evaluation Metrics

### Primary Metrics

| Metric | Description | Range |
|--------|-------------|-------|
| Success Score | Overall task completion effectiveness | 0-100 |
| AI Relevancy | Response appropriateness to the user query | 0-100 |
| User Sentiment | Tone and satisfaction indicators | 0-100 |
| Task Completion | Whether the agent accomplished the user's goal | 0-100 |
| Clarity | Comprehensibility of responses | 0-100 |

### Quality Metrics

| Metric | Description |
|--------|-------------|
| Hallucination Score | Detection of fabricated information |
| Repetition Score | Unnecessary repetition in responses |
| Consistency Score | Alignment with previous statements |
| Context Retention | Memory of conversation context |
| Verbosity Score | Appropriateness of response length |
| Truncation Score | Detection of incomplete responses |

### Winner Selection Rules

Per Converra's optimization principles:

- Head-to-head pairs are the single source of truth for lift calculation
- Only pairs where both baseline and variant have valid scores count
- Variants need strictly positive lift above the baseline to win
- A minimum evidence threshold must be met for a confident recommendation
- Ties or smaller deltas keep the existing leader

---

## Optimization Modes

### Exploratory Mode (Default)

- Faster iteration
- 3 personas, 3 scenarios by default
- Good for initial testing and rapid iteration

### Validation Mode

- Statistical rigor
- More personas and scenarios
- Higher confidence thresholds
- Use before production deployment

---

## Webhook Events

Subscribe to real-time notifications:

| Event | Description |
|-------|-------------|
| `optimization.started` | Optimization process began |
| `optimization.completed` | Optimization finished successfully |
| `optimization.stopped` | Optimization was manually stopped |
| `prompt.updated` | Prompt content or settings changed |
| `prompt.deleted` | Prompt was deleted |
| `conversation.created` | New conversation logged |
| `conversation.updated` | Conversation updated |
| `insights.generated` | New insights available |

**Webhook Payload Example:**

```json
{
  "event": "optimization.completed",
  "timestamp": "2024-01-15T10:30:00Z",
  "data": {
    "processId": "opt_123",
    "promptId": "prompt_456",
    "status": "completed",
    "winnerVariantId": "var_789",
    "lift": {
      "successScore": 12.5,
      "aiRelevancy": 8.3
    }
  }
}
```

---

## Use Cases

### Customer Support Agents

Optimize prompts for higher resolution rates, better tone, and fewer escalations.

### Sales Assistants

Improve conversion rates, qualification accuracy, and engagement quality.

### Internal Tools

Reduce errors, improve task completion, and enhance clarity for employee-facing AI.
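As an aside, the Winner Selection Rules above can be sketched in a few lines of code. This is an illustration of the rule only, not Converra's implementation; the `PairResult` and `selectWinner` names are assumptions:

```typescript
// Illustrative sketch of the winner-selection rules: only pairs where
// both sides have valid scores count, the variant needs strictly
// positive average lift, and a tie keeps the existing leader.
interface PairResult {
  baselineScore: number; // e.g. Success Score, 0-100
  variantScore: number;
}

function selectWinner(pairs: PairResult[]): "variant" | "baseline" {
  // Only pairs where both baseline and variant have valid scores count.
  const valid = pairs.filter(
    (p) => Number.isFinite(p.baselineScore) && Number.isFinite(p.variantScore),
  );
  if (valid.length === 0) return "baseline"; // insufficient evidence

  const avgLift =
    valid.reduce((sum, p) => sum + (p.variantScore - p.baselineScore), 0) /
    valid.length;

  // Strictly positive lift is required; a tie keeps the existing leader.
  return avgLift > 0 ? "variant" : "baseline";
}

console.log(
  selectWinner([
    { baselineScore: 70, variantScore: 80 },
    { baselineScore: 60, variantScore: 58 },
  ]),
); // average lift = 4, so "variant"
```

The real platform additionally weighs evidence level and per-metric lift, so treat this only as a mental model for why a variant with zero lift never replaces the baseline.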
### Content Generation

Optimize for quality, consistency, and adherence to brand voice.

### Code Assistants

Improve accuracy, reduce hallucinations, and enhance explanation quality.

---

## Getting Started

1. **Request access** at https://converra.ai
2. **Get an API key** from the dashboard after approval
3. **Add the MCP server** or install the SDK
4. **Create prompts** via the UI or API
5. **Import conversations** for analysis (optional)
6. **Run simulations** or trigger an optimization
7. **Review results** and deploy winners

---

## Links

- Website: https://converra.ai
- Documentation: https://converra.ai/docs
- API Reference: https://converra.ai/docs/api
- Login: https://converra.ai/login
- Waitlist: https://converra.ai/waitlist

## Contact

- Support: support@converra.ai
- Founder: oren@converra.ai

---

## About

Converra is built by a solo founder over 13 months of AI-augmented development. The platform handles the entire optimization lifecycle so engineering teams can focus on building features rather than manually tuning prompts.
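For step 7 of Getting Started ("Review results"), optimizations run asynchronously, so a client typically polls until the run leaves the running state. The sketch below is an assumption for illustration: only `converra.optimizations.get` itself is part of the documented SDK surface; the `waitForCompletion` helper and the `OptimizationStatus` type are hypothetical.

```typescript
// Poll an optimization until it is no longer running, e.g. after
// triggering one with converra.optimizations.trigger. The status
// values mirror the optimization.* webhook events documented above.
type OptimizationStatus = { status: "running" | "completed" | "stopped" };

async function waitForCompletion(
  getStatus: () => Promise<OptimizationStatus>, // e.g. () => converra.optimizations.get(id)
  intervalMs = 2000,
  maxAttempts = 30,
): Promise<OptimizationStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const current = await getStatus();
    if (current.status !== "running") return current; // completed or stopped
    await new Promise<void>((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("optimization did not finish within the polling window");
}
```

Once the helper resolves with `status: "completed"`, the winner can be deployed with `converra.optimizations.applyVariant(...)` as shown in Method 2, or you can subscribe to the `optimization.completed` webhook instead of polling.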