# Converra

> Turn agent traces into tested fixes

## Overview

Converra optimizes AI agent prompts through simulation testing outside your production stack. It runs simulated conversations against diverse personas and scenarios, evaluates performance with multi-metric scoring, and recommends winning variants with statistical confidence.

## Core Capabilities

1. **Prompt Analysis** - Identify weaknesses and get improvement recommendations
2. **Simulation Testing** - Run prompts against auto-generated personas and scenarios
3. **A/B Simulation Testing** - Compare two prompts under identical test conditions
4. **Regression Testing** - Validate variants against golden scenarios before deployment
5. **Automated Optimization** - Generate and test variants, select winners based on lift
6. **One-Click Deployment** - Apply winning variants to production prompts

## Integration Methods

- **MCP Server** - Native integration with Claude, Cursor, and other AI assistants
- **Node.js SDK** - `npm install converra`
- **REST API** - JSON-RPC 2.0 over HTTPS

## Key Metrics

- Success Score (task completion effectiveness)
- AI Relevancy (response appropriateness)
- User Sentiment (tone and satisfaction indicators)
- Clarity (comprehensibility of responses)
- Safety metrics (hallucination and repetition detection)

## Target Users

- Prompt engineers optimizing AI agent behavior
- AI application developers building conversational products
- ML/platform teams maintaining production AI systems
- Product managers tracking AI agent performance

## Getting Started

1. Get an API key at https://converra.ai
2. Add the MCP server or install the SDK
3. Create or import prompts
4. Run simulations or trigger an optimization
5. Review results and deploy winners

## Links

- Website: https://converra.ai
- Documentation: https://converra.ai/docs
- Login: https://converra.ai/login
- Contact: support@converra.ai

## Full Documentation

For comprehensive API documentation and integration guides, see: https://converra.ai/llms-full.txt
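The REST API speaks JSON-RPC 2.0 over HTTPS, so every request uses the standard envelope defined by that spec. As a minimal sketch, the helper below builds such an envelope; the method name `simulations.run` and its params are hypothetical placeholders for illustration, not confirmed Converra method names.

```javascript
// Build a JSON-RPC 2.0 request envelope (per the JSON-RPC 2.0 spec).
// The method name and params passed in are up to the caller; the
// "simulations.run" example below is a hypothetical placeholder.
function buildJsonRpcRequest(method, params, id) {
  return {
    jsonrpc: "2.0", // literal version string required by the spec
    method,         // remote procedure name
    params,         // structured arguments (object or array)
    id,             // client-chosen id, echoed back in the response
  };
}

// Example: a hypothetical simulation-run call.
const request = buildJsonRpcRequest(
  "simulations.run",
  { promptId: "prompt_123", personas: 10 },
  1
);

console.log(JSON.stringify(request));
```

The envelope would then be POSTed to the API endpoint as JSON; see https://converra.ai/llms-full.txt for the actual method catalog.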
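The Automated Optimization capability "selects winners based on lift" with "statistical confidence". A generic sketch of that idea, not Converra's actual scoring code, is to compare the success rates of two prompt variants with a two-proportion z-test; the function and thresholds below are illustrative assumptions.

```javascript
// Illustrative winner selection: compare success rates of two prompt
// variants with a two-proportion z-test. This sketches "lift with
// statistical confidence" generically; it is not Converra's implementation.
function compareVariants(successA, totalA, successB, totalB) {
  const pA = successA / totalA;
  const pB = successB / totalB;
  const lift = (pB - pA) / pA; // relative improvement of B over A

  // Pooled proportion and standard error under the null hypothesis.
  const pooled = (successA + successB) / (totalA + totalB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  const z = se === 0 ? 0 : (pB - pA) / se;

  return {
    lift,
    z,
    // |z| > 1.96 corresponds to p < 0.05, two-sided.
    significant: Math.abs(z) > 1.96,
  };
}

// Variant B succeeds in 75/100 simulations vs. 60/100 for variant A:
// a 25% relative lift, significant at the 5% level.
const result = compareVariants(60, 100, 75, 100);
console.log(result);
```

Running identical scenarios and personas against both variants (as in A/B Simulation Testing) is what makes a paired comparison like this meaningful.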