Core Concepts

Key concepts behind how Converra automates AI agent improvement.

Agents

An agent in Converra represents an AI agent defined by its system prompt, objectives, constraints, and LLM settings. Agents are:

  • Versioned - Every change is tracked
  • Optimizable - Can be improved through automated testing
  • Measurable - Performance metrics are collected from real conversations
```typescript
// Example agent structure
{
  name: "Customer Support Agent",
  content: "You are a helpful customer support agent...",
  llmModel: "gpt-4o",
  tags: ["support", "production"]
}
```

Agent Systems

An agent system is a set of agents that work together as a multi-step flow (for example: an entry/router agent handing off to specialist agents).

Converra can auto-discover agent systems from imported traces and show:

  • the entry agent
  • the most common paths (agent sequences) and their frequencies
  • the weakest link (lowest-performing agent in the system)
  • a diagnostic, weighted “system score”
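
The discovered-system report above can be sketched as a data shape like the following. The interface and field names are illustrative assumptions, not Converra's actual API:

```typescript
// Hypothetical shape of a discovered agent system report.
// Field names are assumptions, not Converra's actual schema.
interface AgentSystemReport {
  entryAgent: string;
  // most common agent sequences and how often each occurred
  paths: { sequence: string[]; frequency: number }[];
  weakestLink: string; // lowest-performing agent in the system
  systemScore: number; // diagnostic, weighted score
}

const report: AgentSystemReport = {
  entryAgent: "Router",
  paths: [
    { sequence: ["Router", "Billing Specialist"], frequency: 0.62 },
    { sequence: ["Router", "Tech Support"], frequency: 0.38 },
  ],
  weakestLink: "Tech Support",
  systemScore: 71,
};
```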

Flow constraints (what you should expect)

For reliable, bounded simulation, Converra models discovered agent systems with a constrained flow:

  • Branching between steps is supported (based on what we observe in traces).
  • Each run records the path taken so comparisons are apples-to-apples.
  • Some patterns (like unbounded loops/retries or complex parallelism) may not be supported in early versions; in those cases Converra falls back to individual optimization.

These constraints apply to Converra’s simulation model, not your production code.
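
As a rough sketch of what "each run records the path taken" and the fallback behavior could look like (the names and types here are assumptions for illustration, not Converra's API):

```typescript
// Illustrative run record: every simulation run logs the ordered
// agent sequence it followed, so runs can be compared like-for-like.
interface SimulationRun {
  runId: string;
  pathTaken: string[]; // ordered agent sequence observed in this run
  supported: boolean;  // false for patterns like unbounded loops or complex parallelism
}

// Unsupported patterns fall back to optimizing agents individually.
function optimizationScope(run: SimulationRun): "system" | "individual" {
  return run.supported ? "system" : "individual";
}
```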

Optimization

Optimization is the automated process of diagnosing agent failures, generating fixes, and proving them in simulation. It connects to where your agents already live:

  1. Import - Pull agents and traces from LangSmith, Langfuse, or the SDK, or paste them manually
  2. Diagnose & Fix - Identify failure patterns and generate targeted variants
  3. Simulate & Prove - Test fixes against diverse personas and regression scenarios
  4. Select & Deploy - Ship the proven winner back to production
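
The four steps above can be sketched as a simple select-the-winner loop. Everything here is mocked for illustration (scores, variant content, and types are assumptions, not Converra's SDK):

```typescript
// A minimal sketch of the Diagnose -> Simulate -> Select loop.
// Scores and names are mocked; this is not Converra's SDK.
type Variant = { id: string; prompt: string; score: number };

const original: Variant = {
  id: "v0",
  prompt: "You are a helpful customer support agent...",
  score: 0.72, // baseline performance from real traces
};

// 2. Diagnose & Fix: generate targeted variants of the system prompt
const variants: Variant[] = [
  { id: "v1", prompt: original.prompt + " Always confirm the user's issue first.", score: 0 },
  { id: "v2", prompt: original.prompt + " Keep answers under 100 words.", score: 0 },
];

// 3. Simulate & Prove: score each variant against personas (mocked scores)
const scored = variants.map((v, i) => ({ ...v, score: [0.81, 0.69][i] }));

// 4. Select & Deploy: ship a variant only if it beats the original
const winner = [original, ...scored].reduce((a, b) => (b.score > a.score ? b : a));
```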

Optimization Modes

| Mode        | Use Case                                       |
| ----------- | ---------------------------------------------- |
| Exploratory | Quick iteration, finding improvements fast     |
| Validation  | Statistical rigor, production-ready decisions  |
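
One way to picture the trade-off between the two modes is as a configuration knob, where validation runs many more simulations per variant for statistical confidence. The option names and numbers below are illustrative assumptions, not Converra's actual settings:

```typescript
// Illustrative mode configuration; shapes and defaults are assumptions.
type OptimizationMode = "exploratory" | "validation";

interface RunConfig {
  mode: OptimizationMode;
  simulationsPerVariant: number;
}

// Exploratory favors speed; validation favors statistical rigor.
function defaultConfig(mode: OptimizationMode): RunConfig {
  return mode === "exploratory"
    ? { mode, simulationsPerVariant: 10 }
    : { mode, simulationsPerVariant: 100 };
}
```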

Conversations

A conversation is a logged interaction between a user and your AI. Logging conversations enables:

  • Insights generation - Understanding what's working and what isn't
  • Performance tracking - Measuring agent effectiveness over time
  • Optimization signal - Real data to guide improvements
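
A logged conversation might look roughly like the structure below; the field names and metadata are illustrative assumptions, not Converra's actual schema:

```typescript
// Hypothetical shape of a logged conversation.
interface LoggedConversation {
  agentName: string;
  messages: { role: "user" | "assistant"; content: string }[];
  metadata?: {
    completed?: boolean; // did the user's task get done?
    sentiment?: "positive" | "neutral" | "negative";
  };
}

const conversation: LoggedConversation = {
  agentName: "Customer Support Agent",
  messages: [
    { role: "user", content: "My invoice is wrong." },
    { role: "assistant", content: "Sorry about that, let's take a look together." },
  ],
  metadata: { completed: true, sentiment: "neutral" },
};
```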

Personas

Personas are simulated users that test your agents:

  • Frustrated Customer - Tests patience and de-escalation
  • Enterprise Buyer - Tests technical depth
  • First-time User - Tests clarity and onboarding
  • Power User - Tests efficiency

You can create custom personas to match your specific user base.
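
A custom persona definition could look something like this, modeled loosely on the built-in personas above (the shape is an assumption for illustration):

```typescript
// Sketch of a custom persona; fields are illustrative assumptions.
interface Persona {
  name: string;
  description: string;
  tests: string[]; // the agent qualities this persona stresses
}

const customPersona: Persona = {
  name: "Non-native Speaker",
  description: "Writes short, sometimes ambiguous messages in simple English.",
  tests: ["clarity", "patience"],
};
```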

Variants

A variant is an alternative version of your agent's system prompt created during optimization:

  • Variants compete against your original
  • The winner can be deployed automatically or applied manually
  • Previous versions are always preserved
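
The "previous versions are always preserved" behavior can be sketched as an append-only version history, where deploying a winner deactivates (but never deletes) earlier versions. The types and function below are assumptions for illustration:

```typescript
// Sketch of version preservation; not Converra's actual API.
interface AgentVersion {
  version: number;
  content: string;
  active: boolean;
}

// Deploying a winning variant appends a new version and keeps the rest.
function deploy(history: AgentVersion[], newContent: string): AgentVersion[] {
  return [
    ...history.map((v) => ({ ...v, active: false })),
    { version: history.length + 1, content: newContent, active: true },
  ];
}
```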

Insights

Insights are AI-generated analysis of your agent's performance:

  • Task completion rates
  • Sentiment analysis
  • Common topics and issues
  • Improvement recommendations
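
An insights payload mirroring the bullets above might be shaped like this; all field names and values are illustrative assumptions:

```typescript
// Hypothetical insights payload; not Converra's actual schema.
interface Insights {
  taskCompletionRate: number; // 0..1
  sentiment: { positive: number; neutral: number; negative: number };
  topTopics: string[];
  recommendations: string[];
}

const insights: Insights = {
  taskCompletionRate: 0.84,
  sentiment: { positive: 0.55, neutral: 0.3, negative: 0.15 },
  topTopics: ["billing", "password reset"],
  recommendations: ["Add a refund policy summary to the system prompt."],
};
```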

Next Steps