
Conversations API

Log and retrieve conversations.

Create Conversation

```http
POST /api/v1/conversations
```

Request Body

```json
{
  "promptId": "prompt_123",
  "content": "User: Hello, I need help\nAssistant: Hi! How can I help you today?",
  "status": "completed",
  "llmModel": "gpt-4o"
}
```

Required Fields

| Field | Type | Description |
| --- | --- | --- |
| promptId | string | Associated prompt ID |
| content | string | Conversation transcript (see Content Format) |

Optional Fields

| Field | Type | Description |
| --- | --- | --- |
| status | string | completed, abandoned, in_progress |
| llmModel | string | Model used |
| companyName | string | Customer company name |
| idempotencyKey | string | Unique key to prevent duplicate submissions (see Idempotency) |

Response

```json
{
  "id": "conv_456",
  "promptId": "prompt_123",
  "status": "completed",
  "createdAt": "2025-01-20T14:30:00Z",
  "_links": {
    "appendMessages": {
      "href": "/api/v1/conversations/conv_456/messages",
      "method": "POST",
      "hint": "Append turns to this conversation. Send status: 'completed' on the final call to trigger analysis."
    }
  }
}
```

Status: 200 OK
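The request body above can be assembled from captured turns. A minimal sketch in TypeScript using the plain-text content format; `buildConversationPayload` is a hypothetical helper, not part of any SDK:

```typescript
interface Turn {
  role: "user" | "assistant";
  content: string;
}

interface CreateConversationBody {
  promptId: string;
  content: string;
  status: string;
  llmModel?: string;
}

// Build a Create Conversation payload from a list of captured turns,
// joining them into the "Role: text" plain-text content format.
function buildConversationPayload(
  promptId: string,
  turns: Turn[],
  llmModel?: string,
): CreateConversationBody {
  const label = { user: "User", assistant: "Assistant" } as const;
  const content = turns
    .map((t) => `${label[t.role]}: ${t.content}`)
    .join("\n");
  const payload: CreateConversationBody = {
    promptId,
    content,
    status: "completed",
  };
  if (llmModel) payload.llmModel = llmModel;
  return payload;
}
```

POST the returned object to `/api/v1/conversations` with your usual HTTP client.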

Append Messages

Append turns to an existing conversation. Use this when you want the conversation to appear in Converra mid-flight (long chats, voice, streaming agents) rather than batching the full transcript at the end.

```http
POST /api/v1/conversations/:id/messages
```

Pair this with POST /api/v1/conversations called once with status: "active" to open the conversation. Keep the returned id and append to it as turns happen. Flip status to "completed" on the final call — that's what triggers analysis.
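The open, append, complete sequence can be sketched as follows. `apiFetch` stands in for whatever HTTP helper you use; it is injected here (a hypothetical signature, not an SDK function) so the call ordering is clear:

```typescript
type ApiFetch = (path: string, body: unknown) => Promise<{ id: string }>;

// Open a conversation, stream turns into it, and complete it on the
// final call so analysis is triggered exactly once.
async function streamConversation(
  apiFetch: ApiFetch,
  promptId: string,
  turns: Array<{ role: string; content: string }>,
): Promise<string> {
  // 1. Open the conversation with status "active" and keep the id.
  const { id } = await apiFetch("/api/v1/conversations", {
    promptId,
    content: "",
    status: "active",
  });
  // 2. Append each turn as it happens.
  for (const turn of turns.slice(0, -1)) {
    await apiFetch(`/api/v1/conversations/${id}/messages`, {
      messages: [turn],
      status: "active",
    });
  }
  // 3. Flip status to "completed" on the final call to trigger analysis.
  await apiFetch(`/api/v1/conversations/${id}/messages`, {
    messages: turns.slice(-1),
    status: "completed",
  });
  return id;
}
```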

Request Body

```json
{
  "messages": [
    { "role": "user", "content": "Can you check my order status?" },
    { "role": "assistant", "content": "Sure — what's your order number?" }
  ],
  "status": "active",
  "metadata": { "sessionId": "sess_abc" }
}
```

Fields

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| messages | array | yes | New turns to append. At least one. |
| status | string | no | "active" (default — no change) or "completed" (triggers analysis). Send "completed" exactly once per conversation. |
| metadata | object | no | Shallow-merged into the conversation's existing metadata. |

Each message has the shape: `{ role: "user" | "assistant" | "system" | "tool", content: string, timestamp?: string, model?: string, tokens?: number, toolCalls?: ToolCall[], usage?: { promptTokens, completionTokens }, latencyMs?: number, metadata?: object }`.

The tool role represents a tool/function-call result. For tool messages, content may be empty — the call details go in toolCalls. All other roles require non-empty content.
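The rule above can be expressed as a small pre-submit check; `validateMessage` is an illustrative helper, not part of the API surface:

```typescript
interface IncomingMessage {
  role: "user" | "assistant" | "system" | "tool";
  content: string;
  toolCalls?: unknown[];
}

// Returns an error string for an invalid message, or null if it is valid.
// Tool messages may have empty content (the detail lives in toolCalls);
// every other role requires non-empty content.
function validateMessage(msg: IncomingMessage): string | null {
  if (msg.role === "tool") {
    return null;
  }
  if (msg.content.trim().length === 0) {
    return `role "${msg.role}" requires non-empty content`;
  }
  return null;
}
```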

Response

```json
{
  "id": "conv_456",
  "userId": "user_789",
  "title": "...",
  "content": "User: ...\nAssistant: ...",
  "messages": [ /* full accumulated messages */ ],
  "status": "active",
  "createdAt": "2025-01-20T14:30:00Z",
  "updatedAt": "2025-01-20T14:31:12Z",
  "metadata": { /* merged metadata */ }
}
```

Status: 200 OK

Behavior

  • Atomic append. The underlying update pushes the new messages and recomputes content server-side in the same operation — concurrent appends are safe.
  • Insights trigger. When status transitions to "completed" and the conversation is linked to a prompt (agentId / systemPromptId), insights are generated asynchronously. Conversations created with only a free-text promptName will complete but not be analyzed — link a prompt if you want insights.
  • Webhooks. Fires conversation.updated with the list of changed fields on every append.
  • 404 if the conversation doesn't exist or isn't owned by your account.

List Conversations

```http
GET /api/v1/conversations
```

Query Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| promptId | string | Filter by prompt |
| status | string | Filter by status |
| hasInsights | boolean | Filter to conversations with insights |
| limit | number | Max results (default: 20) |
| offset | number | Pagination offset |

Response

```json
{
  "data": [
    {
      "id": "conv_456",
      "promptId": "prompt_123",
      "status": "completed",
      "hasInsights": true,
      "createdAt": "2025-01-20T14:30:00Z"
    }
  ],
  "pagination": {
    "total": 150,
    "limit": 20,
    "offset": 0
  }
}
```
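To walk the whole collection, advance offset by the number of records received until total is reached. A sketch with the HTTP call abstracted behind an injected `fetchPage` function (a hypothetical signature, so the loop is shown without a real client):

```typescript
interface Page<T> {
  data: T[];
  pagination: { total: number; limit: number; offset: number };
}

// Accumulate every record by paging with limit/offset until the
// reported total is reached or the server returns an empty page.
async function listAll<T>(
  fetchPage: (limit: number, offset: number) => Promise<Page<T>>,
  limit = 20,
): Promise<T[]> {
  const all: T[] = [];
  let offset = 0;
  for (;;) {
    const page = await fetchPage(limit, offset);
    all.push(...page.data);
    offset += page.data.length;
    if (offset >= page.pagination.total || page.data.length === 0) break;
  }
  return all;
}
```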

Get Conversation

```http
GET /api/v1/conversations/:id
```

Query Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| includeInsights | boolean | Include insights (default: true) |

Response

```json
{
  "id": "conv_456",
  "promptId": "prompt_123",
  "content": "User: Hello...\nAssistant: Hi!...",
  "status": "completed",
  "llmModel": "gpt-4o",
  "insights": {
    "sentiment": "positive",
    "taskCompleted": true,
    "topics": ["greeting", "product inquiry"],
    "summary": "User asked about product availability..."
  },
  "createdAt": "2025-01-20T14:30:00Z"
}
```

Get Conversation Insights

```http
GET /api/v1/conversations/:id/insights
```

Response

```json
{
  "conversationId": "conv_456",
  "sentiment": "positive",
  "sentimentScore": 0.82,
  "taskCompleted": true,
  "topics": ["greeting", "product inquiry", "availability"],
  "summary": "User inquired about product availability. Agent provided accurate information and the user was satisfied.",
  "issues": [],
  "createdAt": "2025-01-20T14:35:00Z"
}
```

Content Format

The content field accepts two formats:

JSON Format

A JSON string containing a messages array. This format provides the richest data for insights.

```json
{
  "content": "{\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"},{\"role\":\"assistant\",\"content\":\"Hi! How can I help?\"}]}"
}
```
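In practice this string is produced by applying JSON.stringify to a messages object; a minimal sketch:

```typescript
const messages = [
  { role: "user", content: "Hello" },
  { role: "assistant", content: "Hi! How can I help?" },
];

// The content field is double-encoded: it is itself a JSON string
// containing the messages array.
const body = {
  promptId: "prompt_123",
  content: JSON.stringify({ messages }),
};
```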

Full schema:

```typescript
interface StructuredContent {
  messages: Array<{
    role: 'user' | 'assistant' | 'system';
    content: string;
    timestamp?: string;    // ISO 8601
    model?: string;        // Model used for this response
    tokens?: number;       // Token count
  }>;
  metadata?: {
    userId?: string;       // Your user ID
    sessionId?: string;    // Session identifier
    channel?: string;      // web, mobile, api, etc.
    language?: string;     // ISO language code
    [key: string]: any;    // Custom fields
  };
}
```

Text Format

Plain text with role labels:

```json
{
  "content": "User: Hello\nAssistant: Hi! How can I help?"
}
```

Accepted role labels: `User`/`Assistant`, `Human`/`AI`, `Customer`/`Agent`.
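Going the other direction, from the text format back into a messages array, can be sketched as follows. `parseTranscript` and its continuation-line handling are illustrative assumptions, not documented API behavior:

```typescript
// Map each accepted label (lowercased) to a normalized role.
const ROLE_BY_LABEL: Record<string, "user" | "assistant"> = {
  user: "user", human: "user", customer: "user",
  assistant: "assistant", ai: "assistant", agent: "assistant",
};

// Parse a "Role: text" transcript into a messages array. Lines that do
// not start with a recognized label are treated as continuations of the
// previous turn.
function parseTranscript(
  content: string,
): Array<{ role: "user" | "assistant"; content: string }> {
  const out: Array<{ role: "user" | "assistant"; content: string }> = [];
  for (const line of content.split("\n")) {
    const match = /^(\w+):\s*(.*)$/.exec(line);
    const role = match ? ROLE_BY_LABEL[match[1].toLowerCase()] : undefined;
    if (match && role) {
      out.push({ role, content: match[2] });
    } else if (out.length > 0) {
      out[out.length - 1].content += "\n" + line;
    }
  }
  return out;
}
```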

Content Limits

| Limit | Value | Description |
| --- | --- | --- |
| Max content size | 512 KB | Maximum size of the content field |
| Max messages | 500 | Maximum messages in structured format |
| Max message length | 100 KB | Maximum size per individual message |

Exceeding these limits returns a 413 Payload Too Large error.
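A client-side guard can fail fast before the server responds with 413. This sketch assumes sizes are counted in UTF-8 bytes, which the table does not specify:

```typescript
const MAX_CONTENT_BYTES = 512 * 1024;
const MAX_MESSAGES = 500;
const MAX_MESSAGE_BYTES = 100 * 1024;

// Return a list of limit violations for a content string; an empty
// array means the payload is within the documented limits.
function checkLimits(content: string): string[] {
  const errors: string[] = [];
  const bytes = new TextEncoder().encode(content).length;
  if (bytes > MAX_CONTENT_BYTES) errors.push("content exceeds 512 KB");
  try {
    const parsed = JSON.parse(content);
    const msgs: Array<{ content: string }> = parsed.messages ?? [];
    if (msgs.length > MAX_MESSAGES) errors.push("more than 500 messages");
    for (const m of msgs) {
      if (new TextEncoder().encode(m.content).length > MAX_MESSAGE_BYTES) {
        errors.push("a message exceeds 100 KB");
        break;
      }
    }
  } catch {
    // Plain-text format: only the overall size limit applies.
  }
  return errors;
}
```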

See the Logging Guide for detailed examples.


Idempotency

Prevent duplicate conversation submissions using the idempotencyKey field.

```json
{
  "promptId": "prompt_123",
  "content": "User: Hello\nAssistant: Hi!",
  "idempotencyKey": "session_abc123_1705764600"
}
```

Behavior:

  • If a conversation with the same idempotencyKey already exists, the existing conversation is returned (no duplicate created)
  • Keys are scoped to your account
  • Keys expire after 24 hours

Recommended key formats:

  • {sessionId}_{timestamp} - For session-based logging
  • {externalId} - If you have your own conversation IDs
  • {hash of content} - For content-based deduplication
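The content-based format can be implemented with a hash of the transcript. A sketch using Node's built-in crypto module; the choice of SHA-256 is an assumption, since any stable hash works:

```typescript
import { createHash } from "node:crypto";

// Derive a deterministic idempotency key from the conversation content,
// so retrying the same submission never creates a duplicate.
function idempotencyKeyFromContent(content: string): string {
  return createHash("sha256").update(content, "utf8").digest("hex");
}
```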

Error Responses

Not Found

```json
{
  "error": {
    "code": "NOT_FOUND",
    "message": "Conversation not found"
  }
}
```

Validation Error

```json
{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Invalid request body",
    "details": {
      "promptId": "Required field"
    }
  }
}
```