# Conversations API

Log and retrieve conversations.
## Create Conversation

```http
POST /api/v1/conversations
```

### Request Body
```json
{
  "promptId": "prompt_123",
  "content": "User: Hello, I need help\nAssistant: Hi! How can I help you today?",
  "status": "completed",
  "llmModel": "gpt-4o"
}
```

### Required Fields
| Field | Type | Description |
|---|---|---|
| `promptId` | string | Associated prompt ID |
| `content` | string | Conversation transcript (see Content Format) |
### Optional Fields
| Field | Type | Description |
|---|---|---|
| `status` | string | `completed`, `abandoned`, `in_progress` |
| `llmModel` | string | Model used |
| `companyName` | string | Customer company name |
| `idempotencyKey` | string | Unique key to prevent duplicate submissions (see Idempotency) |
### Response
```json
{
  "id": "conv_456",
  "promptId": "prompt_123",
  "status": "completed",
  "createdAt": "2025-01-20T14:30:00Z",
  "_links": {
    "appendMessages": {
      "href": "/api/v1/conversations/conv_456/messages",
      "method": "POST",
      "hint": "Append turns to this conversation. Send status: 'completed' on the final call to trigger analysis."
    }
  }
}
```

Status: `200 OK`
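The request above can be sketched client-side as follows. The validation helper and the bearer-token `Authorization` header are illustrative assumptions, not documented behavior; adjust the auth scheme to whatever your account uses.

```typescript
// Sketch of a Create Conversation call. `buildCreateBody` enforces the
// required fields from the tables above; the Authorization header is an
// assumed bearer-token scheme, not taken from this page.
type ConversationStatus = "completed" | "abandoned" | "in_progress";

interface CreateConversationBody {
  promptId: string;
  content: string;
  status?: ConversationStatus;
  llmModel?: string;
  companyName?: string;
  idempotencyKey?: string;
}

function buildCreateBody(input: Partial<CreateConversationBody>): CreateConversationBody {
  // Both required fields must be present and non-empty.
  if (!input.promptId) throw new Error("promptId is required");
  if (!input.content) throw new Error("content is required");
  return { ...input, promptId: input.promptId, content: input.content };
}

async function createConversation(
  baseUrl: string,
  apiKey: string,
  input: Partial<CreateConversationBody>,
): Promise<{ id: string }> {
  const res = await fetch(`${baseUrl}/api/v1/conversations`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // assumption: adjust to your auth scheme
    },
    body: JSON.stringify(buildCreateBody(input)),
  });
  if (!res.ok) throw new Error(`Create failed: ${res.status}`);
  return res.json();
}
```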
## Append Messages
Append turns to an existing conversation. Use this when you want the conversation to appear in Converra mid-flight (long chats, voice, streaming agents) rather than batching the full transcript at the end.
```http
POST /api/v1/conversations/:id/messages
```

Pair this with `POST /api/v1/conversations` called once with `status: "active"` to open the conversation. Keep the returned `id` and append to it as turns happen. Flip `status` to `"completed"` on the final call; that's what triggers analysis.
### Request Body
```json
{
  "messages": [
    { "role": "user", "content": "Can you check my order status?" },
    { "role": "assistant", "content": "Sure — what's your order number?" }
  ],
  "status": "active",
  "metadata": { "sessionId": "sess_abc" }
}
```

### Fields
| Field | Type | Required | Description |
|---|---|---|---|
| `messages` | array | yes | New turns to append. At least one. |
| `status` | string | no | `"active"` (default; no change) or `"completed"` (triggers analysis). Send `"completed"` exactly once per conversation. |
| `metadata` | object | no | Shallow-merged into the conversation's existing metadata. |
Each message: `{ role: "user" | "assistant" | "system" | "tool", content: string, timestamp?: string, model?: string, tokens?: number, toolCalls?: ToolCall[], usage?: { promptTokens, completionTokens }, latencyMs?: number, metadata?: object }`.

The `tool` role represents a tool/function-call result. For `tool` messages, `content` may be empty; the call details go in `toolCalls`. All other roles require non-empty `content`.
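The message shape above can be written down as a type, with a small check for the content rule. `ToolCall` is typed loosely here because its exact fields aren't documented in this section.

```typescript
// Per-message shape for the append endpoint, as described above.
// ToolCall's fields are not specified on this page, so it is left loose.
type Role = "user" | "assistant" | "system" | "tool";

interface AppendMessage {
  role: Role;
  content: string;
  timestamp?: string; // ISO 8601
  model?: string;
  tokens?: number;
  toolCalls?: Array<Record<string, unknown>>;
  usage?: { promptTokens: number; completionTokens: number };
  latencyMs?: number;
  metadata?: Record<string, unknown>;
}

// Documented rule: only `tool` messages may have empty content.
function validateMessage(m: AppendMessage): void {
  if (m.role !== "tool" && m.content.trim() === "") {
    throw new Error(`${m.role} messages require non-empty content`);
  }
}
```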
### Response
```json
{
  "id": "conv_456",
  "userId": "user_789",
  "title": "...",
  "content": "User: ...\nAssistant: ...",
  "messages": [ /* full accumulated messages */ ],
  "status": "active",
  "createdAt": "2025-01-20T14:30:00Z",
  "updatedAt": "2025-01-20T14:31:12Z",
  "metadata": { /* merged metadata */ }
}
```

Status: `200 OK`
### Behavior

- **Atomic append.** The underlying update pushes the new messages and recomputes `content` server-side in the same operation, so concurrent appends are safe.
- **Insights trigger.** When `status` transitions to `"completed"` and the conversation is linked to a prompt (`agentId`/`systemPromptId`), insights are generated asynchronously. Conversations created with only a free-text `promptName` will complete but not be analyzed; link a prompt if you want insights.
- **Webhooks.** Fires `conversation.updated` with the list of changed fields on every append.
- **404.** Returned if the conversation doesn't exist or isn't owned by your account.
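The open, append, complete flow described above can be sketched with an injected transport so it runs without network access. The empty initial `content` on open is an assumption; the guard in `complete` reflects the documented "send `completed` exactly once" rule.

```typescript
// Sketch of the mid-flight logging flow: open once with status "active",
// append turns as they happen, then complete exactly once. The transport
// is injected so the flow itself is testable offline; auth is out of scope.
type Transport = (path: string, body: unknown) => Promise<any>;

interface Turn { role: string; content: string; }

class ConversationLogger {
  private id: string | null = null;
  private completed = false;

  constructor(private send: Transport, private promptId: string) {}

  // POST /api/v1/conversations once with status "active" to open.
  // Assumption: an empty initial content is acceptable as a placeholder.
  async open(): Promise<string> {
    const conv = await this.send("/api/v1/conversations", {
      promptId: this.promptId,
      content: "",
      status: "active",
    });
    this.id = conv.id;
    return conv.id;
  }

  // Append new turns as they happen (at least one per call).
  async append(messages: Turn[]): Promise<void> {
    if (!this.id) throw new Error("call open() first");
    await this.send(`/api/v1/conversations/${this.id}/messages`, { messages });
  }

  // Flip status to "completed" exactly once to trigger analysis. The final
  // turns ride along, since each append requires at least one message.
  async complete(messages: Turn[]): Promise<void> {
    if (!this.id) throw new Error("call open() first");
    if (this.completed) return; // "completed" must be sent exactly once
    if (messages.length === 0) throw new Error("at least one message required");
    this.completed = true;
    await this.send(`/api/v1/conversations/${this.id}/messages`, {
      messages,
      status: "completed",
    });
  }
}
```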
## List Conversations

```http
GET /api/v1/conversations
```

### Query Parameters
| Parameter | Type | Description |
|---|---|---|
| `promptId` | string | Filter by prompt |
| `status` | string | Filter by status |
| `hasInsights` | boolean | Filter to conversations with insights |
| `limit` | number | Max results (default: 20) |
| `offset` | number | Pagination offset |
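The `limit`/`offset` parameters above can drive a simple walk over every page; the page-fetching function is injected here so the loop itself stays self-contained.

```typescript
// Sketch: walk offset-based pagination until `total` is exhausted.
// The page shape mirrors the list response's `data` + `pagination` fields.
interface Page<T> {
  data: T[];
  pagination: { total: number; limit: number; offset: number };
}

async function listAll<T>(
  fetchPage: (limit: number, offset: number) => Promise<Page<T>>,
  limit = 20,
): Promise<T[]> {
  const all: T[] = [];
  let offset = 0;
  for (;;) {
    const page = await fetchPage(limit, offset);
    all.push(...page.data);
    offset += page.data.length;
    // Stop when we've seen everything, or the server returns an empty page.
    if (offset >= page.pagination.total || page.data.length === 0) break;
  }
  return all;
}
```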
### Response
```json
{
  "data": [
    {
      "id": "conv_456",
      "promptId": "prompt_123",
      "status": "completed",
      "hasInsights": true,
      "createdAt": "2025-01-20T14:30:00Z"
    }
  ],
  "pagination": {
    "total": 150,
    "limit": 20,
    "offset": 0
  }
}
```

## Get Conversation
```http
GET /api/v1/conversations/:id
```

### Query Parameters
| Parameter | Type | Description |
|---|---|---|
| `includeInsights` | boolean | Include insights (default: true) |
### Response
```json
{
  "id": "conv_456",
  "promptId": "prompt_123",
  "content": "User: Hello...\nAssistant: Hi!...",
  "status": "completed",
  "llmModel": "gpt-4o",
  "insights": {
    "sentiment": "positive",
    "taskCompleted": true,
    "topics": ["greeting", "product inquiry"],
    "summary": "User asked about product availability..."
  },
  "createdAt": "2025-01-20T14:30:00Z"
}
```

## Get Conversation Insights
```http
GET /api/v1/conversations/:id/insights
```

### Response
```json
{
  "conversationId": "conv_456",
  "sentiment": "positive",
  "sentimentScore": 0.82,
  "taskCompleted": true,
  "topics": ["greeting", "product inquiry", "availability"],
  "summary": "User inquired about product availability. Agent provided accurate information and the user was satisfied.",
  "issues": [],
  "createdAt": "2025-01-20T14:35:00Z"
}
```

## Content Format
The `content` field accepts two formats:
### Structured Format (Recommended)
JSON string containing a messages array. This format provides the richest data for insights.
```json
{
  "content": "{\"messages\":[{\"role\":\"user\",\"content\":\"Hello\"},{\"role\":\"assistant\",\"content\":\"Hi! How can I help?\"}]}"
}
```

Full schema:
```typescript
interface StructuredContent {
  messages: Array<{
    role: 'user' | 'assistant' | 'system';
    content: string;
    timestamp?: string;  // ISO 8601
    model?: string;      // Model used for this response
    tokens?: number;     // Token count
  }>;
  metadata?: {
    userId?: string;     // Your user ID
    sessionId?: string;  // Session identifier
    channel?: string;    // web, mobile, api, etc.
    language?: string;   // ISO language code
    [key: string]: any;  // Custom fields
  };
}
```

### Text Format
Plain text with role labels:
```json
{
  "content": "User: Hello\nAssistant: Hi! How can I help?"
}
```

Accepted role labels: `User`/`Assistant`, `Human`/`AI`, `Customer`/`Agent`
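A small helper can emit either format from one message list. Note that labeling `system` turns as `System:` in the text format is an assumption, since only the User/Assistant, Human/AI, and Customer/Agent pairs are listed above; the structured format is safer for system turns.

```typescript
// Sketch: produce either accepted content format from one message list.
interface Turn {
  role: "user" | "assistant" | "system";
  content: string;
}

// Structured format: `content` is a JSON *string* containing { messages }.
function toStructuredContent(messages: Turn[]): string {
  return JSON.stringify({ messages });
}

// Text format: role-labeled lines. "System" is an assumed label; the doc
// only lists User/Assistant, Human/AI, and Customer/Agent pairs.
function toTextContent(messages: Turn[]): string {
  const label: Record<Turn["role"], string> = {
    user: "User",
    assistant: "Assistant",
    system: "System",
  };
  return messages.map((m) => `${label[m.role]}: ${m.content}`).join("\n");
}
```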
### Content Limits
| Limit | Value | Description |
|---|---|---|
| Max content size | 512 KB | Maximum size of the content field |
| Max messages | 500 | Maximum messages in structured format |
| Max message length | 100 KB | Maximum size per individual message |
Exceeding these limits returns a `413 Payload Too Large` error.
See the Logging Guide for detailed examples.
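A client-side pre-check against these limits can fail fast instead of round-tripping a 413. Measuring sizes as UTF-8 bytes is an assumption about the server's accounting, so treat this as a guard rather than an exact mirror of server validation.

```typescript
// Sketch: pre-validate content against the documented limits.
// Sizes are measured as UTF-8 bytes (an assumption, not documented).
const MAX_CONTENT_BYTES = 512 * 1024; // 512 KB total
const MAX_MESSAGES = 500;             // structured format only
const MAX_MESSAGE_BYTES = 100 * 1024; // 100 KB per message

function utf8Bytes(s: string): number {
  return new TextEncoder().encode(s).length;
}

function checkContentLimits(content: string): void {
  if (utf8Bytes(content) > MAX_CONTENT_BYTES) {
    throw new Error("content exceeds 512 KB");
  }
  try {
    const parsed = JSON.parse(content);
    if (Array.isArray(parsed?.messages)) {
      if (parsed.messages.length > MAX_MESSAGES) {
        throw new Error("more than 500 messages");
      }
      for (const m of parsed.messages) {
        if (utf8Bytes(m.content ?? "") > MAX_MESSAGE_BYTES) {
          throw new Error("a message exceeds 100 KB");
        }
      }
    }
  } catch (e) {
    // Plain-text format isn't JSON; only the total size cap applies.
    if (e instanceof SyntaxError) return;
    throw e;
  }
}
```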
## Idempotency
Prevent duplicate conversation submissions using the `idempotencyKey` field.
```json
{
  "promptId": "prompt_123",
  "content": "User: Hello\nAssistant: Hi!",
  "idempotencyKey": "session_abc123_1705764600"
}
```

Behavior:
- If a conversation with the same `idempotencyKey` already exists, the existing conversation is returned (no duplicate is created)
- Keys are scoped to your account
- Keys expire after 24 hours
Recommended key formats:
- `{sessionId}_{timestamp}` - for session-based logging
- `{externalId}` - if you have your own conversation IDs
- `{hash of content}` - for content-based deduplication
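The recommended shapes might be generated like this. The FNV-1a hash is a stand-in chosen to keep the sketch dependency-free; any stable hash (SHA-256, for instance) works just as well for content-based keys.

```typescript
// Sketch: generate idempotency keys in the recommended shapes.

// {sessionId}_{timestamp} - for session-based logging.
function sessionKey(sessionId: string, timestampMs: number): string {
  return `${sessionId}_${timestampMs}`;
}

// {hash of content} - for content-based deduplication. FNV-1a 32-bit is
// used here only to avoid dependencies; substitute any stable hash.
function contentKey(content: string): string {
  let h = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < content.length; i++) {
    h ^= content.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime, kept in 32 bits
  }
  return `content_${h.toString(16)}`;
}
```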
## Error Responses

### Not Found
```json
{
  "error": {
    "code": "NOT_FOUND",
    "message": "Conversation not found"
  }
}
```

### Validation Error
```json
{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Invalid request body",
    "details": {
      "promptId": "Required field"
    }
  }
}
```