POST /v1/messages/validate
{
  "valid": true,
  "errors": [],
  "warnings": [],
  "request_summary": {
    "message_count": 3,
    "has_tools": true,
    "stream_enabled": false,
    "tool_choice": "auto"
  }
}

Body

application/json

Request body for chat completion, supporting multi-turn conversations with AI models.

Contains the message history, tool definitions, system prompts, and response configuration. Supports both streaming and non-streaming responses, with optional tool usage, citations, and advanced sampling parameters. Together, these fields define the complete conversation context and the behavior parameters the model uses to generate a response.
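A minimal request body can be assembled as in the following Python sketch. Only 'messages' is required; the model name and correlation ID below are illustrative placeholders, not values from this reference:

```python
import json

# Minimal chat request body for POST /v1/messages/validate.
# Only "messages" is required; every other field is optional.
payload = {
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "Paris."},
        {"role": "user", "content": "And its population?"},
    ],
    "model": "my-model",          # hypothetical name; omit to use the default model
    "stream": False,              # matches the default
    "max_tokens": 256,
    "correlation_id": "req-123",  # hypothetical ID for cross-system tracing
}

body = json.dumps(payload)
```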

messages
MessageInput · object[]
required

Array of messages composing the chat conversation. Each message has a 'role' ('user' or 'assistant') and 'content'.

model
string | null

Model to use for the chat completion. If not provided, the default model will be used.

stream
boolean
default:false

Whether to stream the response back to the client.

tools
ToolSpec · object[] | null

List of tools to use for the response.

tool_choice
object

Defines how the model should choose tools for the response.

tool_context
Tool Context · array

Context to provide to the tools, such as documents, database connection strings, or other data relevant to tool usage.
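A hypothetical tools section of a request body is sketched below. The exact ToolSpec fields (name, description, input_schema) are an assumption modeled on common tool-calling schemas; consult the ToolSpec object for the real shape:

```python
# Sketch of the tool-related fields of a request body.
# The ToolSpec field names here are assumptions, not documented fields.
tools_fragment = {
    "tools": [
        {
            "name": "get_weather",  # hypothetical tool
            "description": "Look up current weather for a city.",
            "input_schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    # "auto" matches the tool_choice value shown in the
    # request_summary of the example response above.
    "tool_choice": "auto",
}
```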

mcp_servers
McpServerConfig · object[]

List of MCP servers to use for tool retrieval. Each server can have its own configuration.

response_format
object

Format of the response. Can be text, json_schema, or a tool call.

system
object

System message configuration, including default prompt and citations.

thinking
object

Thinking configuration, enabling reasoning capabilities for the model.

priority
integer | null

Priority of the request, used when scheduling response generation.

seed
integer | null

Random seed for reproducibility.

min_p
number | null

Minimum probability threshold for token selection. Tokens with probability below this value are filtered out.

top_p
number | null

Nucleus sampling parameter. Only tokens with cumulative probability up to this value are considered.

temperature
number | null

Controls randomness in generation. Higher values make output more random, lower values more deterministic.

top_k
integer | null

Limits token selection to the top K most likely tokens at each step.
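The min_p, top_p, and top_k parameters each restrict the candidate token set before sampling. The following Python sketch illustrates the filtering logic; it is a rough pedagogical model, not the server's actual implementation:

```python
def filter_tokens(probs, min_p=None, top_p=None, top_k=None):
    """Illustrative sketch of min_p / top_p / top_k filtering.

    Takes a list of token probabilities and returns the indices of the
    tokens that survive the filters, most likely first.
    """
    # Rank tokens from most to least likely.
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)

    if top_k is not None:
        # Keep only the K most likely tokens.
        ranked = ranked[:top_k]

    if min_p is not None:
        # Drop tokens whose probability falls below the threshold.
        ranked = [(i, p) for i, p in ranked if p >= min_p]

    if top_p is not None:
        # Nucleus sampling: keep the smallest prefix of tokens whose
        # cumulative probability reaches top_p.
        kept, cumulative = [], 0.0
        for i, p in ranked:
            kept.append((i, p))
            cumulative += p
            if cumulative >= top_p:
                break
        ranked = kept

    return [i for i, _ in ranked]
```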

repetition_penalty
number | null

Penalty applied to tokens that have already appeared in the sequence to reduce repetition.

presence_penalty
number | null

Penalty applied based on whether a token has appeared in the text, encouraging topic diversity.

frequency_penalty
number | null

Penalty applied based on how frequently a token appears in the text, reducing repetitive content.
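These penalties are typically applied as per-token logit adjustments before sampling. The sketch below follows a widely used convention (an assumption about the mechanism, not this API's documented formula):

```python
def penalize(logit, count, presence_penalty=0.0, frequency_penalty=0.0,
             repetition_penalty=1.0):
    """Common-convention sketch of how penalties adjust a token's logit.

    count is how many times the token has already appeared in the text.
    This is an illustrative model, not this API's documented formula.
    """
    if count > 0:
        # Presence penalty: flat cost for having appeared at all.
        logit -= presence_penalty
        # Frequency penalty: cost scales with the number of occurrences.
        logit -= frequency_penalty * count
        # Repetition penalty: divides positive logits, multiplies negative
        # ones, so repeated tokens always become less likely.
        logit = logit / repetition_penalty if logit > 0 else logit * repetition_penalty
    return logit
```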

max_tokens
integer | null

Maximum number of tokens to generate in the response.

correlation_id
string | null

Correlation ID for tracking the request across systems.

Response

Validation completed

Result of chat request validation.

valid
boolean
default:false

Whether the request is valid.

errors
string[] | null

List of validation errors, if any.
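A client can inspect the validation result as follows, using the example response shown at the top of this page:

```python
import json

# The example validation response from the top of this page.
raw = """{
  "valid": true,
  "errors": [],
  "warnings": [],
  "request_summary": {
    "message_count": 3,
    "has_tools": true,
    "stream_enabled": false,
    "tool_choice": "auto"
  }
}"""

result = json.loads(raw)
if not result["valid"]:
    # errors may be null, so guard before iterating.
    for err in result.get("errors") or []:
        print("validation error:", err)
```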