Embeddings turn text into high‑dimensional vectors. Use them for semantic search, clustering, or similarity scoring across your own data.

Prerequisites

  • An API token. See Token Management.
  • Your Zylon hostname (replace {BASE_URL} in the examples).

Before you start

  • Order matters: data[index] aligns with your input order.
  • Store the vectors in your database or vector index to power search and similarity.
  • Use consistent preprocessing (same casing and formatting) for better similarity results.
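Once vectors are stored, similarity is usually scored with cosine similarity. A minimal sketch in Python (the helper name is illustrative, not part of any Zylon SDK):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|).
    # 1.0 means same direction, 0.0 means orthogonal (unrelated).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Identical vectors score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # → 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # → 0.0
```

In practice you would run this (or your vector index's built-in metric) between a query embedding and each stored embedding, then rank by score.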

Create embeddings

Generate embeddings with POST /embeddings. The input field accepts either a single string or an array of strings.

  Input shape         When to use
  Single string       One text at a time.
  Array of strings    Batch inputs; output order matches input order.

Example request:
curl -X POST "https://{BASE_URL}/api/gpt/v1/embeddings" \
  -H "Authorization: Bearer {API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "Summarize Q1 support trends in one vector."
  }'
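The same request can be prepared from Python. The sketch below only builds the URL, headers, and JSON body; the helper name and the example hostname are illustrative, not part of any SDK:

```python
import json

def build_embeddings_request(base_url, api_token, texts):
    """Build the pieces of a POST /embeddings call.

    `texts` may be a single string or a list of strings, matching the
    two input shapes described above. (Illustrative helper, not an SDK.)
    """
    url = f"https://{base_url}/api/gpt/v1/embeddings"
    headers = {
        "Authorization": f"Bearer {api_token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"input": texts})
    return url, headers, body

# Example with a hypothetical hostname and a placeholder token.
url, headers, body = build_embeddings_request(
    "example.zylon.ai", "API_TOKEN", ["first doc", "second doc"]
)
```

Send the result with any HTTP client (e.g. `urllib.request` or `requests`).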
Example response:

{
  "object": "list",
  "model": "private-gpt",
  "data": [
    {
      "index": 0,
      "object": "embedding",
      "embedding": [
        0.697265625,
        0.5078125,
        0.01129150390625,
        0.244873046875,
        -0.285888671875,
        0.058135986328125,
        0.01922607421875,
        -0.11431884765625
      ]
    }
  ]
}
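A response like the one above can be unpacked in a few lines. Sorting by index guarantees that vectors[i] corresponds to input[i] even for batched requests (a sketch, using a truncated copy of the example payload):

```python
# Truncated copy of the example response above.
response = {
    "object": "list",
    "model": "private-gpt",
    "data": [
        {"index": 0, "object": "embedding",
         "embedding": [0.697265625, 0.5078125]},
    ],
}

# Sort by `index` so vectors[i] aligns with the i-th input text.
vectors = [
    item["embedding"]
    for item in sorted(response["data"], key=lambda d: d["index"])
]
print(len(vectors))  # → 1
```

Store each vector alongside the original text so search results can be mapped back to their source documents.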

Errors and edge cases

  • 400: malformed JSON, or input is not a string or array of strings.
  • 401/403: API token missing or invalid.
  • 413: input too large; split it into smaller batches.
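Client code typically branches on these status codes. A small sketch (the helper and its messages are illustrative, not part of any SDK):

```python
def describe_embeddings_error(status):
    """Map the documented status codes to a remediation hint.
    (Illustrative helper, not part of any SDK.)"""
    if status in (401, 403):
        return "check that the API token is present and valid"
    if status == 413:
        return "split the input into smaller batches and retry"
    if status == 400:
        return "check the JSON body and the type of the input field"
    return "unexpected status; inspect the response body"

print(describe_embeddings_error(413))
```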