An artifact is a piece of information (or tool-ready resource) you ingest into Zylon so it can be used as context for queries and operations. Artifacts live inside a collection, and each artifact has a stable artifact ID.
Artifacts can be created from:
Type  Input                 Notes
Text  Raw string            Provide the content directly.
File  Base64-encoded bytes  For files such as .txt or .pdf.
URI   URL                   Zylon fetches and ingests the content.
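As a quick sketch, the three input shapes can be built like this in Python. The `make_input` helper is hypothetical; only the `type` and `value` fields come from the request examples below.

```python
import base64

# Hypothetical helper illustrating the three "input" shapes from the table
# above; the field names ("type", "value") mirror the curl examples below.
def make_input(kind: str, value: str) -> dict:
    if kind == "text":
        return {"type": "text", "value": value}          # raw string
    if kind == "file":
        encoded = base64.b64encode(value.encode()).decode()
        return {"type": "file", "value": encoded}        # Base64-encoded bytes
    if kind == "uri":
        return {"type": "uri", "value": value}           # URL for Zylon to fetch
    raise ValueError(f"unknown input type: {kind}")

print(make_input("file", "Hello from file artifact.\n"))
```

Note that for `file` inputs you send the Base64-encoded bytes, not a file path.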
Prerequisites
An API token. See Token Management.
Your Zylon hostname (replace {BASE_URL} in the examples).
Basic request and response
Use POST /artifacts/ingest when you want the ingest to complete in the same request.
curl -X POST "https://{BASE_URL}/api/gpt/v1/artifacts/ingest" \
-H "Authorization: Bearer {API_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"collection": "{collectionID}",
"artifact": "{artifactID}",
"input": {
"type": "text",
"value": "Release notes: Shipped billing dashboard and fixed three bugs."
},
"metadata": {
"file_name": "release_notes.txt",
"source": "docs"
}
}'
Example response:
{
  "object": "list",
  "model": "private-gpt",
  "data": [
    {
      "object": "ingest.document",
      "artifact": "docs_text_artifact_sync",
      "doc_metadata": {
        "file_name": "release_notes.txt",
        "source": "docs",
        "headers": [],
        "artifact_id": "docs_text_artifact_sync",
        "collection": "019c7569-f1f3-74da-8017-35d4a11f93ca",
        "llm_model": "default",
        "embed_model": "default",
        "hash": "faf6da679814136cb7cef94532397948cd2cd303b25f91a527af53227e9da99d"
      }
    }
  ]
}
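The same sync request can be assembled with the Python standard library. This is a minimal sketch: `build_ingest_request` is a hypothetical helper, and `example.invalid`, `TOKEN`, `col_123`, and `art_456` are placeholder values; the endpoint path and body fields mirror the curl example above.

```python
import json
import urllib.request

# Build (but do not send) the sync ingest request from the example above.
def build_ingest_request(base_url: str, token: str, collection: str,
                         artifact: str, text: str) -> urllib.request.Request:
    payload = {
        "collection": collection,
        "artifact": artifact,
        "input": {"type": "text", "value": text},
        "metadata": {"file_name": "release_notes.txt", "source": "docs"},
    }
    return urllib.request.Request(
        f"https://{base_url}/api/gpt/v1/artifacts/ingest",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_ingest_request("example.invalid", "TOKEN", "col_123", "art_456",
                           "Release notes: Shipped billing dashboard.")
# Send with urllib.request.urlopen(req); the response body has the shape shown above.
print(req.full_url)
```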
Async ingest
Use POST /artifacts/ingest/async for larger payloads or when ingest may take longer. The payload is the same as sync ingest, wrapped in ingest_body, and the response returns a task_id you can poll.
Text input:
curl -X POST "https://{BASE_URL}/api/gpt/v1/artifacts/ingest/async" \
-H "Authorization: Bearer {API_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"ingest_body": {
"collection": "{collectionID}",
"artifact": "{artifactID}",
"input": {
"type": "text",
"value": "Release notes: We shipped a new billing dashboard and fixed three bugs."
},
"metadata": {
"file_name": "release_notes.txt",
"source": "docs"
}
}
}'
File input (Base64-encoded bytes):
curl -X POST "https://{BASE_URL}/api/gpt/v1/artifacts/ingest/async" \
-H "Authorization: Bearer {API_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"ingest_body": {
"collection": "{collectionID}",
"artifact": "{artifactID}",
"input": {
"type": "file",
"value": "SGVsbG8gZnJvbSBmaWxlIGFydGlmYWN0Lgo="
},
"metadata": {
"file_name": "onboarding_checklist.txt",
"source": "docs"
}
}
}'
URI input:
curl -X POST "https://{BASE_URL}/api/gpt/v1/artifacts/ingest/async" \
-H "Authorization: Bearer {API_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"ingest_body": {
"collection": "{collectionID}",
"artifact": "{artifactID}",
"input": {
"type": "uri",
"value": "https://example.com/"
},
"metadata": {
"file_name": "example.html",
"source": "docs"
}
}
}'
{ "task_id" : "f04e0709-131a-4a9d-b884-78872a4e63d7" }
Check task status:
curl "https://{BASE_URL}/api/gpt/v1/artifacts/ingest/async/{task_id}" \
-H "Authorization: Bearer {API_TOKEN}"
{
  "task_id": "f04e0709-131a-4a9d-b884-78872a4e63d7",
  "task_status": "PENDING",
  "task_result": null
}
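A polling loop over the status endpoint can be sketched as follows. `fetch_status` is a stub standing in for a GET to `/artifacts/ingest/async/{task_id}`; replace it with a real HTTP call. The status values follow the documented PENDING/SUCCESS/FAILURE states.

```python
import time

# Poll a task until it reaches a terminal state or we give up.
def poll_task(fetch_status, task_id: str, interval: float = 2.0,
              max_attempts: int = 30) -> dict:
    for _ in range(max_attempts):
        status = fetch_status(task_id)
        if status["task_status"] in ("SUCCESS", "FAILURE"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not settle")

# Stubbed example: the task reports PENDING twice, then SUCCESS.
responses = iter([{"task_status": "PENDING"}] * 2 + [{"task_status": "SUCCESS"}])
result = poll_task(lambda tid: next(responses), "f04e0709", interval=0.0)
print(result["task_status"])  # SUCCESS
```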
Listing ingested artifacts
curl "https://{BASE_URL}/api/gpt/v1/artifacts/list?collection={collectionID}" \
-H "Authorization: Bearer {API_TOKEN}"
Example response:
{
  "object": "list",
  "model": "private-gpt",
  "data": [
    {
      "object": "ingest.document",
      "artifact": "docs_file_artifact",
      "doc_metadata": {
        "file_name": "docs_file_artifact.txt",
        "artifact_id": "docs_file_artifact",
        "collection": "019c7569-f1f3-74da-8017-35d4a11f93ca"
      }
    }
  ]
}
Retrieving content
Full content
Use POST /artifacts/content to fetch complete document text:
curl -X POST "https://{BASE_URL}/api/gpt/v1/artifacts/content" \
-H "Authorization: Bearer {API_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"context_filter": {
"collection": "{collectionID}"
}
}'
Example response:
{
  "data": [
    {
      "artifact_id": "docs_file_artifact",
      "content": "Hello from file artifact.\n\n"
    }
  ]
}
Chunked content (chat-optimized)
Use POST /artifacts/chunked-content to retrieve content split into chat-friendly blocks:
curl -X POST "https://{BASE_URL}/api/gpt/v1/artifacts/chunked-content" \
-H "Authorization: Bearer {API_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"context_filter": {
"collection": "{collectionID}"
},
"max_tokens": 50
}'
Example response:
{
  "data": [
    {
      "artifact_id": "docs_file_artifact",
      "content": [
        { "type": "source", "sources": [{ "object": "context.chunk", "id": "c5f20f0f-56e6-44e0-99e8-0d4ff4bb87eb" }] },
        { "type": "text", "text": "**Context Information**:\nBelow is important information..." }
      ]
    }
  ]
}
Deleting artifacts
Delete (sync):
curl -i -X POST "https://{BASE_URL}/api/gpt/v1/artifacts/delete" \
-H "Authorization: Bearer {API_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"collection": "{collectionID}",
"artifact": "{artifactID}"
}'
Delete (async):
curl -X POST "https://{BASE_URL}/api/gpt/v1/artifacts/delete/async" \
-H "Authorization: Bearer {API_TOKEN}" \
-H "Content-Type: application/json" \
-d '{
"delete_body": {
"collection": "{collectionID}",
"artifact": "{artifactID}"
}
}'
{ "task_id" : "2d48e74f-3eff-474e-b338-a931390ac218" }
Check task status:
curl "https://{BASE_URL}/api/gpt/v1/artifacts/delete/async/{task_id}" \
-H "Authorization: Bearer {API_TOKEN}"
{
  "task_id": "2d48e74f-3eff-474e-b338-a931390ac218",
  "task_status": "PENDING",
  "task_result": null
}
Keep polling until task_status becomes "SUCCESS" or "FAILURE".
If deletion fails while an index is still being populated, retry after a short delay.
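The retry guidance above can be sketched as a small wrapper. `submit_delete` and `poll_until_done` are stubs standing in for POST /artifacts/delete/async and the status-polling endpoint; the retry count and delay are illustrative choices, not documented defaults.

```python
import time

# Resubmit the delete if its task fails (e.g. while an index is still being
# populated), backing off between attempts.
def delete_with_retry(submit_delete, poll_until_done, retries: int = 3,
                      delay: float = 5.0) -> dict:
    for _ in range(retries):
        task_id = submit_delete()
        result = poll_until_done(task_id)
        if result["task_status"] == "SUCCESS":
            return result
        time.sleep(delay)  # short delay before retrying, per the note above
    raise RuntimeError("delete did not succeed after retries")

# Stubbed run: the first attempt fails, the second succeeds.
outcomes = iter([{"task_status": "FAILURE"}, {"task_status": "SUCCESS"}])
final = delete_with_retry(lambda: "task-1", lambda tid: next(outcomes), delay=0.0)
print(final["task_status"])  # SUCCESS
```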
Errors and edge cases
401/403: missing token or insufficient permissions.
404: collection or artifact not found.
409: an ingest/delete task is already in progress.
413: input too large for a single ingest.
422: invalid payload shape.
Async failures: when task_status becomes FAILURE, check task_result (for example, routing_key_error: "ingest.failed" or "delete.failed") and retry after fixing the input.