

Set the AI preset in your Zylon configuration file using the ai.preset property. If no preset is specified, the default configuration targets a 24GB setup.

Base Presets

Base presets provide standard configurations optimized for general-purpose AI workloads.
| Preset | Required GPU Memory | Compatible Hardware Examples | Models |
|---|---|---|---|
| baseline-32g | 32GB | RTX 5090 | cyankiwi/qwen3.5-9b-awq-4bit, mixedbread-ai/mxbai-embed-large-v1 |
| baseline-48g | 48GB | RTX A6000, A40, L40, L40s | txn545/qwen3.5-35b-a3b-nvfp4, mixedbread-ai/mxbai-embed-large-v1 |
| baseline-96g | 96GB | A100 80GB, H100, A6000 (dual) | cyankiwi/qwen3.5-27b-awq-4bit, mixedbread-ai/mxbai-embed-large-v1 |

Configuration Example

ai:
  preset: "baseline-48g"  # For a system with L40s (48GB)
Choose the preset that matches your GPU memory capacity: always select a preset whose required GPU memory is at or below your available VRAM.
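The selection rule above can be sketched programmatically. The memory requirements below mirror the base-preset table; the helper function `pick_base_preset` is illustrative, not part of Zylon.

```python
# Hypothetical helper: pick the largest base preset that fits in available VRAM.
# The preset -> memory mapping mirrors the base-preset table above.
BASE_PRESETS = {
    "baseline-32g": 32,
    "baseline-48g": 48,
    "baseline-96g": 96,
}

def pick_base_preset(vram_gb: int) -> str:
    """Return the base preset with the highest requirement still <= vram_gb."""
    fitting = [(mem, name) for name, mem in BASE_PRESETS.items() if mem <= vram_gb]
    if not fitting:
        raise ValueError(f"No base preset fits in {vram_gb}GB of GPU memory")
    return max(fitting)[1]

print(pick_base_preset(48))  # an L40s system   -> "baseline-48g"
print(pick_base_preset(80))  # an A100 80GB     -> "baseline-48g" (the 96GB preset does not fit)
```

Note that an 80GB card selects baseline-48g, not baseline-96g: a preset's requirement must not exceed available VRAM.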

Alternative Presets

Zylon provides alternative presets that offer specialized configurations by trading certain capabilities for others. These are optional and should only be used when you have specific requirements that differ from the standard presets.

Throughput-Optimized Alternative

This preset uses a smaller, lighter model that generates tokens significantly faster. While it may not match the standard model in quality and reasoning depth, it delivers noticeably faster responses in return.

When to use:
  • You primarily handle simple or straightforward queries
  • You have a high number of concurrent users and need fast response times
  • Generation speed matters more than peak response quality
| Preset | GPU Memory Required | Models |
|---|---|---|
| baseline-throughput-96g | 96GB (A100 80GB, H100, A6000 dual) | txn545/qwen3.5-35b-a3b-nvfp4, mixedbread-ai/mxbai-embed-large-v1 |

Large Model Alternative

This preset uses a larger, more capable model with greater intrinsic knowledge. The trade-off is a reduced context window.

When to use:
  • You need stronger performance on complex or specialized tasks
  • The model’s intrinsic knowledge is a priority
  • You can work with a smaller context window
| Preset | GPU Memory Required | Models |
|---|---|---|
| baseline-large-96g | 96GB (A100 80GB, H100, A6000 dual) | cyankiwi/qwen3.5-122b-A10b-awq-4bit, mixedbread-ai/mxbai-embed-large-v1 |
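Assuming the large-model preset is referenced with the same alternatives. prefix used for other alternative presets (verify the exact key against your Zylon release), selecting it would look like:

```yaml
ai:
  preset: "alternatives.baseline-large-96g"  # assumed name; confirm against your Zylon version
```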

Configuration Example

ai:
  preset: "alternatives.baseline-throughput-96g"
Each alternative preset involves a trade-off. Consider your specific use case — user volume, query complexity, and context needs — before switching from the standard preset.

Experimental Presets

Experimental presets are under active development and may not be stable. Use only in testing environments.
Experimental presets provide access to cutting-edge models and configurations that are being evaluated for future releases. These presets may have different performance characteristics or stability compared to baseline presets.
| Preset | Required GPU Memory | Model Family | Status | Models |
|---|---|---|---|---|
| experimental.mistral-24g | 24GB | Mistral | Beta | mistralai/mistral-small-24b-instruct-2501-awq, mixedbread-ai/mxbai-embed-large-v1 |
| experimental.mistral-48g | 48GB | Mistral | Beta | mistralai/mistral-small-24b-instruct-2501-awq, mixedbread-ai/mxbai-embed-large-v1 |
| experimental.gpt-oss-24g | 24GB | GPT-OSS | Beta | openai/gpt-oss-20b, mixedbread-ai/mxbai-embed-large-v1 |
| experimental.gpt-oss-48g | 48GB | GPT-OSS | Beta | openai/gpt-oss-20b, mixedbread-ai/mxbai-embed-large-v1 |
| experimental.gpt-oss-96g | 96GB | GPT-OSS | Beta | openai/gpt-oss-120b, mixedbread-ai/mxbai-embed-large-v1 |
| experimental.gemma-24g | 24GB | Gemma 3 | Alpha | google/gemma-3n-e4b-it, mixedbread-ai/mxbai-embed-large-v1 |
| mistral-3-instruct-24g | 24GB | Mistral | Alpha | cyankiwi/ministral-3-14b-instruct-2512-awq-4bit, mixedbread-ai/mxbai-embed-large-v1 |
| mistral-3-instruct-48g | 48GB | Mistral | Alpha | cyankiwi/ministral-3-14b-instruct-2512-awq-4bit, mixedbread-ai/mxbai-embed-large-v1 |
| mistral-3-reasoning-24g | 24GB | Mistral | Alpha | cyankiwi/ministral-3-14b-reasoning-2512-awq-4bit, mixedbread-ai/mxbai-embed-large-v1 |
| mistral-3-reasoning-48g | 48GB | Mistral | Alpha | cyankiwi/ministral-3-14b-reasoning-2512-awq-4bit, mixedbread-ai/mxbai-embed-large-v1 |
| nemotron-3-nano-48g | 48GB | Nemotron | Alpha | stelterlab/nvidia-nemotron-3-nano-30b-a3b-awq, mixedbread-ai/mxbai-embed-large-v1 |
| glm-47-flash-32g | 32GB | GLM | Alpha | cyankiwi/glm-4.7-flash-awq-4bit, mixedbread-ai/mxbai-embed-large-v1 |
| glm-47-flash-48g | 48GB | GLM | Alpha | cyankiwi/glm-4.7-flash-awq-4bit, mixedbread-ai/mxbai-embed-large-v1 |

Configuration Example

ai:
  preset: "experimental.gpt-oss-24g"

Important Notes About Experimental Presets

  • Experimental presets may be removed or significantly changed between versions
  • Performance and stability are not guaranteed
  • Not recommended for production environments
  • May require additional configuration parameters
  • Support may be limited

Deprecated Presets

Deprecated presets are maintained for backward compatibility only and will not receive updates.
For customers who require older configurations, deprecated presets remain available but are not recommended for new installations.
| Preset | GPU Memory | Description |
|---|---|---|
| deprecated.24g.20250710 | 24GB | Pre-Qwen 3 configuration |
| deprecated.24g.20260327 | 24GB | Pre-Qwen 3.5 configuration |
| deprecated.32g.20250710 | 32GB | Pre-Qwen 3 configuration |
| deprecated.32g.20260327 | 32GB | Pre-Qwen 3.5 configuration |
| deprecated.48g.20250710 | 48GB | Pre-Qwen 3 configuration |
| deprecated.48g.20260327 | 48GB | Pre-Qwen 3.5 configuration |
| deprecated.48g.20260327-context | 48GB | Pre-Qwen 3.5 context-optimized configuration |
| deprecated.48g.20260327-vision | 48GB | Pre-Qwen 3.5 vision-optimized configuration |
| deprecated.96g.20250710 | 96GB | Pre-Qwen 3 configuration |
| deprecated.96g.20260327 | 96GB | Pre-Qwen 3.5 configuration |
| deprecated.96g.20260327-context | 96GB | Pre-Qwen 3.5 context-optimized configuration |
| deprecated.96g.20260327-vision | 96GB | Pre-Qwen 3.5 vision-optimized configuration |
| deprecated.96g.qwen3-32b-96g | 96GB | Pre-Qwen 3.5 Qwen 3 32B configuration |

Configuration Example

ai:
  preset: "deprecated.24g.20260327"

Migration from Deprecated Presets

If you’re using a deprecated preset, we strongly recommend migrating to current baseline or alternative presets:
  1. Review the base presets to find an equivalent configuration
  2. Test the new preset in a staging environment
  3. Update your production configuration
  4. Monitor performance and adjust if needed
Migration provides access to improved models, better performance, and ongoing support.
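Step 1 of the migration above can be sketched as a lookup from the deprecated.<memory>g.<date> naming pattern shown in the table to the base-preset tiers. The mapping logic is illustrative only, not an official Zylon migration path; note in particular that no current base preset fits in 24GB, so the sketch maps 24GB installations to the smallest base preset, which requires more VRAM.

```python
# Illustrative only: suggest a current base preset for a deprecated one,
# based on the "deprecated.<memory>g.<date>[-variant]" naming pattern.
import re

def suggest_baseline(deprecated_preset: str) -> str:
    """Map a deprecated preset name to the smallest base preset tier that covers it."""
    match = re.match(r"deprecated\.(\d+)g\.", deprecated_preset)
    if not match:
        raise ValueError(f"Not a deprecated preset name: {deprecated_preset!r}")
    mem = int(match.group(1))
    # Base presets come in 32/48/96GB tiers; 24GB configs round up to 32GB
    # (which may require a hardware upgrade or an experimental 24GB preset).
    for tier in (32, 48, 96):
        if mem <= tier:
            return f"baseline-{tier}g"
    return "baseline-96g"

print(suggest_baseline("deprecated.48g.20260327-vision"))  # -> "baseline-48g"
print(suggest_baseline("deprecated.24g.20260327"))         # -> "baseline-32g"
```

After updating ai.preset, follow steps 2-4: validate in staging before rolling the change into production.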