Set the AI preset in your Zylon configuration file using the ai.preset property. The default configuration targets a 24GB GPU.
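For example, to pin this default explicitly (assuming the default corresponds to the baseline-24g preset listed under Base Presets below):

ai:
  preset: "baseline-24g"  # explicit form of the default 24GB configuration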

Base Presets

Base presets provide standard configurations optimized for general-purpose AI workloads.
| Preset | Required GPU Memory | Compatible Hardware Examples |
|---|---|---|
| baseline-24g | 24GB | RTX 4090, L4, RTX 3090 Ti |
| baseline-32g | 32GB | RTX 5090 |
| baseline-48g | 48GB | RTX A6000, A40, L40, L40s |
| baseline-96g | 96GB | A100 80GB, H100, A6000 (dual) |

Configuration Example

ai:
  preset: "baseline-48g"  # For a system with L40s (48GB)
Choose the preset that matches your GPU memory capacity: the preset's required GPU memory must be equal to or lower than your available VRAM.
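As an illustration based on the table above, a 32GB RTX 5090 can run baseline-32g (or the smaller baseline-24g), but not baseline-48g:

ai:
  # RTX 5090 (32GB): baseline-32g fits; baseline-24g also fits.
  # baseline-48g would exceed the available VRAM and should not be used.
  preset: "baseline-32g"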

Alternative Presets

Zylon provides alternative presets with specialized configurations that trade certain capabilities for others. These are optional and should only be used when you have specific requirements that the standard presets do not cover.

Vision-Enabled Alternatives

These presets include specialized computer vision capabilities in the ingestion pipeline, allowing the system to process and understand images, documents, and visual content.
| Preset | Required GPU Memory | Trade-off |
|---|---|---|
| alternatives.baseline-48g-vision | 48GB | Smaller model (Qwen 3 14B) |
| alternatives.baseline-96g-vision | 96GB | Smaller model (Qwen 3 14B) |
When to use vision-enabled presets:
  • Processing scanned documents and understanding slides
  • Analyzing charts, graphs, and visual data
  • Image understanding and description tasks
Configuration Example:
ai:
  preset: "alternatives.baseline-96g-vision"
Using these presets typically requires increasing the inference server's shared memory. See the Shared Memory Configuration section.
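As a rough sketch only: if your inference server runs under Docker Compose, shared memory is usually raised with the shm_size key. The service name and size below are illustrative placeholders, not Zylon's actual values; use the settings given in the Shared Memory Configuration section.

services:
  inference-server:    # placeholder service name
    shm_size: "8gb"    # placeholder value; see Shared Memory Configuration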

Context-Optimized Alternatives

These presets use smaller AI models to provide significantly larger context windows, allowing for extended conversations and complex analysis tasks.
| Preset | Required GPU Memory | Trade-off |
|---|---|---|
| alternatives.baseline-48g-context | 48GB | Smaller model (Qwen 3 14B) |
| alternatives.baseline-96g-context | 96GB | Smaller model (Qwen 3 14B) |
When to use context-optimized presets:
  • Extended conversation sessions
  • Complex analysis requiring large amounts of context
  • Long document processing
Configuration Example:
ai:
  preset: "alternatives.baseline-48g-context"
A larger context window does not always yield better results. Consider your specific use case before selecting a context-optimized preset.

Experimental Presets

Experimental presets are under active development and may not be stable. Use only in testing environments.
Experimental presets provide access to cutting-edge models and configurations that are being evaluated for future releases. These presets may have different performance characteristics or stability compared to baseline presets.
| Preset | Required GPU Memory | Model Family | Status |
|---|---|---|---|
| experimental.mistral-24g | 24GB | Mistral | Beta |
| experimental.mistral-48g | 48GB | Mistral | Beta |
| experimental.gpt-oss-24g | 24GB | GPT-OSS | Beta |
| experimental.gpt-oss-48g | 48GB | GPT-OSS | Beta |
| experimental.gemma-24g | 24GB | Gemma 3 | Alpha |
Configuration Example:
ai:
  preset: "experimental.gpt-oss-24g"

Important Notes About Experimental Presets

  • Experimental presets may be removed or significantly changed between versions
  • Performance and stability are not guaranteed
  • Not recommended for production environments
  • May require additional configuration parameters
  • Support may be limited

Deprecated Presets

Deprecated presets are maintained for backward compatibility only and will not receive updates.
For customers who require older configurations, deprecated presets remain available, but they are not recommended for new installations.
| Preset Pattern | Description | Recommendation |
|---|---|---|
| deprecated.<size>g.20250710 | Pre-Qwen 3 model configurations | Upgrade to current presets when possible |
Configuration Example:
ai:
  preset: "deprecated.24g.20250710"

Migration from Deprecated Presets

If you’re using a deprecated preset, we strongly recommend migrating to current baseline or alternative presets:
  1. Review the base presets to find an equivalent configuration
  2. Test the new preset in a staging environment
  3. Update your production configuration
  4. Monitor performance and adjust if needed
Migration provides access to improved models, better performance, and ongoing support.
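For example, migrating from the deprecated 24GB preset to its current baseline equivalent is a one-line change (preset names taken from the tables above):

# Before
ai:
  preset: "deprecated.24g.20250710"

# After
ai:
  preset: "baseline-24g"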