Strix uses LiteLLM for model compatibility, supporting 100+ LLM providers.

Strix Router

Strix Router is the fastest way to get started: it gives you access to tested models with the highest rate limits and zero data retention.
export STRIX_LLM="strix/gpt-5"
export LLM_API_KEY="your-strix-api-key"
Get your API key at models.strix.ai.

Bring Your Own Key

You can also use any LiteLLM-compatible provider with your own API keys:
Model             | Provider      | Configuration
GPT-5             | OpenAI        | openai/gpt-5
Claude Sonnet 4.6 | Anthropic     | anthropic/claude-sonnet-4-6
Gemini 3 Pro      | Google Vertex | vertex_ai/gemini-3-pro-preview
export STRIX_LLM="openai/gpt-5"
export LLM_API_KEY="your-api-key"

Local Models

Run models locally with Ollama, LM Studio, or any OpenAI-compatible server:
export STRIX_LLM="ollama/llama4"
export LLM_API_BASE="http://localhost:11434"
See the Local Models guide for setup instructions and recommended models.
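Other OpenAI-compatible servers follow the same pattern as the Ollama example above. As a sketch, here is a configuration for LM Studio, whose local server listens on port 1234 by default — the `openai/` prefix with a custom base URL follows LiteLLM's convention for custom OpenAI-compatible endpoints, and the model name is a placeholder you should replace with whatever your server reports:

```shell
# Assumption: LLM_API_BASE is honored for any OpenAI-compatible server,
# as in the Ollama example. "local-model" is a placeholder model name.
export STRIX_LLM="openai/local-model"
export LLM_API_BASE="http://localhost:1234/v1"
```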

Model Format

Use LiteLLM’s provider/model-name format:
openai/gpt-5
anthropic/claude-sonnet-4-6
vertex_ai/gemini-3-pro-preview
bedrock/anthropic.claude-4-5-sonnet-20251022-v1:0
ollama/llama4
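Only the first slash is significant in this format: everything before it is the LiteLLM provider key, and everything after it — including any further dots, colons, or slashes (as in the Bedrock ID above) — is the provider-side model name. A minimal illustration in Python (the helper name is made up for this sketch):

```python
def split_model(model: str) -> tuple[str, str]:
    """Split a LiteLLM-style "provider/model-name" string into its parts.

    Splits on the first "/" only; the model name itself may contain
    dots, colons, or further slashes.
    """
    provider, sep, name = model.partition("/")
    if not sep:
        raise ValueError(f"expected provider/model-name, got {model!r}")
    return provider, name

# Examples from the list above:
print(split_model("openai/gpt-5"))
# → ('openai', 'gpt-5')
print(split_model("bedrock/anthropic.claude-4-5-sonnet-20251022-v1:0"))
# → ('bedrock', 'anthropic.claude-4-5-sonnet-20251022-v1:0')
```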