LLM Configuration
Model name in LiteLLM format (e.g., openai/gpt-5, anthropic/claude-sonnet-4-5).
API key for your LLM provider. Not required for local models or when using cloud-provider authentication (Vertex AI, AWS Bedrock).
Custom API base URL. Also accepts OPENAI_API_BASE, LITELLM_BASE_URL, or OLLAMA_API_BASE.
Request timeout in seconds for LLM calls.
Maximum number of retries for LLM API calls on transient failures.
Controls the thinking effort for reasoning models. Valid values: none, minimal, low, medium, high, xhigh. Defaults to medium for quick scan mode.
Timeout in seconds for memory compression operations (context summarization).
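Putting the LLM settings above together, a typical environment might look like the following. This is a sketch only: the variable names STRIX_LLM, LLM_API_KEY, and LLM_API_BASE are assumptions and may differ in your installation; the values illustrate the formats described above.

```shell
# Hypothetical variable names; values show the formats described in this section.
export STRIX_LLM="openai/gpt-5"                  # model name in LiteLLM format
export LLM_API_KEY="sk-your-key"                 # provider API key; omit for local models
export LLM_API_BASE="http://localhost:11434"     # custom base URL, e.g. a local Ollama server
```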
Optional Features
API key for Perplexity AI. Enables real-time web search during scans for OSINT and vulnerability research.
Disable browser automation tools.
Enable or disable anonymous telemetry. Set to 0, false, no, or off to disable.
Docker Configuration
Docker image to use for the sandbox container.
Docker daemon socket path. Use for remote Docker hosts or custom configurations.
Runtime backend for the sandbox environment.
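The daemon socket setting typically takes the same forms as Docker's own DOCKER_HOST variable. The socket URL formats below are standard Docker; whether the sandbox reads DOCKER_HOST itself or a separate variable is an assumption to verify against your installation.

```shell
# Local daemon via the default Unix socket:
export DOCKER_HOST="unix:///var/run/docker.sock"

# Remote daemon over TCP (commonly port 2375 unencrypted, 2376 with TLS):
# export DOCKER_HOST="tcp://docker-host.example.com:2376"
```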
Sandbox Configuration
Maximum execution time in seconds for sandbox operations.
Timeout in seconds for connecting to the sandbox container.
Config File
Strix stores configuration in ~/.strix/cli-config.json. You can also specify a custom config file.
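Because the default location is a plain JSON file, the stored settings can be read and edited with standard tools. A minimal sketch, using a temporary copy so as not to touch a real ~/.strix directory; the "model" key name is hypothetical:

```shell
# Work on a throwaway copy instead of the real ~/.strix/cli-config.json.
CONFIG=$(mktemp)
printf '{"model": "openai/gpt-5"}\n' > "$CONFIG"   # hypothetical key name
cat "$CONFIG"                                       # inspect the stored settings
rm -f "$CONFIG"
```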