LLM Configuration
Model name in LiteLLM format (e.g., openai/gpt-5.4, anthropic/claude-sonnet-4-6).
API key for your LLM provider. Not required for local models or cloud-provider auth (Vertex AI, AWS Bedrock).
Custom API base URL. Also accepts OPENAI_API_BASE, LITELLM_BASE_URL, or OLLAMA_API_BASE.
Request timeout in seconds for LLM calls.
Maximum number of retries for LLM API calls on transient failures.
Control thinking effort for reasoning models. Valid values: none, minimal, low, medium, high, xhigh. Defaults to medium for quick scan mode.
Timeout in seconds for memory compression operations (context summarization).
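Taken together, the LLM settings above might be exported as in this minimal shell sketch. Only the alternate base-URL names (OPENAI_API_BASE, LITELLM_BASE_URL, OLLAMA_API_BASE) appear in the text above; the other variable names here are assumptions for illustration.

```shell
# Variable names other than OPENAI_API_BASE are assumptions, not documented names.
export STRIX_LLM="openai/gpt-5.4"                   # model in LiteLLM format
export LLM_API_KEY="sk-placeholder"                 # omit for local models or cloud-provider auth
export OPENAI_API_BASE="http://localhost:11434/v1"  # documented alternate for a custom base URL
export LLM_REASONING_EFFORT="medium"                # none|minimal|low|medium|high|xhigh
```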
Optional Features
API key for Perplexity AI. Enables real-time web search during scans for OSINT and vulnerability research.
Disable browser automation tools.
Global telemetry default toggle. Set to 0, false, no, or off to disable both PostHog and OTEL unless overridden by the per-channel flags below.
Enable/disable OpenTelemetry run observability independently. When unset, falls back to STRIX_TELEMETRY.
Enable/disable PostHog product telemetry independently. When unset, falls back to STRIX_TELEMETRY.
OTLP/Traceloop base URL for remote OpenTelemetry export. If unset, Strix keeps traces local only.
API key used for remote trace export. Remote export is enabled only when both TRACELOOP_BASE_URL and TRACELOOP_API_KEY are set.
Optional custom OTEL headers (a JSON object or key=value,key2=value2). Useful for Langfuse or custom/self-hosted OTLP gateways.

Docker Configuration
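The telemetry toggles and Traceloop export variables described in the previous section combine as in this shell sketch (placeholder values; TRACELOOP_HEADERS as the name of the custom-headers variable is an assumption):

```shell
# Opt out of all telemetry (PostHog and OTEL):
export STRIX_TELEMETRY=0

# ...or leave telemetry on and enable remote trace export instead.
# Remote export requires BOTH of the following to be set:
export TRACELOOP_BASE_URL="https://otlp.example.com"
export TRACELOOP_API_KEY="placeholder-key"
# Custom OTEL headers, e.g. for Langfuse; this variable name is an assumption:
export TRACELOOP_HEADERS="x-example-header=value"
```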
Docker image to use for the sandbox container.
Docker daemon socket path. Use for remote Docker hosts or custom configurations.
Runtime backend for the sandbox environment.
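For a remote Docker host, the daemon socket is conventionally pointed at the remote endpoint via the standard Docker client variable; whether Strix reads DOCKER_HOST under exactly that name is an assumption.

```shell
# Standard Docker client socket settings (the exact variable name Strix reads
# is an assumption here):
export DOCKER_HOST="unix:///var/run/docker.sock"   # local daemon (typical default)
# export DOCKER_HOST="ssh://user@remote-host"      # remote daemon over SSH
```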
Sandbox Configuration
Maximum execution time in seconds for sandbox operations.
Timeout in seconds for connecting to the sandbox container.
Config File
Strix stores configuration in ~/.strix/cli-config.json. You can also specify a custom config file:
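For example, assuming a --config flag (the exact flag name is an assumption; check the CLI help for the actual option):

```shell
# Hypothetical flag name for illustration:
strix --config /path/to/custom-config.json
```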