# LLM Providers
Configure your AI model: use any hosted LLM provider or run models locally.
## Supported Providers
| Provider | Models |
|---|---|
| OpenAI | GPT-4o, o1, o3-mini |
| Anthropic | Claude Opus, Sonnet, Haiku |
| Google Gemini | Gemini Pro, Flash |
| Azure OpenAI | GPT-4o (enterprise compliance) |
| Ollama | Llama, Mistral, Phi, Qwen — any local model |
| Any OpenAI-compatible API | vLLM, LM Studio, Together AI, Groq |
## Configuration
Add your API key in Settings > LLM Configuration. You can switch providers at any time.
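For scripted or headless deployments, the same settings can presumably be supplied as environment variables, mirroring the `MC_`-prefixed variables used for Ollama. A minimal sketch for OpenAI; note that `MC_OPENAI_API_KEY` is an assumed variable name, not confirmed by this page:

```shell
# Hypothetical environment-based configuration for a cloud provider.
# MC_OPENAI_API_KEY is an assumed name; check your deployment docs.
MC_LLM_PROVIDER=openai
MC_OPENAI_API_KEY=sk-...
```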
## Local Models with Ollama
For maximum privacy, run a local LLM with Ollama: no tokens leave your network, and there is no per-query cost beyond the hardware.
```shell
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull llama3
```

Then point MetricChat at the local server:

```shell
# Point MetricChat to Ollama
MC_LLM_PROVIDER=ollama
MC_OLLAMA_URL=http://localhost:11434
```
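If Ollama runs on a different machine (a shared GPU box, for example), the same variables can point MetricChat at it instead of localhost; the hostname below is a placeholder, not a real endpoint:

```shell
# Point MetricChat at a remote Ollama instance.
# gpu-box.internal is a hypothetical hostname; 11434 is Ollama's default port.
MC_LLM_PROVIDER=ollama
MC_OLLAMA_URL=http://gpu-box.internal:11434
```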