Configuring LLM Providers
The Opik Agent Optimizer uses LiteLLM under the hood, giving you access to 100+ LLM providers with a unified interface. This guide shows you how to configure different providers for both your ChatPrompt (the model that runs your prompt) and the Optimizer (the model that improves your prompt).
Understanding the Two Model Types
When using the Opik Optimizer, there are two distinct models to configure:
- The ChatPrompt model, which runs your prompt in your application
- The Optimizer model, which generates and evaluates improved versions of your prompt
LiteLLM Model Format
All models use the LiteLLM format: provider/model-name
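The provider/model split can be seen by taking a few model identifiers apart (the specific model names here are illustrative):

```python
# LiteLLM model identifiers follow the "provider/model-name" pattern.
model_ids = [
    "openai/gpt-4o",
    "anthropic/claude-3-5-sonnet-20241022",
    "gemini/gemini-1.5-pro",
]

for model_id in model_ids:
    # Everything before the first "/" is the provider; the rest is the model name.
    provider, _, model_name = model_id.partition("/")
    print(f"provider={provider!r}, model={model_name!r}")
```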
Provider Configuration
The most commonly used providers are OpenAI, Anthropic, Google Gemini, Azure OpenAI, Ollama (local), and OpenRouter. Each follows the same pattern: set the provider's API key, then reference a model in LiteLLM format.
OpenAI
Environment Variable:
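OpenAI authentication uses the standard `OPENAI_API_KEY` environment variable. You would normally export it in your shell or load it from a secrets manager; for quick experiments you can also set it from Python before constructing the optimizer (the value below is a placeholder, not a real key):

```python
import os

# Placeholder only; in practice, export OPENAI_API_KEY in your shell
# or load it from a secrets manager rather than hard-coding it.
os.environ.setdefault("OPENAI_API_KEY", "sk-your-key-here")

print("OPENAI_API_KEY is set:", bool(os.environ.get("OPENAI_API_KEY")))
```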
Available Models:
- openai/gpt-4o - Most capable model
- openai/gpt-4o-mini - Fast and cost-effective
- openai/gpt-4-turbo - Previous generation
- openai/o1 - Reasoning model
- openai/o3-mini - Efficient reasoning model
Example:
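A minimal sketch of wiring an OpenAI model into both roles might look like the following. The class and parameter names (`ChatPrompt`, `MetaPromptOptimizer`, `model`) are based on the Opik Agent Optimizer API and may differ slightly in your installed version:

```python
from opik_optimizer import ChatPrompt, MetaPromptOptimizer

# Model that runs your prompt in your application.
prompt = ChatPrompt(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "{question}"},
    ],
    model="openai/gpt-4o-mini",
)

# Model that rewrites and improves the prompt during optimization.
optimizer = MetaPromptOptimizer(model="openai/gpt-4o")
```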
Environment Variables Reference
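As a rough guide, these are the environment variables LiteLLM conventionally reads for the providers above; treat the exact names as assumptions and verify them against the LiteLLM documentation for your version:

```python
import os

# Conventional LiteLLM environment variables per provider (illustrative).
PROVIDER_ENV_VARS = {
    "openai": ["OPENAI_API_KEY"],
    "anthropic": ["ANTHROPIC_API_KEY"],
    "gemini": ["GEMINI_API_KEY"],
    "azure": ["AZURE_API_KEY", "AZURE_API_BASE", "AZURE_API_VERSION"],
    "openrouter": ["OPENROUTER_API_KEY"],
    "ollama": [],  # local server; no API key required by default
}


def missing_vars(provider: str) -> list:
    """Return the expected variables that are not set for a provider."""
    return [v for v in PROVIDER_ENV_VARS.get(provider, []) if not os.environ.get(v)]
```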
Model Parameters
You can pass additional parameters to control model behavior:
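For instance, sampling parameters such as `temperature` and `max_tokens` are typically forwarded to the underlying LiteLLM call. Passing them directly as keyword arguments on `ChatPrompt`, as sketched here, is an assumption; check the API reference for the exact mechanism in your version:

```python
from opik_optimizer import ChatPrompt

prompt = ChatPrompt(
    messages=[{"role": "user", "content": "{question}"}],
    model="openai/gpt-4o-mini",
    # Extra parameters are forwarded to the underlying LiteLLM call.
    temperature=0.2,   # lower temperature for more deterministic output
    max_tokens=512,    # cap the response length
)
```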
Mixing Providers
You can use different providers for the ChatPrompt and Optimizer:
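A sketch of this split, using a small OpenAI model for the application prompt and a more capable Anthropic model for the optimizer (class names assumed as elsewhere on this page):

```python
from opik_optimizer import ChatPrompt, MetaPromptOptimizer

# A small, inexpensive model runs the prompt in your application.
prompt = ChatPrompt(
    messages=[{"role": "user", "content": "{question}"}],
    model="openai/gpt-4o-mini",
)

# A more capable model from a different provider improves the prompt.
optimizer = MetaPromptOptimizer(model="anthropic/claude-3-5-sonnet-20241022")
```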
Recommendation: Use a capable model like gpt-4o or claude-3-5-sonnet for the optimizer, even if your production application uses a smaller model. The optimizer only runs during development, so the cost is minimal compared to the quality improvements you’ll achieve.
Troubleshooting
AuthenticationError: Invalid API key
Ensure your API key is correctly set in the environment:
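A quick way to check is a small helper like this (pure Python; pass whichever variable name your provider requires):

```python
import os


def api_key_present(var_name: str) -> bool:
    """Return True if the environment variable is set and non-empty."""
    return bool(os.environ.get(var_name, "").strip())


# Example check before running the optimizer:
if not api_key_present("OPENAI_API_KEY"):
    print("OPENAI_API_KEY is missing; export it before running the optimizer.")
```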
Model not found
Verify the model name follows the LiteLLM format provider/model-name:
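A minimal shape check can catch the most common mistake, omitting the provider prefix. This only validates the `provider/model-name` form, not whether the model actually exists:

```python
def looks_like_litellm_id(model_id: str) -> bool:
    """Check the 'provider/model-name' shape of a LiteLLM model id."""
    provider, sep, name = model_id.partition("/")
    return bool(provider) and bool(sep) and bool(name)


print(looks_like_litellm_id("openai/gpt-4o"))  # valid: includes the provider prefix
print(looks_like_litellm_id("gpt-4o"))         # invalid: provider prefix missing
```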
Rate limiting errors
If you encounter rate limits, try:
- Reducing n_threads in the optimizer
- Using a model with higher rate limits
- Adding delays between API calls
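For the last point, a simple exponential-backoff wrapper is a common pattern; this is a generic sketch, not part of the Opik API:

```python
import random
import time


def with_backoff(fn, max_retries: int = 5, base_delay: float = 1.0):
    """Call fn, retrying with exponential backoff plus jitter on failure."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Sleep base_delay * (1, 2, 4, ...) seconds, with random jitter.
            time.sleep(base_delay * (2 ** attempt + random.random()))
```

You would wrap each optimizer or model call in `with_backoff` so transient rate-limit errors are retried instead of aborting the run.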
Next Steps
- Learn about Optimization Concepts
- Explore different Optimization Algorithms
- Check out the API Reference
For a complete list of supported providers and models, see the LiteLLM documentation.