LiteLLM Model Support for Optimizers (OpenAI, Azure, Local Models)
Opik Agent Optimizer leverages LiteLLM under the hood to provide comprehensive support for a wide array of Large Language Models (LLMs). This integration means you can use virtually any model provider, or even locally hosted models, with any of the Opik optimization algorithms.
How it Works
When you initialize an optimizer (e.g., `MetaPromptOptimizer`, `FewShotBayesianOptimizer`, `MiproOptimizer`, `EvolutionaryOptimizer`), the `model` parameter (and `reasoning_model` for `MetaPromptOptimizer`) accepts a LiteLLM model string. LiteLLM then handles the communication with the specified model provider or local endpoint.
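For example, a `MetaPromptOptimizer` can be pointed at OpenAI models simply by passing the corresponding LiteLLM strings. This is a minimal sketch: the dataset, metric, and other constructor arguments are omitted and may vary between optimizer versions.

```python
from opik_optimizer import MetaPromptOptimizer

# Both models are plain LiteLLM identifier strings.
optimizer = MetaPromptOptimizer(
    model="openai/gpt-4o-mini",       # model used to evaluate candidate prompts
    reasoning_model="openai/gpt-4o",  # model used to propose new prompt candidates
)
```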
This builds upon Opik’s existing tracing integration with LiteLLM, ensuring that not only are your optimization processes flexible in model choice, but calls made during optimization can also be seamlessly tracked if you have the `OpikLogger` callback configured in LiteLLM.
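If you already log LiteLLM calls to Opik, the same setup applies during optimization. Below is a minimal sketch of the callback configuration, assuming the `litellm` and `opik` packages are installed and Opik has been configured.

```python
import litellm
from litellm.integrations.opik.opik import OpikLogger

# Register the Opik callback so LiteLLM calls, including those made
# during optimization runs, are traced in Opik.
litellm.callbacks = [OpikLogger()]
```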
Key Benefit: You are not locked into a specific model provider. You can experiment with different models, including open-source ones running locally, to find the best fit for your task and budget, all while using the same Opik Agent Optimizer codebase.
Specifying Models
You pass the LiteLLM model identifier string directly to the `model` parameter in the optimizer’s constructor. Here are some common examples:
OpenAI
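For OpenAI models, the identifier is the `openai/` prefix plus the model name, and LiteLLM reads the API key from the `OPENAI_API_KEY` environment variable. A sketch using `FewShotBayesianOptimizer` as an illustrative optimizer; other constructor arguments are omitted.

```python
import os

from opik_optimizer import FewShotBayesianOptimizer

os.environ["OPENAI_API_KEY"] = "your-openai-api-key"  # or export it in your shell

optimizer = FewShotBayesianOptimizer(
    model="openai/gpt-4o-mini",  # any OpenAI chat model supported by LiteLLM
)
```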
Azure OpenAI
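For Azure OpenAI, LiteLLM uses the `azure/` prefix followed by your deployment name and reads the endpoint details from environment variables. The deployment name, endpoint, and API version below are placeholders.

```python
import os

from opik_optimizer import MetaPromptOptimizer

os.environ["AZURE_API_KEY"] = "your-azure-api-key"
os.environ["AZURE_API_BASE"] = "https://your-resource-name.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2024-02-01"

optimizer = MetaPromptOptimizer(
    model="azure/your-gpt-4o-deployment",  # "azure/<your deployment name>"
)
```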
Anthropic
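Anthropic models use the `anthropic/` prefix, with the API key read from `ANTHROPIC_API_KEY`. A sketch with an illustrative Claude model name:

```python
import os

from opik_optimizer import MetaPromptOptimizer

os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-api-key"

optimizer = MetaPromptOptimizer(
    model="anthropic/claude-3-5-sonnet-20241022",  # any Claude model supported by LiteLLM
)
```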
Google Gemini
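Gemini models served via Google AI Studio use the `gemini/` prefix, with the API key read from `GEMINI_API_KEY`. A sketch with an illustrative model name:

```python
import os

from opik_optimizer import MetaPromptOptimizer

os.environ["GEMINI_API_KEY"] = "your-gemini-api-key"

optimizer = MetaPromptOptimizer(
    model="gemini/gemini-1.5-pro",  # Google AI Studio models use the "gemini/" prefix
)
```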
Local Models (e.g., via Ollama)
LiteLLM allows you to connect to models served locally through tools like Ollama.
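A sketch assuming Ollama is running locally on its default port (11434) and the model has already been pulled, e.g. with `ollama pull llama3`:

```python
from opik_optimizer import MetaPromptOptimizer

# LiteLLM talks to the local Ollama server (http://localhost:11434 by default),
# so no API key is required.
optimizer = MetaPromptOptimizer(
    model="ollama/llama3",
)
```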
Other Providers
LiteLLM supports numerous other providers, such as Cohere, Mistral AI, Amazon Bedrock, Google Vertex AI, and Hugging Face Inference Endpoints. Refer to the LiteLLM documentation on Providers for the correct model identifier strings and the environment variables required for API keys.
Example for a generic Hugging Face Inference Endpoint:
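A sketch assuming a dedicated Inference Endpoint URL and a Hugging Face token; the endpoint URL and model identifier below are placeholders, and it is assumed that extra LiteLLM parameters such as `api_base` are forwarded by the optimizer to LiteLLM (this may depend on the optimizer version).

```python
import os

from opik_optimizer import MetaPromptOptimizer

os.environ["HUGGINGFACE_API_KEY"] = "your-hf-token"

optimizer = MetaPromptOptimizer(
    # Generic "huggingface/<repo-id>" identifier; the endpoint URL is a placeholder.
    model="huggingface/mistralai/Mistral-7B-Instruct-v0.2",
    # Assumes extra keyword arguments are passed through to LiteLLM.
    api_base="https://your-endpoint-name.endpoints.huggingface.cloud",
)
```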