Customizing models used in the Optimizer
LiteLLM Model Support for Optimizers (OpenAI, Azure, Local Models)
Opik Agent Optimizer
leverages LiteLLM under the hood to provide
comprehensive support for a wide array of Large Language Models (LLMs). This integration means you
can use virtually any model provider or even locally hosted models with any of the Opik optimization
algorithms.
The Agent Optimizer uses LLMs to optimize prompts. As a result, there are two different ways to configure the LLM calling behavior in the Optimizer:
ChatPrompt model: This is the LLM model used in your application to generate the response.
Optimization model: This is the LLM model used by the Opik Optimizer to improve the ChatPrompt.
Configuring ChatPrompt model
How it works
The ChatPrompt has two different parameters that you can use to configure the LLM response:
model: Specify a model to use.
invoke: Specify a function to generate the response; this is more customizable than the model field.
We recommend starting with the model parameter, which allows you to specify a provider or model to use to generate the LLM output. We recommend setting the model string to the model that you use within your LLM application.
If you need to customize the LLM generation behavior further to match what you do in your LLM application, you can use the invoke parameter in the ChatPrompt. This parameter accepts a function that takes the model string, the messages list, and the tools list, and returns the LLM response string.
Specifying a ChatPrompt model
You can configure the ChatPrompt model using the model
parameter:
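For example, a minimal sketch of setting the model on a prompt (the system and user arguments shown here are illustrative and may differ slightly depending on your opik_optimizer version):

```python
from opik_optimizer import ChatPrompt

prompt = ChatPrompt(
    system="You are a helpful assistant. Answer the user's question concisely.",
    user="{question}",
    model="openai/gpt-4o-mini",  # any LiteLLM model string works here
)
```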
The Opik Optimizer supports all LLM providers thanks to LiteLLM. You can learn more about the supported models and providers in the LiteLLM documentation.
Customizing the LLM response
The invoke parameter of the ChatPrompt allows you to customize the LLM response. This lets you fully customize the ChatPrompt behavior; the invoke function should accept the following parameters:
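For example, a minimal sketch of such a function that calls LiteLLM directly (the parameter names follow the description above and may differ slightly from your opik_optimizer version):

```python
import litellm
from opik_optimizer import ChatPrompt

def invoke_llm(model, messages, tools=None):
    # Generate the response the same way your application does; here we call
    # LiteLLM directly, but you could add routing, retries, or post-processing.
    response = litellm.completion(model=model, messages=messages, tools=tools)
    return response.choices[0].message.content

prompt = ChatPrompt(
    system="You are a helpful assistant.",
    user="{question}",
    invoke=invoke_llm,
)
```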
The invoke
method can also be used to optimize prompts that utilize structured outputs:
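For example, a sketch that parses a structured response with Pydantic (this assumes your model provider supports structured outputs through LiteLLM's response_format parameter):

```python
import litellm
from pydantic import BaseModel
from opik_optimizer import ChatPrompt

class Answer(BaseModel):
    answer: str
    confidence: float

def invoke_structured(model, messages, tools=None):
    # Request a structured response and return the final answer as a string.
    response = litellm.completion(
        model=model,
        messages=messages,
        response_format=Answer,
    )
    parsed = Answer.model_validate_json(response.choices[0].message.content)
    return parsed.answer

prompt = ChatPrompt(
    system="You are a helpful assistant.",
    user="{question}",
    invoke=invoke_structured,
)
```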
Configuring Optimization model
How it Works
When you initialize an optimizer (e.g., MetaPromptOptimizer, FewShotBayesianOptimizer, MiproOptimizer, EvolutionaryOptimizer), the model parameter (and reasoning_model for MetaPromptOptimizer) accepts a LiteLLM model string. LiteLLM then handles the communication with the specified model provider or local endpoint.
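For example, a minimal sketch of configuring MetaPromptOptimizer with both parameters (other constructor arguments omitted for brevity):

```python
from opik_optimizer import MetaPromptOptimizer

optimizer = MetaPromptOptimizer(
    model="openai/gpt-4o",            # LiteLLM model string
    reasoning_model="openai/gpt-4o",  # MetaPromptOptimizer only
)
```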
This builds upon Opik’s existing tracing integration with LiteLLM,
ensuring that not only are your optimization processes flexible in model choice, but calls made
during optimization can also be seamlessly tracked if you have the OpikLogger
callback configured
in LiteLLM.
Key Benefit: You are not locked into a specific model provider. You can experiment with different models, including open-source ones running locally, to find the best fit for your task and budget, all while using the same Opik Agent Optimizer codebase.
Specifying Models
You pass the LiteLLM model identifier string directly to the model
parameter in the optimizer’s
constructor. Here are some common examples:
OpenAI
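For example, a sketch using an OpenAI model (requires the OPENAI_API_KEY environment variable to be set; MetaPromptOptimizer is used here as a representative optimizer):

```python
from opik_optimizer import MetaPromptOptimizer

optimizer = MetaPromptOptimizer(model="openai/gpt-4o")
```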
Azure OpenAI
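For example, a sketch for Azure OpenAI. LiteLLM reads the Azure credentials from environment variables, and the model string references your deployment name:

```python
import os
from opik_optimizer import MetaPromptOptimizer

# LiteLLM reads the Azure credentials from these environment variables.
os.environ["AZURE_API_KEY"] = "<your-azure-api-key>"
os.environ["AZURE_API_BASE"] = "https://<your-resource>.openai.azure.com/"
os.environ["AZURE_API_VERSION"] = "2024-02-01"

optimizer = MetaPromptOptimizer(model="azure/<your-deployment-name>")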
Anthropic
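For example, a sketch using an Anthropic model (requires the ANTHROPIC_API_KEY environment variable to be set):

```python
from opik_optimizer import MetaPromptOptimizer

optimizer = MetaPromptOptimizer(model="anthropic/claude-3-5-sonnet-20240620")
```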
Google Gemini
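For example, a sketch using a Gemini model (requires the GEMINI_API_KEY environment variable to be set):

```python
from opik_optimizer import MetaPromptOptimizer

optimizer = MetaPromptOptimizer(model="gemini/gemini-1.5-pro")
```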
Local Models (e.g., via Ollama)
LiteLLM allows you to connect to models served locally through tools like Ollama.
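For example, a sketch assuming an Ollama server is running locally on the default port and the model has already been pulled (e.g. with "ollama pull llama3"):

```python
from opik_optimizer import MetaPromptOptimizer

# LiteLLM defaults to the local Ollama endpoint (http://localhost:11434).
optimizer = MetaPromptOptimizer(model="ollama/llama3")
```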
Other Providers
LiteLLM supports numerous other providers like Cohere, Mistral AI, Bedrock, Vertex AI, Hugging Face Inference Endpoints, etc. Refer to the LiteLLM documentation on Providers for the correct model identifier strings and required environment variables for API keys.