Agent Optimizer LiteLLM Support

LiteLLM Model Support for Optimizers (OpenAI, Azure, Local Models)

Opik Agent Optimizer leverages LiteLLM under the hood to provide comprehensive support for a wide array of Large Language Models (LLMs). This integration means you can use virtually any model provider or even locally hosted models with any of the Opik optimization algorithms.

How it Works

When you initialize an optimizer (e.g., MetaPromptOptimizer, FewShotBayesianOptimizer, MiproOptimizer, EvolutionaryOptimizer), the model parameter (and reasoning_model for MetaPromptOptimizer) accepts a LiteLLM model string. LiteLLM then handles the communication with the specified model provider or local endpoint.
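
For instance, with MetaPromptOptimizer the two parameters can even point at different providers. A minimal sketch (the model strings are illustrative; see the provider sections below):

from opik_optimizer import MetaPromptOptimizer

optimizer = MetaPromptOptimizer(
    model="openai/gpt-4o-mini",                          # LiteLLM string for the main model
    reasoning_model="anthropic/claude-3-opus-20240229",  # LiteLLM string for the reasoning model
    # ... other parameters
)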

This builds on Opik’s existing tracing integration with LiteLLM: not only is your choice of model flexible, but the LLM calls made during optimization can also be traced automatically if you have the OpikLogger callback configured in LiteLLM.
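
For reference, the callback is enabled on the LiteLLM side. A minimal sketch, assuming the LiteLLM Opik integration is available in your environment:

import litellm
from litellm.integrations.opik.opik import OpikLogger

# Register Opik as a LiteLLM callback so LLM calls made during optimization are logged as traces.
litellm.callbacks = [OpikLogger()]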

Key Benefit: You are not locked into a specific model provider. You can experiment with different models, including open-source ones running locally, to find the best fit for your task and budget, all while using the same Opik Agent Optimizer codebase.

Specifying Models

You pass the LiteLLM model identifier string directly to the model parameter in the optimizer’s constructor. Here are some common examples:

OpenAI

from opik_optimizer import MetaPromptOptimizer

optimizer = MetaPromptOptimizer(
    model="openai/gpt-4o-mini",  # Standard OpenAI model
    # ... other parameters
)

Azure OpenAI

from opik_optimizer import FewShotBayesianOptimizer

optimizer = FewShotBayesianOptimizer(
    model="azure/your-azure-deployment-name",  # Your Azure deployment name
    # Ensure your environment variables for Azure (AZURE_API_KEY, AZURE_API_BASE, AZURE_API_VERSION) are set
    # ... other parameters
)
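
If you prefer configuring those variables from Python rather than the shell, a minimal sketch (all values are placeholders for your own Azure OpenAI resource):

import os

# Placeholder credentials; replace with your Azure OpenAI resource details.
os.environ["AZURE_API_KEY"] = "<your-azure-api-key>"
os.environ["AZURE_API_BASE"] = "https://<your-resource-name>.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2024-02-15-preview"  # example API version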

Anthropic

from opik_optimizer import MiproOptimizer

optimizer = MiproOptimizer(
    model="anthropic/claude-3-opus-20240229",
    # Ensure ANTHROPIC_API_KEY is set
    # ... other parameters
)

Google Gemini

from opik_optimizer import EvolutionaryOptimizer

optimizer = EvolutionaryOptimizer(
    model="gemini/gemini-1.5-pro-latest",
    # Ensure GEMINI_API_KEY is set
    # ... other parameters
)

Local Models (e.g., via Ollama)

LiteLLM allows you to connect to models served locally through tools like Ollama.

1. Ensure Ollama is running and the model is served.

   For example, to serve Llama 3:

   ollama serve
   ollama pull llama3

2. Use the LiteLLM convention for Ollama models.

   The format is typically ollama/model_name, or ollama_chat/model_name for chat-tuned models.

from opik_optimizer import MetaPromptOptimizer

optimizer = MetaPromptOptimizer(
    model="ollama_chat/llama3",  # Or "ollama/llama3" if not using the chat API specifically
    # LiteLLM defaults to http://localhost:11434 for Ollama.
    # If Ollama runs elsewhere, set an environment variable like OLLAMA_API_BASE_URL
    # or pass api_base in model_kwargs:
    # model_kwargs={"api_base": "http://my-ollama-host:11434"}
    # ... other parameters
)
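
Before starting a long optimization run, it can help to confirm that LiteLLM reaches the locally served model. A minimal sketch calling LiteLLM directly (the prompt and host are illustrative):

import litellm

# Quick connectivity check against the locally served Ollama model.
response = litellm.completion(
    model="ollama_chat/llama3",
    messages=[{"role": "user", "content": "Say OK if you can read this."}],
    # api_base="http://my-ollama-host:11434",  # only if Ollama is not on the default host/port
)
print(response.choices[0].message.content)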

Other Providers

LiteLLM supports numerous other providers, including Cohere, Mistral AI, Bedrock, Vertex AI, and Hugging Face Inference Endpoints. Refer to the LiteLLM documentation on Providers for the correct model identifier strings and the environment variables required for API keys.

Example for a generic Hugging Face Inference Endpoint:
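
A minimal sketch, assuming LiteLLM's huggingface/ model prefix and an endpoint URL passed via model_kwargs (the model name, endpoint URL, and token variable shown are placeholders to adapt to your setup):

from opik_optimizer import MetaPromptOptimizer

optimizer = MetaPromptOptimizer(
    model="huggingface/meta-llama/Meta-Llama-3-8B-Instruct",
    # For a dedicated Inference Endpoint, point LiteLLM at its URL:
    # model_kwargs={"api_base": "https://<your-endpoint>.endpoints.huggingface.cloud"}
    # Ensure your Hugging Face token is set (e.g. HUGGINGFACE_API_KEY)
    # ... other parameters
)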