Known Issues

If pyrate-limiter 4.x is installed, you may see TypeError: Limiter.__init__() got an unexpected keyword argument 'raise_when_fail'. That version dropped the legacy flag our optimizer still passes.

Workaround: pin pyrate-limiter to a 3.x release:

$ pip install "pyrate-limiter>=3.0.0,<4.0.0"

Fixed in: 3.0.0 (2026-01-26). Upgrade the SDK to remove the legacy flag entirely.
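If you would rather detect the incompatible release at startup than rely on the pin, a small version guard works. This is a sketch of ours, not part of the SDK:

```python
from importlib.metadata import PackageNotFoundError, version

def pyrate_limiter_is_4x() -> bool:
    """Return True when the installed pyrate-limiter is a 4.x release."""
    try:
        major = int(version("pyrate-limiter").split(".")[0])
    except PackageNotFoundError:
        return False  # not installed, so nothing to guard against
    return major >= 4
```

Call it before constructing the optimizer and fail with a clear message instead of the opaque TypeError.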

The error convert_tqdm_to_rich.<locals>._tqdm_to_track() missing 1 required positional argument: 'iterable' comes from tqdm >= 4.71, which changed the wrapper signature we rely on.

Workaround: pin tqdm to 4.70.0:

$ pip install tqdm==4.70.0

Fixed in: 3.0.0 (2026-01-26).

PydanticSerializationUnexpectedValue is emitted when LiteLLM serializes Message objects with fewer fields than the schema (an upstream change in LiteLLM/Pydantic v2). We suppress the warning because the payload is still valid.

Workaround: avoid the affected LiteLLM builds:

$ pip install --upgrade "litellm<1.81.1"

Fixed in: 3.0.0 (2026-01-26).
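If you are on an affected build and cannot pin, a message-based filter suppresses the same warning locally. This is a sketch, not the SDK's internal mechanism:

```python
import warnings

# Drop only warnings whose message mentions the serialization mismatch;
# other Pydantic warnings still surface normally.
warnings.filterwarnings(
    "ignore",
    message=".*PydanticSerializationUnexpectedValue.*",
)
```

Apply the filter once at process startup, before the first LiteLLM call.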

litellm.InternalServerError: OpenAIException - Connection error. is reproducible against LiteLLM 1.81.* releases, which can break the OpenAI evaluation flow inside Opik Optimizer.

Workaround:

$ pip install --upgrade "litellm<1.81.0"

Fixed in: 3.0.0 (2026-01-26).

Common Errors

This error occurs when you pass an incorrect type to the optimizer’s optimize_prompt() method.

Solution: Ensure you’re using the ChatPrompt class to define your prompt:

from opik_optimizer import ChatPrompt

prompt = ChatPrompt(
    messages=[
        {"role": "system", "content": "Your system prompt here"},
        {"role": "user", "content": "Your user prompt with {variable}"},
    ],
    model="gpt-4",
)

This error occurs when the dataset passed to the optimizer is not a proper Dataset object.

Solution: Use the Dataset class to create your dataset:

import opik

client = opik.Opik()
dataset = client.get_or_create_dataset(name="your-dataset-name")
dataset.insert(
    [
        {"input": "example 1", "output": "expected 1"},
        {"input": "example 2", "output": "expected 2"},
    ]
)

This error occurs when the metric parameter is not callable or doesn’t have the correct signature.

Solution: Ensure your metric is a function that takes dataset_item and llm_output as arguments and returns a ScoreResult:

from opik.evaluation.metrics import ScoreResult

def my_metric(dataset_item, llm_output):
    # Your scoring logic here, e.g. exact match against the expected output
    score = 1.0 if llm_output == dataset_item["output"] else 0.0
    return ScoreResult(
        name="my-metric",
        value=score,
        reason="Explanation for the score",
    )

This error occurs when your prompt template contains placeholders (e.g., {variable}) that don’t match your dataset fields.

Solution: Ensure all placeholders in your prompt match the keys in your dataset:

# Prompt with {question} placeholder
prompt = ChatPrompt(
    user="Answer: {question}",
    model="gpt-4",
)

# The dataset must have a 'question' field
dataset = client.get_or_create_dataset(name="your-dataset-name")
dataset.insert(
    [
        {"question": "What is AI?", "output": "..."},
    ]
)
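To catch mismatches before launching an optimization run, you can extract the placeholder names with the standard library and compare them against a dataset item's keys. The helper below is ours, not an Opik API:

```python
import string

def missing_placeholders(template: str, item: dict) -> set:
    """Return placeholder names used in `template` but absent from `item`."""
    fields = {name for _, name, _, _ in string.Formatter().parse(template) if name}
    return fields - item.keys()

# A matching pair produces no missing names:
missing_placeholders("Answer: {question}", {"question": "What is AI?"})  # -> set()
```

Run it over each dataset item and raise early if the result is non-empty.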

This error occurs when trying to use the GepaOptimizer without the required gepa package installed.

Solution: Install the gepa package:

$ pip install gepa

This error typically occurs when the LLM provider API key is not configured in your environment.

Solution: Set the appropriate environment variable for your LLM provider:

# For OpenAI
$ export OPENAI_API_KEY="your-api-key"

# For Anthropic
$ export ANTHROPIC_API_KEY="your-api-key"

# For other providers, check the LiteLLM documentation
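To fail fast with an actionable message instead of a provider error mid-run, you can check the variable up front. This helper is a sketch and not part of the SDK:

```python
import os

def require_env(var: str = "OPENAI_API_KEY") -> str:
    """Return the key, or raise with a message that names the missing variable."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(
            f"{var} is not set. Export it before running the optimizer."
        )
    return key
```

Call it once at the top of your script, before constructing any optimizer.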