Multimodal evaluations

Evaluate prompts that include images

Opik lets you evaluate multimodal prompts that combine text and images. You can run these experiments straight from the UI or through the SDKs. This page covers both flows, explains which models support image inputs, and shows how to customise model detection.

Online evaluation in the UI

LLM-as-a-Judge experiments in the Opik UI accept image attachments on both the dataset rows and the prompt messages. When you configure an evaluation:

  1. Open Evaluations → LLM-as-a-Judge and click New evaluation.
  2. Choose a vision-capable model (for example, gpt-4o or claude-3-5-sonnet).
  3. Add image URLs or upload files in either the Dataset rows or the Prompt builder.
  4. Launch the evaluation. Opik automatically keeps the images in the judge prompt when the selected model supports them. If the model does not support images, the UI surfaces a warning and flattens the image reference into a text placeholder so the run still completes.

All multimodal traces appear in the evaluation results, so you can inspect exactly what the judge model received.

Using the SDKs

Both the Python and TypeScript SDKs accept OpenAI-style message payloads. Each message can contain either a string or a list of content blocks. Image blocks use the image_url type and can point to an https:// URL or a data:image/...;base64, payload.
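
As a quick reference, the two accepted content shapes look like this (the URLs and wording are placeholders, not part of the API):

# Content can be a plain string ...
text_only = {"role": "user", "content": "Describe the following picture."}

# ... or a list of content blocks that mixes text and images.
with_image = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe the following picture."},
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
        # A data:image/png;base64,... URL works here as well.
    ],
}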

Python example

from opik import Opik
from opik.evaluation import evaluate_prompt, metrics

client = Opik()
dataset = client.get_or_create_dataset("vision_captions")
dataset.insert(
    [
        {
            "input": {
                "image_source": "https://example.com/cat.jpg",
            },
            "reference": "A grey cat sitting on a sofa",
        },
        {
            "input": {
                "image_source": "data:image/png;base64,iVBORw0KGgo...",  # base64 works too
            },
            "reference": "An orange cat playing with a toy",
        },
    ]
)

MESSAGES = [
    {
        "role": "system",
        "content": "You are an assistant that analyses the attached image.",
    },
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe the following picture."},
            {
                "type": "image_url",
                "image_url": {"url": "{{image_source}}", "detail": "high"},
            },
        ],
    },
]

evaluate_prompt(
    dataset=dataset,
    messages=MESSAGES,
    model="gpt-4o-mini",
    scoring_metrics=[metrics.Equals()],  # compares output against the dataset `reference`
)

The evaluator uses LiteLLM-style model identifiers. Opik recognises popular multimodal families (OpenAI GPT-4o, Anthropic Claude 3+, Google Gemini 1.5, Meta Llama 3.2 Vision, Mistral Pixtral, etc.) and treats any model whose name ends with -vision or -vl as vision-capable. Provider prefixes such as anthropic/ are stripped automatically. When a model is not recognised as vision-capable, Opik logs a warning and replaces image blocks with placeholders before making the API call.
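
For example, provider-prefixed identifiers can be passed straight to evaluate_prompt. A minimal sketch reusing the dataset and MESSAGES defined above (the specific Anthropic model name is illustrative):

# Same dataset and MESSAGES as in the Python example; only the model identifier changes.
evaluate_prompt(
    dataset=dataset,
    messages=MESSAGES,
    model="anthropic/claude-3-5-sonnet-20241022",  # provider prefix is stripped during capability detection
    scoring_metrics=[metrics.Equals()],
)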

TypeScript note

TypeScript support for multimodal evaluations is in progress. The TypeScript SDK will expose the same message structure and detection rules; we’ll update this section with a full example once the implementation lands.

Customising model support

If you are experimenting with a new provider, you can extend the registry at runtime:

from opik.evaluation.models import ModelCapabilities

ModelCapabilities.add_vision_model("my-provider/sparrow-vision-beta")

Any subsequent evaluations in that process will treat the custom model as vision-capable.
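
Putting it together, a minimal sketch that registers the custom model and then runs an evaluation with it, reusing the dataset and MESSAGES from the Python example above (sparrow-vision-beta is a hypothetical identifier):

from opik.evaluation.models import ModelCapabilities

# Register once per process, before any evaluations run.
ModelCapabilities.add_vision_model("my-provider/sparrow-vision-beta")

evaluate_prompt(
    dataset=dataset,
    messages=MESSAGES,
    model="my-provider/sparrow-vision-beta",  # now treated as vision-capable
    scoring_metrics=[metrics.Equals()],
)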

FAQ

How do I confirm whether a model supports images?

from opik.evaluation.models import ModelCapabilities

ModelCapabilities.supports_vision("anthropic/claude-3-opus")

If the call returns False, Opik logs a warning and flattens image blocks: the image reference is inserted as plain text and truncated to the first 500 characters to keep prompts manageable.
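
If you would rather switch models than let images be flattened, you can gate the model choice on this check. A minimal sketch, assuming supports_vision behaves as shown above (the first identifier is hypothetical):

from opik.evaluation.models import ModelCapabilities

model = "my-provider/sparrow-vision-beta"  # hypothetical identifier
if not ModelCapabilities.supports_vision(model):
    # Fall back to a known vision-capable model rather than running with text placeholders.
    model = "gpt-4o-mini"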

What kind of image sources can I use?

  • Direct https:// URLs (publicly accessible).
  • Base64 data URLs such as data:image/png;base64,iVBORw0....
  • Optional OpenAI detail fields ("low", "high") are preserved and forwarded when present.
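
If an image only exists on disk, you can build a base64 data URL with the Python standard library and drop it into an image_url block. A minimal sketch (the file path is illustrative):

import base64
import mimetypes

def to_data_url(path: str) -> str:
    """Encode a local image file as a data: URL usable in an image_url block."""
    mime_type, _ = mimetypes.guess_type(path)
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime_type or 'application/octet-stream'};base64,{encoded}"

image_block = {
    "type": "image_url",
    "image_url": {"url": to_data_url("cat.png"), "detail": "high"},
}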

Does this work with LangChain integrations?

Yes. Opik forwards the same OpenAI-style content blocks that LangChain expects, so structured messages with image_url dictionaries continue to work. A simple validation script is shown below:

from langchain_core.messages import HumanMessage, SystemMessage, messages_to_dict
from langchain_core.prompts import ChatPromptTemplate
from opik.evaluation.models.langchain.message_converters import convert_to_langchain_messages

# Directly using LangChain message objects
plain_messages = [
    SystemMessage(content="You are an assistant."),
    HumanMessage(content="Describe the weather in Paris."),
]

convert_to_langchain_messages(messages_to_dict(plain_messages))

# Using a ChatPromptTemplate with multimodal content
chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    (
        "user",
        [
            {"type": "text", "text": "Describe the following image."},
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://python.langchain.com/img/phone_handoff.jpeg",
                    "detail": "high",
                },
            },
        ],
    ),
])

rendered = chat_prompt.invoke({})
messages = messages_to_dict(rendered.messages)

convert_to_langchain_messages(messages)  # round-trip validation