Overview

A high-level overview of how to use Opik’s evaluation features, including some code snippets

Evaluation in Opik helps you assess and measure the quality of your LLM outputs across different dimensions. It provides a framework to systematically test your prompts and models against datasets, using various metrics to measure performance.

Opik also provides a set of pre-built metrics for common evaluation tasks. These metrics are designed to help you quickly and effectively gauge the performance of your LLM outputs and include Hallucination, Answer Relevance, Context Precision/Recall, and more. You can learn more about the available metrics in the Metrics Overview section.
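
Pre-built metrics can also be scored on their own, outside of a full evaluation run. The snippet below is a minimal sketch of scoring a single input/output pair with the Hallucination metric; the example texts are illustrative, and the metric relies on an LLM judge under the hood, so a model provider API key needs to be configured:

from opik.evaluation.metrics import Hallucination

# Instantiate the metric and score a single input/output pair.
# The input, output, and context values below are illustrative only.
metric = Hallucination()
score = metric.score(
    input="What is the capital of France?",
    output="The capital of France is Paris.",
    context=["France is a country in Europe. Its capital is Paris."],
)
print(score.value, score.reason)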

If you are interested in evaluating your LLM application in production, please refer to the Online evaluation guide. Online evaluation rules allow you to define LLM-as-a-Judge metrics that automatically score all, or a subset, of your production traces.

New: Multi-Value Feedback Scores - Opik now supports collaborative evaluation where multiple team members can score the same traces and spans. This reduces bias and provides more reliable evaluation results through automatic score aggregation. Learn more →

Running an Evaluation

Each evaluation is defined by a dataset, an evaluation task and a set of evaluation metrics:

  1. Dataset: A dataset is a collection of samples that represent the inputs and, optionally, expected outputs for your LLM application.
  2. Evaluation task: This maps the inputs stored in the dataset to the output you would like to score. The evaluation task is typically a prompt template or the LLM application you are building (see the sketch after this list).
  3. Metrics: The metrics you would like to use when scoring the outputs of your LLM application.
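
To illustrate the second point: when using the general evaluate method, the evaluation task is simply a Python function that receives a dataset item and returns the fields you want to score. A minimal sketch, where your_llm_app is a hypothetical placeholder for the application under test:

def your_llm_app(question: str) -> str:
    # Hypothetical placeholder for the application being evaluated,
    # e.g. an LLM call or a prompt chain
    return f"Answer to: {question}"

def evaluation_task(dataset_item: dict) -> dict:
    # Map the dataset item's "input" field to the "output" field
    # that the scoring metrics will read
    return {"output": your_llm_app(dataset_item["input"])}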

To simplify the evaluation process, Opik provides two main evaluation methods: evaluate_prompt for evaluating prompt templates and a more general evaluate method for more complex evaluation scenarios.

TypeScript SDK Support: This document covers evaluation using Python, but we also offer full support for TypeScript via our dedicated TypeScript SDK. See the TypeScript SDK Evaluation documentation for implementation details and examples.

To evaluate a specific prompt against a dataset:

import opik
from opik.evaluation import evaluate_prompt
from opik.evaluation.metrics import Hallucination

# Create a dataset that contains the samples you want to evaluate
opik_client = opik.Opik()
dataset = opik_client.get_or_create_dataset("Evaluation test dataset")
dataset.insert([
    {"input": "Hello, world!", "expected_output": "Hello, world!"},
    {"input": "What is the capital of France?", "expected_output": "Paris"},
])

# Run the evaluation
result = evaluate_prompt(
    dataset=dataset,
    messages=[{"role": "user", "content": "Translate the following text to French: {{input}}"}],
    model="gpt-3.5-turbo",  # or your preferred model
    scoring_metrics=[Hallucination()]
)
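
For more complex scenarios, the general evaluate method accepts a custom evaluation task. The example below is a sketch that reuses the dataset created above; the logic inside evaluation_task is a placeholder for your own application, and the experiment name is illustrative:

import opik
from opik.evaluation import evaluate
from opik.evaluation.metrics import Hallucination

# Reuse the dataset created above
opik_client = opik.Opik()
dataset = opik_client.get_or_create_dataset("Evaluation test dataset")

def evaluation_task(dataset_item: dict) -> dict:
    # Placeholder for your own LLM application or prompt chain
    output = f"Translated: {dataset_item['input']}"
    return {"output": output}

# Run the evaluation with a custom task
result = evaluate(
    dataset=dataset,
    task=evaluation_task,
    scoring_metrics=[Hallucination()],
    experiment_name="translation-evaluation",  # illustrative name
)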

Analyzing Evaluation Results

Once the evaluation is complete, Opik allows you to manually review the results and compare them with previous iterations.

In the experiment pages, you will be able to:

  1. Review the output provided by the LLM for each sample in the dataset
  2. Deep dive into each sample by clicking on the item ID
  3. Review the experiment configuration to understand how the experiment was run
  4. Compare multiple experiments side by side

Analyzing Evaluation Results in Python

To analyze the evaluation results in Python, you can use the EvaluationResult.aggregate_evaluation_scores() method to retrieve the aggregated score statistics:

import opik
from opik.evaluation import evaluate_prompt
from opik.evaluation.metrics import Hallucination

# Create a dataset that contains the samples you want to evaluate
opik_client = opik.Opik()
dataset = opik_client.get_or_create_dataset("Evaluation test dataset")
dataset.insert([
    {"input": "Hello, world!", "expected_output": "Hello, world!"},
    {"input": "What is the capital of France?", "expected_output": "Paris"},
])

# Run the evaluation
result = evaluate_prompt(
    dataset=dataset,
    messages=[{"role": "user", "content": "Translate the following text to French: {{input}}"}],
    model="gpt-5",  # or the model you want to evaluate
    scoring_metrics=[Hallucination()],
    verbose=2,  # print a detailed statistics report for every metric
)

# Retrieve and print the aggregated score statistics (mean, min, max, std) per metric
scores = result.aggregate_evaluation_scores()
for metric_name, statistics in scores.aggregated_scores.items():
    print(f"{metric_name}: {statistics}")

You can use aggregated scores to compare the performance of different models or different versions of the same model.
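
For example, a minimal sketch of comparing candidate models by running the same evaluation for each and printing the aggregated statistics (the model names are illustrative):

import opik
from opik.evaluation import evaluate_prompt
from opik.evaluation.metrics import Hallucination

opik_client = opik.Opik()
dataset = opik_client.get_or_create_dataset("Evaluation test dataset")

# Run the same evaluation against several candidate models
for model_name in ["gpt-4o-mini", "gpt-4o"]:
    result = evaluate_prompt(
        dataset=dataset,
        messages=[{"role": "user", "content": "Translate the following text to French: {{input}}"}],
        model=model_name,
        scoring_metrics=[Hallucination()],
    )
    scores = result.aggregate_evaluation_scores()
    for metric_name, statistics in scores.aggregated_scores.items():
        # statistics contains the aggregated values (mean, min, max, std) per metric
        print(f"{model_name} - {metric_name}: {statistics}")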

Learn more

You can learn more about Opik’s evaluation features in:

  1. Evaluation concepts
  2. Evaluate prompts
  3. Evaluate threads
  4. Evaluate complex LLM applications
  5. Evaluation metrics
  6. Manage datasets