Define metrics

Metrics drive optimizer decisions. This guide highlights the fastest way to pick proven presets from Opik’s evaluation catalog, then shows how to extend them when your use case demands it. If you need the full theory, see Evaluation concepts and the metrics overview.

Metric anatomy

A metric is a callable with the signature (dataset_item, llm_output) -> ScoreResult | float. Use ScoreResult to attach names and reasons.

from opik.evaluation.metrics.score_result import ScoreResult

def short_answer(item, output):
    # Score 1.0 when the model's answer stays under 200 characters.
    is_short = len(output) <= 200
    return ScoreResult(
        name="short_answer",
        value=1.0 if is_short else 0.0,
        reason="Answer under 200 chars" if is_short else "Answer too long",
    )
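
If you do not need a name or reason, the signature also allows returning a bare float. A minimal sketch (the function name is illustrative):

def concise_answer(item, output):
    # A plain float is used directly as the score, but you lose the
    # per-sample name and reason that ScoreResult provides.
    return 1.0 if len(output) <= 200 else 0.0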

Compose metrics

Use MultiMetricObjective to balance multiple goals (accuracy, style, safety).

from opik_optimizer import MultiMetricObjective
from opik.evaluation.metrics import LevenshteinRatio, AnswerRelevance

objective = MultiMetricObjective(
    weights=[0.6, 0.4],
    metrics=[
        # Lexical similarity against the reference answer.
        lambda item, output: LevenshteinRatio().score(reference=item["answer"], output=output),
        # LLM-judged relevance of the output to the question, using the reference as context.
        lambda item, output: AnswerRelevance().score(
            context=[item["answer"]], output=output, input=item["question"]
        ),
    ],
    name="accuracy_and_relevance",
)

Weights do not need to sum to 1; choose numbers that emphasize the metric most critical to your use case.
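
For intuition, if the objective aggregates per-sample scores as a weighted sum (an assumption about MultiMetricObjective internals, not something this guide guarantees), the weights above combine scores like this:

# Illustrative arithmetic only, assuming weighted-sum aggregation.
accuracy, relevance = 0.9, 0.5               # hypothetical per-metric scores
combined = 0.6 * accuracy + 0.4 * relevance  # = 0.74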

Scenario | Metric | Notes
Factual QA | LevenshteinRatio or ExactMatch | Works with text-only datasets; deterministic and low cost.
Retrieval / grounding | AnswerRelevance | Pass reference context via context=[item["answer"]] or retrieved docs.
Safety | Moderation or custom LLM-as-a-judge | Combine with MultiMetricObjective to gate unsafe answers (sketched below the table).
Multi-turn trajectories | Agent trajectory evaluator | Scores complete conversations, not just final outputs.

Reuse these presets before writing custom metrics; most are available directly from opik.evaluation.metrics.
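
For the Safety row above, one way to combine a catalog safety metric with an accuracy metric is sketched below. It assumes the catalog's Moderation metric scores a single output via score(output=...); confirm the signature and score direction in your Opik version before relying on it.

from opik_optimizer import MultiMetricObjective
from opik.evaluation.metrics import LevenshteinRatio, Moderation

# Weight safety heavily so unsafe answers pull the combined score down sharply.
safety_objective = MultiMetricObjective(
    weights=[0.3, 0.7],
    metrics=[
        lambda item, output: LevenshteinRatio().score(reference=item["answer"], output=output),
        # Assumption: Moderation().score(output=...) exists; check whether a higher
        # value means "safer" in your Opik version and invert the score if not.
        lambda item, output: Moderation().score(output=output),
    ],
    name="accuracy_with_safety",
)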

Checklist for great metrics

  • Return explanations – populate reason so reflective optimizers can group failure modes.
  • Avoid randomness – deterministic metrics keep optimizers from chasing noise.
  • Bound runtime – use cached references or lightweight models where possible; heavy metrics slow down trials.
  • Log metadata – include details in the ScoreResult if you want to visualize per-sample attributes later (see the sketch after this list).
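
A minimal sketch combining the first and last points, assuming ScoreResult accepts a metadata dict in your Opik version (verify the field before relying on it):

from opik.evaluation.metrics.score_result import ScoreResult

def cited_answer(item, output):
    # Deterministic check: did the answer include a bracketed citation marker?
    has_citation = "[" in output and "]" in output
    return ScoreResult(
        name="cited_answer",
        value=1.0 if has_citation else 0.0,
        reason="Found a bracketed citation" if has_citation else "No citation markers found",
        # Assumption: a metadata dict is accepted and surfaced per sample.
        metadata={"output_length": len(output)},
    )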

When you outgrow presets, move to Custom metrics for LLM-as-a-judge flows or domain-specific scoring.

Testing metrics

  1. Dry-run against a handful of dataset rows before launching an optimization (see the sketch below this list).
  2. Use optimizer.task_evaluator.evaluate_prompt to evaluate a single prompt with your metric.
  3. Inspect the per-sample reasons in the Opik dashboard to ensure they match expectations.
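
A minimal dry-run sketch for step 1, using hand-written rows and a stubbed model output (both are placeholders, not part of any Opik API):

# Hand-picked rows and a stubbed output, just to sanity-check scores and reasons.
sample_rows = [
    {"question": "What colour is the sky?", "answer": "Blue."},
    {"question": "How many legs does a spider have?", "answer": "Eight."},
]

for row in sample_rows:
    llm_output = "Blue."                    # replace with a real model generation
    result = short_answer(row, llm_output)  # metric defined earlier in this guide
    print(result.name, result.value, result.reason)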