LLM Juries Judge

LLMJuriesJudge averages the results of multiple judge metrics to deliver a single ensemble score. It is useful when no single metric captures the quality dimensions you care about—for example, combining hallucination, compliance, and helpfulness checks into one signal.

Ensembling judges
from opik.evaluation.metrics import (
    LLMJuriesJudge,
    Hallucination,
    ComplianceRiskJudge,
    DialogueHelpfulnessJudge,
)

jury = LLMJuriesJudge(
    judges=[
        Hallucination(model="gpt-4o-mini"),
        ComplianceRiskJudge(),
        DialogueHelpfulnessJudge(),
    ]
)

score = jury.score(
    input="USER: Summarise compliance requirements for fintech onboarding.",
    output="No need for KYC; just accept the payment.",
)

print(score.value)
print(score.metadata["judge_scores"])

How it works

  • Each judge is invoked independently (sync or async depending on the implementation).
  • Their ScoreResult.value fields are averaged to produce the final score.
  • Individual results are stored in metadata["judge_scores"] for diagnostics, as sketched below.
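
The aggregation step amounts to a plain mean over the judges' scores. The sketch below illustrates the idea only; the aggregate helper is hypothetical and not the library's implementation, and it assumes each judge returns a ScoreResult with the name and value fields used in the example above.

# Illustrative sketch of the aggregation step (hypothetical helper, not library code).
# judge_results is a list of ScoreResult objects returned by the individual judges.
def aggregate(judge_results):
    values = [result.value for result in judge_results]   # each judge's score
    final_value = sum(values) / len(values)                # simple mean across judges
    judge_scores = {result.name: result.value for result in judge_results}
    return final_value, judge_scores                       # ensemble value plus per-judge diagnostics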

Configuration

Parameter  Description
judges     Sequence of BaseMetric instances. All must support the same input signature.
name       Optional custom metric name. Defaults to llm_juries_judge.
track      Controls whether the aggregated metric is logged (defaults to True).

Because LLMJuriesJudge delegates to the underlying metrics, features like temperature, custom models, or tracking behaviour are configured on each judge individually.
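
For instance, you might point Hallucination at a different model (the model name here is illustrative) while setting the jury-level name and track parameters from the table above; check each judge's own signature for the options it actually supports.

jury = LLMJuriesJudge(
    judges=[
        Hallucination(model="gpt-4o"),   # per-judge model choice, configured on the judge itself
        ComplianceRiskJudge(),
        DialogueHelpfulnessJudge(),
    ],
    name="policy_jury",                  # custom metric name instead of llm_juries_judge
    track=False,                         # skip logging the aggregated score
)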