Custom Metric

Opik allows you to define your own metrics. This is useful if you have a specific metric that is not already implemented.

If you want to write an LLM as a Judge metric, you can either use the G-Eval metric or create your own from scratch.

Custom LLM as a Judge metric

Creating a custom metric using G-Eval

G-Eval allows you to specify a set of criteria for your metric; it then uses a Chain of Thought prompting technique to generate evaluation steps and return a score.

To use G-Eval, you will need to specify a task introduction and evaluation criteria:

```python
from opik.evaluation.metrics import GEval

metric = GEval(
    task_introduction="You are an expert judge tasked with evaluating the faithfulness of an AI-generated answer to the given context.",
    evaluation_criteria="""
        The OUTPUT must not introduce new information beyond what's provided in the CONTEXT.
        The OUTPUT must not contradict any information given in the CONTEXT.

        Return only a score between 0 and 1.
    """,
)
```
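
You can then score an LLM output against its context. This is a minimal sketch, assuming (as with the other metrics on this page) that the text to evaluate is passed through the `output` argument:

```python {pytest_codeblocks_skip=true}
metric.score(
    output="""
    CONTEXT: France is a country in Western Europe. Its capital is Paris.
    OUTPUT: The capital of France is Paris.
    """
)
```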

Writing your own custom metric

To define a custom metric, you need to subclass the BaseMetric class and implement the score method and an optional ascore method:

```python
from typing import Any

from opik.evaluation.metrics import base_metric, score_result


class MyCustomMetric(base_metric.BaseMetric):
    def __init__(self, name: str):
        super().__init__(name)

    def score(self, input: str, output: str, **ignored_kwargs: Any) -> score_result.ScoreResult:
        # Add your logic here

        return score_result.ScoreResult(
            value=0,
            name=self.name,
            reason="Optional reason for the score"
        )
```

The score method should return a ScoreResult object. The ascore method is optional and can be used to compute the score asynchronously if needed.
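
For example, here is a minimal sketch of a metric that also implements ascore; the class name and the placeholder scoring logic are illustrative only:

```python
from typing import Any

from opik.evaluation.metrics import base_metric, score_result


class MyAsyncCustomMetric(base_metric.BaseMetric):
    def __init__(self, name: str):
        super().__init__(name)

    def score(self, input: str, output: str, **ignored_kwargs: Any) -> score_result.ScoreResult:
        # Synchronous scoring logic (placeholder)
        return score_result.ScoreResult(value=0, name=self.name)

    async def ascore(self, input: str, output: str, **ignored_kwargs: Any) -> score_result.ScoreResult:
        # Asynchronous scoring logic, e.g. an awaited call to an async LLM client (placeholder)
        return score_result.ScoreResult(value=0, name=self.name)
```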

You can also return a list of ScoreResult objects as part of your custom metric. This is useful if you want to return multiple scores for a given input and output pair.

Now you can use the custom metric to score LLM outputs:

```python {pytest_codeblocks_skip=true}
metric = MyCustomMetric(name="my_custom_metric")

metric.score(input="What is the capital of France?", output="Paris")
```

This metric can also be used in the evaluate function, as explained in Evaluating LLMs.
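
As a sketch, the metric is simply passed in the scoring_metrics list; the `dataset` and `evaluation_task` objects below are placeholders you would define as described in that guide:

```python {pytest_codeblocks_skip=true}
from opik.evaluation import evaluate

evaluation = evaluate(
    dataset=dataset,            # an existing Opik dataset (placeholder)
    task=evaluation_task,       # your LLM application wrapped as a task function (placeholder)
    scoring_metrics=[MyCustomMetric(name="my_custom_metric")],
)
```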

Example: Creating a metric with an OpenAI model

You can implement your own custom metric by creating a class that subclasses the BaseMetric class and implements the score method.

```python
import json
from typing import Any

from openai import OpenAI
from opik.evaluation.metrics import base_metric, score_result


class LLMJudgeMetric(base_metric.BaseMetric):
    def __init__(self, name: str = "Factuality check", model_name: str = "gpt-4o"):
        super().__init__(name)
        self.llm_client = OpenAI()
        self.model_name = model_name
        self.prompt_template = """
        You are an impartial judge evaluating the following claim for factual accuracy.
        Analyze it carefully and respond with a number between 0 and 1: 1 if completely
        accurate, 0.5 if mixed accuracy, or 0 if inaccurate.

        The format of your response should be a JSON object with no additional text or backticks that follows the format:
        {{
            "score": <score between 0 and 1>
        }}

        Claim to evaluate: {output}

        Response:
        """

    def score(self, output: str, **ignored_kwargs: Any) -> score_result.ScoreResult:
        """
        Score the output of an LLM.

        Args:
            output: The output of an LLM to score.
            **ignored_kwargs: Any additional keyword arguments. This is important so that the metric can be used in the `evaluate` function.
        """
        # Construct the prompt based on the output of the LLM
        prompt = self.prompt_template.format(output=output)

        # Generate and parse the response from the LLM
        response = self.llm_client.chat.completions.create(
            model=self.model_name,
            messages=[{"role": "user", "content": prompt}]
        )
        response_dict = json.loads(response.choices[0].message.content)

        response_score = float(response_dict["score"])

        return score_result.ScoreResult(
            name=self.name,
            value=response_score
        )
```

You can then use this metric to score your LLM outputs:

```python {pytest_codeblocks_skip=true}
metric = LLMJudgeMetric()

metric.score(output="Paris is the capital of France")
```

In this example, we used the OpenAI Python client to call the LLM. You don't have to use the OpenAI Python client; you can update the code example above to use any LLM client you have access to.

Example: Adding support for many LLM providers

In order to support a wide range of LLM providers, we recommend using the litellm library to call your LLM. This allows you to support hundreds of models without having to maintain a custom LLM client.

Opik provides a LiteLLMChatModel class that wraps the litellm library and can be used in your custom metric:

```python
import json
from typing import Any

from opik.evaluation.metrics import base_metric, score_result
from opik.evaluation import models


class LLMJudgeMetric(base_metric.BaseMetric):
    def __init__(self, name: str = "Factuality check", model_name: str = "gpt-4o"):
        super().__init__(name)
        self.llm_client = models.LiteLLMChatModel(model_name=model_name)
        self.prompt_template = """
        You are an impartial judge evaluating the following claim for factual accuracy. Analyze it carefully
        and respond with a number between 0 and 1: 1 if completely accurate, 0.5 if mixed accuracy, or 0 if inaccurate.
        Then provide one brief sentence explaining your ruling.

        The format of your response should be a JSON object with no additional text or backticks that follows the format:
        {{
            "score": <score between 0 and 1>,
            "reason": "<reason for the score>"
        }}

        Claim to evaluate: {output}

        Response:
        """

    def score(self, output: str, **ignored_kwargs: Any) -> score_result.ScoreResult:
        """
        Score the output of an LLM.

        Args:
            output: The output of an LLM to score.
            **ignored_kwargs: Any additional keyword arguments. This is important so that the metric can be used in the `evaluate` function.
        """
        # Construct the prompt based on the output of the LLM
        prompt = self.prompt_template.format(output=output)

        # Generate and parse the response from the LLM
        response = self.llm_client.generate_string(input=prompt)

        response_dict = json.loads(response)

        return score_result.ScoreResult(
            name=self.name,
            value=response_dict["score"],
            reason=response_dict["reason"]
        )
```

You can then use this metric to score your LLM outputs:

```python {pytest_codeblocks_skip=true}
metric = LLMJudgeMetric()

metric.score(output="Paris is the capital of France")
```

Example: Creating a metric with multiple scores

You can implement a metric that returns multiple scores, which will be displayed as separate columns in the UI when the metric is used in an evaluation.

To do so, set up your score method to return a list of ScoreResult objects.

```python
from typing import Any, List

from opik.evaluation.metrics import base_metric, score_result


class MultiScoreCustomMetric(base_metric.BaseMetric):
    def __init__(self, name: str):
        super().__init__(name)

    def score(self, input: str, output: str, **ignored_kwargs: Any) -> List[score_result.ScoreResult]:
        # Add your logic here

        return [
            score_result.ScoreResult(
                value=0,
                name=self.name,
                reason="Optional reason for the score"
            ),
            score_result.ScoreResult(
                value=1,
                name=f"{self.name}-2",
                reason="Optional reason for the score"
            )
        ]
```
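
As with the single-score metric, you can then score LLM outputs; the metric name below is just an example:

```python {pytest_codeblocks_skip=true}
metric = MultiScoreCustomMetric(name="multi_score")

metric.score(input="What is the capital of France?", output="Paris")
```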

Example: Enforcing structured outputs

In the examples above, we ask the LLM to respond with a JSON object. However, as this is not enforced, the LLM may return a non-structured response. To avoid this, you can use the litellm library to enforce a structured output, which makes the custom metric more robust and less prone to failure.

For this we define the format of the response we expect from the LLM in the LLMJudgeResult class and pass it to the LiteLLM client:

```python
import json
from typing import Any

from pydantic import BaseModel

from opik.evaluation.metrics import base_metric, score_result
from opik.evaluation import models


class LLMJudgeResult(BaseModel):
    score: float
    reason: str


class LLMJudgeMetric(base_metric.BaseMetric):
    def __init__(self, name: str = "Factuality check", model_name: str = "gpt-4o"):
        super().__init__(name)
        self.llm_client = models.LiteLLMChatModel(model_name=model_name)
        self.prompt_template = """
        You are an impartial judge evaluating the following claim for factual accuracy. Analyze it carefully and respond with a number between 0 and 1: 1 if completely accurate, 0.5 if mixed accuracy, or 0 if inaccurate. Then provide one brief sentence explaining your ruling.

        The format of your response should be a JSON object with no backticks that follows the format:
        {{
            "score": <score between 0 and 1>,
            "reason": "<reason for the score>"
        }}

        Claim to evaluate: {output}

        Response:
        """

    def score(self, output: str, **ignored_kwargs: Any) -> score_result.ScoreResult:
        """
        Score the output of an LLM.

        Args:
            output: The output of an LLM to score.
            **ignored_kwargs: Any additional keyword arguments. This is important so that the metric can be used in the `evaluate` function.
        """
        # Construct the prompt based on the output of the LLM
        prompt = self.prompt_template.format(output=output)

        # Generate and parse the response from the LLM
        response = self.llm_client.generate_string(input=prompt, response_format=LLMJudgeResult)
        response_dict = json.loads(response)

        return score_result.ScoreResult(
            name=self.name,
            value=response_dict["score"],
            reason=response_dict["reason"]
        )
```

As in the previous example, you can then use this metric to score your LLM outputs:

```python
metric = LLMJudgeMetric()

metric.score(output="Paris is the capital of France")
```