Heuristic metrics

Heuristic metrics are rule-based evaluation methods that allow you to check specific aspects of language model outputs. These metrics use predefined criteria or patterns to assess the quality, consistency, or characteristics of generated text. They come in two flavours:

  • Token or string heuristics – operate on a single turn and compare the candidate output to a reference or handcrafted rule.
  • Conversation heuristics – analyse whole transcripts to spot issues like degeneration or forgotten facts across assistant turns.

String and token heuristics

  • BERTScore – Contextual embedding similarity; a robust alternative to Levenshtein.
  • ChrF – Character n-gram F-score (supports chrF and chrF++).
  • Contains – Checks if the output contains a specific substring (case-sensitive or insensitive).
  • CorpusBLEU – Calculates a corpus-level BLEU score across many candidates.
  • Equals – Checks if the output exactly matches an expected string.
  • GLEU – Estimates fluency and grammatical correctness on a 0–1 scale.
  • IsJson – Ensures the output can be parsed as JSON.
  • JSDivergence – Jensen–Shannon similarity between token distributions.
  • JSDistance – Raw Jensen–Shannon divergence between token distributions.
  • KLDivergence – Kullback–Leibler divergence between token distributions.
  • LanguageAdherenceMetric – Checks whether text adheres to an expected language code.
  • LevenshteinRatio – Computes the normalised Levenshtein similarity between output and reference.
  • Readability – Reports Flesch Reading Ease and Flesch–Kincaid grade levels.
  • RegexMatch – Validates the output against a regular expression pattern.
  • ROUGE – Calculates ROUGE variants (rouge1, rouge2, rougeL, rougeLsum).
  • SentenceBLEU – Calculates a single-sentence BLEU score against one or more references.
  • Sentiment – Scores sentiment using NLTK’s VADER lexicon (compound, pos/neu/neg).
  • SpearmanRanking – Spearman’s rank correlation for two equal-length rankings.
  • Tone – Flags tone issues such as negativity, shouting, or forbidden phrases.

Conversation heuristics

  • Conversation Degeneration – Detects repetition and low-entropy responses over a conversation (implemented by ConversationDegenerationMetric).
  • Knowledge Retention – Checks whether the last assistant reply preserves user-provided facts from earlier turns.

[!TIP] These metrics operate on a single transcript without requiring a gold reference. If you need BLEU/ROUGE/METEOR-style comparisons, compose a custom ConversationThreadMetric that wraps the single-turn heuristics (SentenceBLEU, ROUGE, METEOR).
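As a sketch of that composition, the snippet below averages single-turn SentenceBLEU scores over the assistant turns of a transcript against per-turn gold references. It is illustrative only: the transcript format (a list of role/content dictionaries) and the helper name are assumptions, and a production version would subclass the ConversationThreadMetric base class exposed by your installed Opik version rather than use a free-standing function.

from opik.evaluation.metrics import SentenceBLEU

# Hypothetical helper: score each assistant turn against a matching gold
# reference with the single-turn SentenceBLEU heuristic, then average.
def average_turn_bleu(conversation, references):
    metric = SentenceBLEU()
    assistant_turns = [t["content"] for t in conversation if t["role"] == "assistant"]
    scores = [
        metric.score(output=turn, reference=ref).value
        for turn, ref in zip(assistant_turns, references)
    ]
    return sum(scores) / len(scores) if scores else 0.0

conversation = [
    {"role": "user", "content": "Say hello to the world"},
    {"role": "assistant", "content": "Hello world!"},
]
print(average_turn_bleu(conversation, references=["Hello, world!"]))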

Score an LLM response

You can score an LLM response by first initializing the metrics and then calling the score method:

from opik.evaluation.metrics import Contains

metric = Contains(name="contains_hello", case_sensitive=True)

score = metric.score(output="Hello world !", reference="Hello")
print(score)

Metrics

Equals

The Equals metric can be used to check if the output of an LLM exactly matches a specific string. It can be used in the following way:

from opik.evaluation.metrics import Equals

metric = Equals()

score = metric.score(output="Hello world !", reference="Hello, world !")
print(score)

Contains

The Contains metric can be used to check if the output of an LLM contains a specific substring. It can be used in the following way:

from opik.evaluation.metrics import Contains

metric = Contains(case_sensitive=False)

score = metric.score(output="Hello world !", reference="Hello")
print(score)

RegexMatch

The RegexMatch metric can be used to check if the output of an LLM matches a specified regular expression pattern. It can be used in the following way:

from opik.evaluation.metrics import RegexMatch

metric = RegexMatch(regex="^[a-zA-Z0-9]+$")

score = metric.score("Hello world !")
print(score)

IsJson

The IsJson metric can be used to check if the output of an LLM is valid JSON. It can be used in the following way:

from opik.evaluation.metrics import IsJson

metric = IsJson(name="is_json_metric")

score = metric.score(output='{"key": "some_valid_sql"}')
print(score)

LevenshteinRatio

The LevenshteinRatio metric measures how similar the output is to a reference string on a 0–1 scale (1.0 means identical). It is useful when exact matches are too strict but you still want to penalise large deviations.

from opik.evaluation.metrics import LevenshteinRatio

metric = LevenshteinRatio()

score = metric.score(output="Hello world !", reference="hello")
print(score)

BLEU

The BLEU (Bilingual Evaluation Understudy) metrics estimate how close the LLM outputs are to one or more reference translations. Opik provides two separate classes:

  • SentenceBLEU – Single-sentence BLEU
  • CorpusBLEU – Corpus-level BLEU

Both rely on the underlying NLTK BLEU implementation with optional smoothing methods, weights, and variable n-gram orders.

You will need the nltk library:

$pip install nltk

Use SentenceBLEU to compute single-sentence BLEU between a single candidate and one (or more) references:

from opik.evaluation.metrics import SentenceBLEU

metric = SentenceBLEU(n_grams=4, smoothing_method="method1")

# Single reference
score = metric.score(
    output="Hello world!",
    reference="Hello world"
)
print(score.value, score.reason)

# Multiple references
score = metric.score(
    output="Hello world!",
    reference=["Hello planet", "Hello world"]
)
print(score.value, score.reason)

Use CorpusBLEU to compute corpus-level BLEU for multiple candidates vs. multiple references. Each candidate and its references align by index in the list:

from opik.evaluation.metrics import CorpusBLEU

metric = CorpusBLEU()

outputs = ["Hello there", "This is a test."]
references = [
    # For the first candidate, two references
    ["Hello world", "Hello there"],
    # For the second candidate, one reference
    "This is a test."
]

score = metric.score(output=outputs, reference=references)
print(score.value, score.reason)

You can also customize n-grams, smoothing methods, or weights:

from opik.evaluation.metrics import SentenceBLEU

metric = SentenceBLEU(
    n_grams=4,
    smoothing_method="method2",
    weights=[0.25, 0.25, 0.25, 0.25]
)

score = metric.score(
    output="The cat sat on the mat",
    reference=["The cat is on the mat", "A cat sat here on the mat"]
)
print(score.value, score.reason)

Note: If any candidate or reference is empty, SentenceBLEU or CorpusBLEU will raise a MetricComputationError. Handle or validate inputs accordingly.
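If you cannot guarantee non-empty inputs, one option is to catch the error and fall back to a default score. A minimal sketch, assuming MetricComputationError is importable from opik.exceptions (adjust the import to wherever your Opik version exposes it):

from opik.evaluation.metrics import SentenceBLEU
from opik.exceptions import MetricComputationError  # assumed import path

metric = SentenceBLEU()

def safe_bleu(output, reference):
    # Fall back to 0.0 when the candidate or reference is empty.
    try:
        return metric.score(output=output, reference=reference).value
    except MetricComputationError:
        return 0.0

print(safe_bleu("", "Hello world"))  # 0.0 instead of an unhandled exception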

ROUGE

ROUGE supports multiple variants out of the box: rouge1, rouge2, rougeL, and rougeLsum. You can switch variants via the rouge_type argument and optionally enable stemming or sentence splitting.

from opik.evaluation.metrics import ROUGE

metric = ROUGE(rouge_type="rougeLsum", use_stemmer=True)
score = metric.score(
    output="The quick brown fox jumps over the lazy dog.",
    reference="A quick brown fox leapt over a very lazy dog."
)
print(score.value, score.reason)

Install rouge-score when using this metric:

$pip install rouge-score

GLEU

GLEU estimates grammatical fluency using n-gram overlap. It is useful when you care about fluency rather than exact lexical matches.

from opik.evaluation.metrics import GLEU

metric = GLEU(min_len=1, max_len=4)
score = metric.score(
    output="I has a pen",
    reference=["I have a pen"]
)
print(score.value, score.reason)

Requires nltk:

$pip install nltk

BERTScore

BERTScore compares texts using contextual embeddings, offering a robust alternative to token-level similarity metrics. It produces precision, recall, and F1 scores (Opik reports the F1 by default).

from opik.evaluation.metrics import BERTScore

metric = BERTScore(model_type="microsoft/deberta-xlarge-mnli")
score = metric.score(
    output="The cat sits on the mat.",
    reference="A cat is sitting on a mat."
)
print(score.value, score.reason)

Install the optional dependency before use:

$pip install bert-score

ChrF

ChrF computes the character n-gram F-score (chrF / chrF++). Adjust beta, char_order, and word_order to switch between the two variants.

from opik.evaluation.metrics import ChrF

metric = ChrF(beta=2.0, char_order=6, word_order=2)
score = metric.score(
    output="The cat sat on the mat",
    reference="A cat sits upon the mat"
)
print(score.value, score.reason)

This metric relies on NLTK:

$pip install nltk

Distribution metrics

Histogram-based metrics compare token distributions between candidate and reference texts. They are helpful when you want to match style, vocabulary, or topical coverage.

JSDivergence

Returns 1 - Jensen–Shannon divergence, giving a similarity score between 0.0 and 1.0.

from opik.evaluation.metrics import JSDivergence

metric = JSDivergence()
score = metric.score(
    output="Dogs chase balls",
    reference="Cats chase toys"
)
print(score.value, score.reason)

JSDistance

Wraps the same computation but returns the raw divergence (0.0 means identical distributions).

from opik.evaluation.metrics import JSDistance

metric = JSDistance()
score = metric.score(output="hello world", reference="hello there")
print(score.value, score.reason)

KLDivergence

Computes the KL divergence with optional smoothing and direction control.

from opik.evaluation.metrics import KLDivergence

metric = KLDivergence(direction="avg")
score = metric.score(output="a b b", reference="a a b")
print(score.value, score.reason)

Language Adherence

LanguageAdherenceMetric checks whether text matches an expected ISO language code. It can use a fastText language identification model or a custom detector callable.

from opik.evaluation.metrics import LanguageAdherenceMetric

metric = LanguageAdherenceMetric(
    expected_language="en",
    model_path="/path/to/lid.176.ftz",
)
score = metric.score(output="Hello, how are you?")
print(score.value, score.reason, score.metadata)

Install fasttext and download a language ID model when using the default detector:

$pip install fasttext
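The compressed lid.176.ftz language-identification model is published by the fastText project; at the time of writing it can be fetched with a command along these lines (verify the URL against the fastText documentation):

$wget https://dl.fbaipublicfiles.com/fasttext/supervised-models/lid.176.ftz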

Readability

Readability computes Flesch Reading Ease (0–100) and the Flesch–Kincaid grade using the textstat package. The metric returns the reading-ease score normalised to [0, 1].

from opik.evaluation.metrics import Readability

metric = Readability()
score = metric.score(output="This is a simple explanation of the payment process.")
print(score.value, score.reason)
print(score.metadata["flesch_kincaid_grade"])

Install the optional dependency when using this metric:

$pip install textstat

Pass enforce_bounds=True alongside min_grade and/or max_grade to turn the metric into a strict guardrail that only reports 1.0 when the text meets your grade limits.
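For example, a guardrail targeting roughly a grade 5–9 reading level might look like the sketch below, built from the parameters named above; check the metric’s signature in your Opik version for the exact semantics.

from opik.evaluation.metrics import Readability

# Only report 1.0 when the Flesch-Kincaid grade falls within [5, 9] (assumed bounds).
metric = Readability(enforce_bounds=True, min_grade=5, max_grade=9)
score = metric.score(output="Pay the bill online in three short steps.")
print(score.value, score.reason)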

Spearman Ranking

SpearmanRanking measures how well two rankings agree. It returns a normalised correlation score in [0, 1].

from opik.evaluation.metrics import SpearmanRanking

metric = SpearmanRanking()
score = metric.score(
    output=["doc3", "doc1", "doc2"],
    reference=["doc1", "doc2", "doc3"],
)
print(score.value, score.metadata["rho"])

Tone

Tone flags outputs that sound aggressive, negative, or violate a list of forbidden phrases. You can tweak sentiment thresholds, uppercase ratios, and exclamation limits.

from opik.evaluation.metrics import Tone

metric = Tone(max_exclamations=1)
score = metric.score(output="THIS IS TERRIBLE!!!")
print(score.value, score.reason)
print(score.metadata)

Sentiment

The Sentiment metric analyzes the sentiment of text using NLTK’s VADER (Valence Aware Dictionary and sEntiment Reasoner) sentiment analyzer. It returns scores for positive, neutral, negative, and compound sentiment.

You will need the nltk library and the vader_lexicon:

$pip install nltk
$python -m nltk.downloader vader_lexicon
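The lexicon can also be downloaded from Python, which is convenient in notebooks or CI jobs:

import nltk

# Download the VADER lexicon used by the Sentiment metric.
nltk.download("vader_lexicon")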

Use Sentiment to analyze the sentiment of text:

from opik.evaluation.metrics import Sentiment

metric = Sentiment()

# Analyze sentiment
score = metric.score(output="I love this product! It's amazing.")
print(score.value)     # Compound score (e.g., 0.8802)
print(score.metadata)  # All sentiment scores (pos, neu, neg, compound)
print(score.reason)    # Explanation of the sentiment

# Negative sentiment example
score = metric.score(output="This is terrible, I hate it.")
print(score.value)  # Negative compound score (e.g., -0.7650)

The metric returns:

  • value: The compound sentiment score (-1.0 to 1.0)
  • metadata: Dictionary containing all sentiment scores:
    • pos: Positive sentiment (0.0-1.0)
    • neu: Neutral sentiment (0.0-1.0)
    • neg: Negative sentiment (0.0-1.0)
    • compound: Normalized compound score (-1.0 to 1.0)

The compound score is normalized between -1.0 (extremely negative) and 1.0 (extremely positive):

  • ≥ 0.05: Positive sentiment
  • > -0.05 and < 0.05: Neutral sentiment
  • ≤ -0.05: Negative sentiment
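If you need a categorical label rather than the raw compound value, a small hypothetical helper can apply these thresholds:

def sentiment_label(compound: float) -> str:
    # Map a VADER compound score to a coarse label using the thresholds above.
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

print(sentiment_label(0.8802))   # positive
print(sentiment_label(-0.7650))  # negative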

ROUGE

The ROUGE (Recall-Oriented Understudy for Gisting Evaluation) metric estimates how close the LLM outputs are to one or more reference summaries, and is commonly used to evaluate summarization and text generation tasks. It measures the overlap between an output string and a reference string, with support for multiple ROUGE types. This metric wraps the Google Research reimplementation of ROUGE provided by the rouge-score library. You will need the rouge-score library:

$pip install rouge-score

It can be used in the following way:

from opik.evaluation.metrics import ROUGE

metric = ROUGE()

# Single reference
score = metric.score(
    output="Hello world!",
    reference="Hello world"
)
print(score.value, score.reason)

# Multiple references
score = metric.score(
    output="Hello world!",
    reference=["Hello planet", "Hello world"]
)
print(score.value, score.reason)

You can customize the ROUGE metric using the following parameters:

  • rouge_type (str): Type of ROUGE score to compute. Must be one of:

    • rouge1: Unigram-based scoring
    • rouge2: Bigram-based scoring
    • rougeL: Longest common subsequence-based scoring
    • rougeLsum: ROUGE-L score based on sentence splitting

    Default: rouge1

  • use_stemmer (bool): Whether to use stemming in ROUGE computation.
    Default: False

  • split_summaries (bool): Whether to split summaries into sentences.
    Default: False

  • tokenizer (Any | None): Custom tokenizer for sentence splitting.
    Default: None

from opik.evaluation.metrics import ROUGE

metric = ROUGE(
    rouge_type="rouge2",
    use_stemmer=True
)

score = metric.score(
    output="The cats sat on the mats",
    reference=["The cat is on the mat", "A cat sat here on the mat"]
)
print(score.value, score.reason)

AggregatedMetric

You can use the AggregatedMetric class to compute an aggregate score (for example, an average) across multiple metrics for each item in your experiment.

You can define the metric as:

from opik.evaluation.metrics import AggregatedMetric, Hallucination, GEval

metric = AggregatedMetric(
    name="average_score",
    metrics=[
        Hallucination(),
        GEval(
            task_introduction="Identify factual inaccuracies",
            evaluation_criteria="Return a score of 1 if there are inaccuracies, 0 otherwise"
        )
    ],
    aggregator=lambda metric_results: sum(
        score_result.value for score_result in metric_results
    ) / len(metric_results)
)
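An aggregated metric is passed to an evaluation run like any other metric. A minimal sketch, assuming a dataset and task defined elsewhere and the opik.evaluation.evaluate entry point:

from opik.evaluation import evaluate

# `dataset` and `llm_task` are assumed to be defined as in a standard Opik
# evaluation; the aggregated metric goes in scoring_metrics with the rest.
evaluation = evaluate(
    dataset=dataset,
    task=llm_task,
    scoring_metrics=[metric],
)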

Notes

  • ROUGE scoring is case-insensitive.
  • ROUGE scores are useful for comparing text summarization models or evaluating text similarity.
  • Consider enabling stemming (use_stemmer=True) for more forgiving matching in certain cases.