Using Ragas to evaluate RAG pipelines

In this notebook, we will showcase how to use Opik with Ragas for monitoring and evaluation of RAG (Retrieval-Augmented Generation) pipelines.

There are two main ways to use Opik with Ragas:

  1. Using Ragas metrics to score traces
  2. Using the Ragas evaluate function to score a dataset

Creating an account on Comet.com

Comet provides a hosted version of the Opik platform. Simply create an account and grab your API key.

You can also run the Opik platform locally; see the installation guide for more information.

%pip install --quiet --upgrade opik ragas nltk openai

import opik

opik.configure(use_local=False)
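
If you are running the Opik platform locally instead (as mentioned above), you can point the SDK at your local instance. A minimal sketch, assuming a default local deployment:

import opik

# Configure the SDK to send traces to a locally running Opik instance
opik.configure(use_local=True)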

Preparing our environment

First, we will configure the OpenAI API key.

import os
import getpass

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")

Integrating Opik with Ragas

Using Ragas metrics to score traces

Ragas provides a set of metrics that can be used to evaluate the quality of a RAG pipeline, including but not limited to: answer_relevancy, answer_similarity, answer_correctness, context_precision, context_recall, context_entity_recall, summarization_score. You can find a full list of metrics in the Ragas documentation.
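
As a quick orientation, a few of these metrics can be imported directly from ragas.metrics. This is just a sketch, consistent with the imports used later in this notebook (Ragas also exposes class-based variants such as AnswerRelevancy, which we configure in the next step):

# Instance-based metrics, used later with the Ragas evaluate function
from ragas.metrics import answer_relevancy, context_precision, faithfulness

# Class-based metric, configured with an explicit LLM and embeddings below
from ragas.metrics import AnswerRelevancy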

These metrics can be computed on the fly and logged to traces or spans in Opik. For this example, we will start by creating a simple RAG pipeline and then scoring it using the answer_relevancy metric.

Create the Ragas metric

In order to use a Ragas metric outside of the evaluate function, you need to initialize it with an LLM and an embeddings provider. For this example, we will use LangChain's OpenAI models wrapped for Ragas, and enable Opik tracking through the RagasMetricWrapper.

We will first start by initializing the Ragas metric:

# Import the metric
from ragas.metrics import AnswerRelevancy

# Import some additional dependencies
from langchain_openai.chat_models import ChatOpenAI
from langchain_openai.embeddings import OpenAIEmbeddings
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
from opik.evaluation.metrics import RagasMetricWrapper

# Initialize the Ragas metric
llm = LangchainLLMWrapper(ChatOpenAI())
emb = LangchainEmbeddingsWrapper(OpenAIEmbeddings())

ragas_answer_relevancy = AnswerRelevancy(llm=llm, embeddings=emb)

# Wrap the Ragas metric with RagasMetricWrapper for Opik integration
answer_relevancy_metric = RagasMetricWrapper(
    ragas_answer_relevancy,
    track=True,  # This enables automatic tracing in Opik
)

Once the metric wrapper is set up, you can use it to score a sample question. The RagasMetricWrapper handles all the complexity of async execution and Opik integration automatically.

# For Jupyter notebook compatibility
# This is needed for async operations in Jupyter notebooks
import nest_asyncio

nest_asyncio.apply()

import os

os.environ["OPIK_PROJECT_NAME"] = "ragas-integration"

# Score a simple example using the RagasMetricWrapper
score_result = answer_relevancy_metric.score(
    user_input="What is the capital of France?",
    response="Paris",
    retrieved_contexts=["Paris is the capital of France.", "Paris is in France."],
)

print(f"Answer Relevancy score: {score_result.value}")
print(f"Metric name: {score_result.name}")

If you now navigate to Opik, you will see that a new trace has been created in the ragas-integration project (the project name we set above via OPIK_PROJECT_NAME).

Score traces

You can score traces by using the update_current_trace function.

The advantage of this approach is that the scoring span is added to the trace, allowing for a more fine-grained analysis of the RAG pipeline. However, it runs the Ragas metric calculation synchronously, so it might not be suitable for production use cases.

from opik import track, opik_context


@track
def retrieve_contexts(question):
    # Define the retrieval function, in this case we will hard code the contexts
    return ["Paris is the capital of France.", "Paris is in France."]


@track
def answer_question(question, contexts):
    # Define the answer function, in this case we will hard code the answer
    return "Paris"


@track
def rag_pipeline(question):
    # Define the pipeline
    contexts = retrieve_contexts(question)
    answer = answer_question(question, contexts)

    # Score the pipeline using the RagasMetricWrapper
    score_result = answer_relevancy_metric.score(
        user_input=question, response=answer, retrieved_contexts=contexts
    )

    # Add the score to the current trace
    opik_context.update_current_trace(
        feedback_scores=[{"name": score_result.name, "value": score_result.value}]
    )

    return answer


rag_pipeline("What is the capital of France?")

Evaluating datasets using the Opik evaluate function

You can use Ragas metrics with the Opik evaluate function. This will compute the metrics on all the rows of the dataset and return a summary of the results.

The RagasMetricWrapper can be used directly with the Opik evaluate function - no additional wrapper code is needed!

from datasets import load_dataset
import opik


opik_client = opik.Opik()

# Create a small dataset
fiqa_eval = load_dataset("explodinggradients/fiqa", "ragas_eval")

# Reformat the dataset to match the schema expected by the Ragas evaluate function
hf_dataset = fiqa_eval["baseline"].select(range(3))
dataset_items = hf_dataset.map(
    lambda x: {
        "user_input": x["question"],
        "reference": x["ground_truths"][0],
        "retrieved_contexts": x["contexts"],
    }
)
dataset = opik_client.get_or_create_dataset("ragas-demo-dataset")
dataset.insert(dataset_items)


# Create an evaluation task
def evaluation_task(x):
    return {
        "user_input": x["question"],
        "response": x["answer"],
        "retrieved_contexts": x["contexts"],
    }


# Use the RagasMetricWrapper directly - no need for custom wrapper!
opik.evaluation.evaluate(
    dataset,
    evaluation_task,
    scoring_metrics=[answer_relevancy_metric],
    task_threads=1,
)

Evaluating datasets using the Ragas evaluate function

If you are looking to evaluate a dataset, you can use the Ragas evaluate function. When using this function, the Ragas library will compute the metrics on all the rows of the dataset and return a summary of the results.

You can use the OpikTracer callback to log the results of the evaluation to the Opik platform:

from datasets import load_dataset
from opik.integrations.langchain import OpikTracer
from ragas.metrics import context_precision, answer_relevancy, faithfulness
from ragas import evaluate

fiqa_eval = load_dataset("explodinggradients/fiqa", "ragas_eval")

# Reformat the dataset to match the schema expected by the Ragas evaluate function
dataset = fiqa_eval["baseline"].select(range(3))

dataset = dataset.map(
    lambda x: {
        "user_input": x["question"],
        "reference": x["ground_truths"][0],
        "retrieved_contexts": x["contexts"],
    }
)

opik_tracer_eval = OpikTracer(tags=["ragas_eval"], metadata={"evaluation_run": True})

result = evaluate(
    dataset,
    metrics=[context_precision, faithfulness, answer_relevancy],
    callbacks=[opik_tracer_eval],
)

print(result)
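
The OpikTracer callback logs each evaluation run to the Opik platform. If you also want to inspect per-row scores locally, the object returned by the Ragas evaluate function can be converted to a pandas DataFrame; a minimal sketch:

# Inspect per-row metric scores from the Ragas evaluation result
df = result.to_pandas()
print(df.head())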