Getting Started

Quickstart

This guide helps you integrate the Opik platform with your existing agent, log your first traces, and start tracking your prompts and agent configuration in Opik.

[Screenshot: Opik traces page showing trace details with span tree, outputs, and feedback scores]

Prerequisites

Before you begin, you’ll need to choose how you want to use Opik: either the hosted Comet platform or a self-hosted deployment.

Logging your first LLM calls

Opik makes it easy to integrate with your existing LLM application. Pick the integration that matches your stack and follow the three steps to log your first trace:

If you are using the Python function decorator, you can integrate in three steps:

1. Install the Opik Python SDK:

```shell
pip install opik
```

2. Configure the Opik Python SDK:

```shell
opik configure
```

3. Wrap your function with the @track decorator:

```python
from opik import track

@track
def my_function(input: str) -> str:
    return input
```

All calls to my_function will now be logged to Opik. This works for any function, even nested ones, and is also supported by most integrations (just wrap any parent function with the @track decorator).

Analyze your traces

After running your application, you will start seeing your traces in Opik, where you can use Ollie to analyze them and improve your agent.

If you don’t see traces appearing, reach out to us on Slack or raise an issue on GitHub and we’ll help you troubleshoot.

Next steps

Now that you have logged your first agent calls to Opik, why not check out:

  1. In-depth guide on agent observability: learn how to customize the data that is logged to Opik and how to log conversations.
  2. Opik Experiments: Opik allows you to automate the evaluation process of your LLM application so that you no longer need to manually review every LLM response.
  3. Opik’s evaluation metrics: Opik provides a suite of evaluation metrics (Hallucination, Answer Relevance, Context Recall, etc.) that you can use to score your LLM responses.
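To make the idea of an evaluation metric concrete, here is a toy, stdlib-only sketch of a heuristic relevance-style scorer. The class name and scoring rule are invented for illustration; Opik's built-in metrics such as Hallucination are LLM-based and have their own API.

```python
class KeywordRelevance:
    """Toy metric: fraction of question keywords echoed in the answer.
    Illustrative only -- not one of Opik's built-in metrics."""

    def score(self, input: str, output: str) -> float:
        # Treat words longer than 3 characters (punctuation stripped) as keywords.
        keywords = {w.strip("?.,!").lower() for w in input.split() if len(w.strip("?.,!")) > 3}
        if not keywords:
            return 0.0
        answer_words = {w.strip("?.,!").lower() for w in output.split()}
        return len(keywords & answer_words) / len(keywords)

metric = KeywordRelevance()
score = metric.score(
    input="What city hosts the Eiffel Tower?",
    output="The Eiffel Tower is in the city of Paris.",
)
print(score)
```

A real metric returns a score per response in the same spirit, so an experiment can aggregate scores across a dataset instead of you reviewing each response by hand.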