Getting Started

Quickstart

This guide helps you integrate Opik with your existing LLM application, with the goal of logging your first LLM calls and chains to the platform.

⚡ Quick Start with AI Assistants

Get Opik integrated instantly by copying this prompt to Cursor, Claude, or any AI coding assistant:

# OPIK Agentic Onboarding
## Goals
You must help me:
1. Integrate the Opik client with my existing LLM application
2. Set up tracing for my LLM calls and chains
## Rules
Before you begin, you must understand and strictly adhere to these core principles:
1. Code Preservation & Integration Guidelines:
- Existing business logic must remain untouched and unmodified
- Only add Opik-specific code (decorators, imports, handlers, env vars)
- Integration must be non-invasive and backwards compatible
2. Process Requirements:
- Follow the workflow steps sequentially without deviation
- Validate completion of each step before proceeding
- Request explicit approval for any workflow modifications
3. Documentation & Resources:
- Reference official Opik documentation at https://www.comet.com/docs/opik/quickstart.md
- Follow Opik best practices and recommended patterns
- Maintain detailed integration notes and configuration details
4. Testing & Validation:
- Verify Opik integration without impacting existing functionality
- Validate tracing works correctly for all LLM interactions
- Ensure proper error handling and logging
## Integration Workflow
### Step 1: Language and Compatibility Check
First, analyze the codebase to identify:
1. Primary programming language and frameworks
2. Existing LLM integrations and patterns
Compatibility Requirements:
- Supported Languages: Python, JavaScript/TypeScript
If the codebase uses unsupported languages:
- Stop immediately
- Inform me that the codebase is unsupported for AI integration
Only proceed to Step 2 if:
- Language is Python or JavaScript/TypeScript
### Step 2: Codebase Discovery & Entrypoint Confirmation
After verifying language compatibility, perform a full codebase scan with the following objectives:
- LLM Touchpoints: Locate all files and functions that invoke or interface with LLMs or are candidates for tracing.
- Entrypoint Detection: Identify the primary application entry point(s) (e.g., main script, API route, CLI handler). If ambiguous, pause and request clarification on which component(s) are most important to trace before proceeding.
⚠️ Do not proceed to Step 3 without explicit confirmation if the entrypoint is unclear.
- Return the LLM Touchpoints to me
### Step 3: Discover Available Integrations
After I confirm the LLM Touchpoints and entry point, find the list of supported integrations at https://www.comet.com/docs/opik/tracing/integrations/overview.md
### Step 4: Deep Analysis of Confirmed Files for LLM Frameworks & SDKs
Using the files confirmed in Step 2, perform targeted inspection to detect specific LLM-related technologies in use, such as:
- SDKs: openai, anthropic, huggingface, etc.
- Frameworks: LangChain, LlamaIndex, Haystack, etc.
### Step 5: Pre-Implementation Development Plan (Approval Required)
Do not write or modify code yet. You must propose a step-by-step plan to me, including:
- Opik packages to install
- Files to be modified
- Code snippets for insertion, clearly scoped and annotated
- Where to place Opik API keys, with placeholder comments (Visit https://comet.com/opik/your-workspace-name/get-started to copy your API key)
Wait for approval before proceeding!
### Step 6: Execute the Integration Plan
After approval:
- Run the package installation command via terminal (pip install opik, npm install opik, etc.).
- Apply code modifications exactly as described in Step 5.
- Keep all additions minimal and non-invasive.
Upon completion, review the changes made and confirm installation success.
### Step 7: Request User Review and Wait
Notify me that all integration steps are complete.
"Please run the application and verify if Opik is capturing traces as expected. Let me know if you need adjustments."
### Step 8: Debugging Loop (If Needed)
If issues are reported:
1. Parse the error or unexpected behavior from feedback.
2. Re-query the Opik docs using https://www.comet.com/docs/opik/quickstart.md if needed.
3. Propose a minimal fix and await approval.
4. Apply and revalidate.

Set up

Getting started is as simple as creating an account on Comet or self-hosting the platform.

Once your account is created, you can start logging traces by installing the Opik Python SDK:

$ pip install opik

and configuring the SDK. We recommend running the opik configure command from the command line, which will prompt you for all the necessary information:

$ opik configure

You can learn more about configuring the Python SDK here.
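
If you prefer to configure the SDK programmatically rather than interactively (for example in CI), here is a minimal sketch, assuming your credentials are exposed through the OPIK_API_KEY and OPIK_WORKSPACE environment variables:

import os
import opik

# Programmatic alternative to the interactive `opik configure` prompt.
# Reading the key and workspace from environment variables keeps
# credentials out of your source code.
opik.configure(
    api_key=os.environ["OPIK_API_KEY"],
    workspace=os.environ["OPIK_WORKSPACE"],
)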

How can I diagnose issues with Opik?

If you are experiencing any problems using Opik, such as receiving 400 or 500 errors from the backend, or being unable to connect at all, we recommend running the following command in your terminal:

$ opik healthcheck

This command will analyze your configuration and backend connectivity, providing useful insights into potential issues.

Reviewing the healthcheck output can help pinpoint the source of the problem and suggest possible resolutions.

Adding Opik observability to your codebase

Logging LLM calls

The first step in integrating Opik with your codebase is to track your LLM calls. If you are using OpenAI, OpenRouter, or any LLM provider that is supported by LiteLLM, then you can use one of our integrations:

from opik.integrations.openai import track_openai
from openai import OpenAI

# Wrap your OpenAI client
openai_client = OpenAI()
openai_client = track_openai(openai_client)

All OpenAI calls made using the openai_client will now be logged to Opik.
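
For example, a chat completion made through the wrapped client is captured as a trace; a minimal sketch (the model name and prompt are placeholders):

# Calls made through the wrapped client are logged to Opik automatically.
response = openai_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)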

Logging chains

It is common for LLM applications to use chains rather than just calling the LLM once. This is achieved either by using a framework like LangChain, LangGraph, or LlamaIndex, or by writing custom Python code.

Opik makes it easy for you to log your chains no matter how you implement them:

If you are not using any frameworks to build your chains, you can use the @track decorator to log your chains. When a function is decorated with @track, the input and output of the function will be logged to Opik. This works well even for very nested chains:

from opik import track
from opik.integrations.openai import track_openai
from openai import OpenAI

# Wrap your OpenAI client
openai_client = OpenAI()
openai_client = track_openai(openai_client)

# Create your chain
@track
def llm_chain(input_text):
    context = retrieve_context(input_text)
    response = generate_response(input_text, context)

    return response

@track
def retrieve_context(input_text):
    # For the purpose of this example, we are just returning a hardcoded list of strings
    context = [
        "What specific information are you looking for?",
        "How can I assist you with your interests today?",
        "Are there any topics you'd like to explore or learn more about?",
    ]
    return context

@track
def generate_response(input_text, context):
    full_prompt = (
        f"If the user asks a question that is not specific, use the context to provide a relevant response.\n"
        f"Context: {', '.join(context)}\n"
        f"User: {input_text}\n"
        f"AI:"
    )

    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": full_prompt}]
    )
    return response.choices[0].message.content

llm_chain("Hello, how are you?")

While this code sample assumes that you are using OpenAI, the same principle applies if you are using any other LLM provider.

Your chains will now be logged to Opik and can be viewed in the Opik UI. To learn more about how you can customize the logged data, see the Log Traces guide.
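
If you build your chains with a framework such as LangChain instead, Opik ships a dedicated integration. Here is a minimal sketch using the OpikTracer callback, assuming you already have a LangChain runnable named chain:

from opik.integrations.langchain import OpikTracer

# `chain` is assumed to be an existing LangChain runnable (e.g. prompt | llm).
opik_tracer = OpikTracer()

# Passing the tracer as a callback logs every step of the chain run to Opik.
result = chain.invoke(
    {"input": "Hello, how are you?"},
    config={"callbacks": [opik_tracer]},
)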

Next steps

Now that you have logged your first LLM calls and chains to Opik, why not check out:

  1. Opik’s evaluation metrics: Opik provides a suite of evaluation metrics (Hallucination, Answer Relevance, Context Recall, etc.) that you can use to score your LLM responses; see the sketch after this list.
  2. Opik Experiments: Opik allows you to automate the evaluation process of your LLM application so that you no longer need to manually review every LLM response.
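
For a taste of the metrics, here is a minimal sketch scoring a single response with the Hallucination metric, assuming an LLM judge provider (such as OpenAI) is already configured:

from opik.evaluation.metrics import Hallucination

# The metric uses an LLM judge under the hood, so a provider API key
# (e.g. OPENAI_API_KEY) is assumed to be set in your environment.
metric = Hallucination()
result = metric.score(
    input="What is the capital of France?",
    output="The capital of France is Paris.",
)
print(result.value, result.reason)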