Log traces

If you are just getting started with Opik, we recommend first checking out the Quickstart guide that will walk you through the process of logging your first LLM call.

LLM applications are complex systems that do more than just call an LLM API; they often involve retrieval, pre-processing, and post-processing steps. Tracing helps you understand the flow of your application and identify the specific points that may be causing issues.

Opik’s tracing functionality allows you to track not just all the LLM calls made by your application but also any of the other steps involved.

Opik supports agent observability through our TypeScript SDK, Python SDK, first-class OpenTelemetry support, and our REST API.

We recommend starting with one of our integrations to get up and running quickly; you can find a full list in the integrations overview page.

This guide does not cover how to track chat conversations; you can learn more about that in the Logging conversations guide.

Enable agent observability

1. Installing the SDK

Before adding observability to your application, you will first need to install and configure the Opik SDK.

```shell
npm install opik
```

You can then set the Opik environment variables in your .env file:

```shell
# Set OPIK_API_KEY and OPIK_WORKSPACE in your .env file
OPIK_API_KEY=your_api_key_here
OPIK_WORKSPACE=your_workspace_name

# Optional if you are using Opik Cloud:
OPIK_URL_OVERRIDE=https://www.comet.com/opik/api
```

Opik is open-source and can be hosted locally using Docker; please refer to the self-hosting guide to get started. Alternatively, you can use our hosted platform by creating an account on Comet.

2. Using an integration

Once you have installed and configured the Opik SDK, you can start using it to track your agent calls:

If you are using the OpenAI TypeScript SDK, you can integrate it as follows:

1. Install the Opik OpenAI integration package:

```shell
npm install opik-openai
```

2. Configure the Opik TypeScript SDK using environment variables:

```shell
export OPIK_API_KEY="<your-api-key>" # Only required if you are using the Opik Cloud version
export OPIK_URL_OVERRIDE="https://www.comet.com/opik/api" # Cloud version
# export OPIK_URL_OVERRIDE="http://localhost:5173/api" # Self-hosting
```

3. Wrap your OpenAI client with the trackOpenAI function:

```typescript
import OpenAI from "openai";
import { trackOpenAI } from "opik-openai";

// Initialize the original OpenAI client
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Wrap the client with Opik tracking
const trackedOpenAI = trackOpenAI(openai);

// Use the tracked client just like the original
const completion = await trackedOpenAI.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello, how can you help me today?" }],
});
console.log(completion.choices[0].message.content);

// Ensure all traces are sent before your app terminates
await trackedOpenAI.flush();
```

All OpenAI calls made using the trackedOpenAI client will now be logged to Opik.

Opik has more than 40 integrations covering the most popular frameworks and libraries. You can find a full list of integrations in the integrations overview page.

If you would like more control over the logging process, you can use the low-level SDKs to log your traces and spans.

3. Analyzing your agents

Now that you have observability enabled for your agents, you can start to review and analyze the agent calls in Opik. In the Opik UI, you can review each agent call, see the agent graph and review all the tool calls made by the agent.

As a next step, you can create an offline evaluation to evaluate your agent’s performance on a fixed set of samples.

Advanced usage

Using function decorators

Function decorators are a great way to add Opik logging to your existing application. When you add the @track decorator to a function, Opik will create a span for that function call and log the input parameters and function output for that function. If we detect that a decorated function is being called within another decorated function, we will create a nested span for the inner function.

While decorators are most popular in Python, we also support them in our TypeScript SDK:

TypeScript has supported decorators since version 5, but their use is still not widespread. The Opik TypeScript SDK supports decorators, but they are currently considered experimental.

```typescript
import { track } from "opik";

class TranslationService {
  @track({ type: "llm" })
  async generateText() {
    // Your LLM call here
    return "Generated text";
  }

  @track({ name: "translate" })
  async translate(text: string) {
    // Your translation logic here
    return `Translated: ${text}`;
  }

  @track({ name: "process", projectName: "translation-service" })
  async process() {
    const text = await this.generateText();
    return this.translate(text);
  }
}
```

You can also specify custom tags, metadata, and/or a thread_id for each trace and/or span logged for the decorated function. For more information, see Logging additional data using the opik_args parameter.

Using the low-level SDKs

If you need full control over the logging process, you can use the low-level SDKs to log your traces and spans:

You can use the Opik client to log your traces and spans:

```typescript
import { Opik } from "opik";

const client = new Opik({
  apiUrl: "https://www.comet.com/opik/api",
  apiKey: "your-api-key", // Only required if you are using Opik Cloud
  projectName: "your-project-name",
  workspaceName: "your-workspace-name", // Optional
});

// Log a trace with an LLM span
const trace = client.trace({
  name: `Trace`,
  input: {
    prompt: `Hello!`,
  },
  output: {
    response: `Hello, world!`,
  },
});

const span = trace.span({
  name: `Span`,
  type: "llm",
  input: {
    prompt: `Hello, world!`,
  },
  output: {
    response: `Hello, world!`,
  },
});

// Flush the client to send all traces and spans
await client.flush();
```

Make sure you define the environment variables for the Opik client in your .env file; you can find more information about the configuration here.

Logging traces/spans using context managers

If you are using the low-level SDKs, you can use the context managers to log traces and spans. Context managers provide a clean and Pythonic way to manage the lifecycle of traces and spans, ensuring proper cleanup and error handling.

Opik provides two main context managers for logging:

opik.start_as_current_trace()

Use this context manager to create and manage a trace. A trace represents the overall execution flow of your application.

For detailed API reference, see opik.start_as_current_trace.

```python
import opik

# Basic trace creation
with opik.start_as_current_trace("my-trace", project_name="my-project") as trace:
    # Your application logic here
    trace.input = {"user_query": "What is the weather?"}
    trace.output = {"response": "It's sunny today!"}
    trace.tags = ["weather", "api-call"]
    trace.metadata = {"model": "gpt-4", "temperature": 0.7}
```

Parameters:

  • name (str): The name of the trace
  • input (Dict[str, Any], optional): Input data for the trace
  • output (Dict[str, Any], optional): Output data for the trace
  • tags (List[str], optional): Tags to categorize the trace
  • metadata (Dict[str, Any], optional): Additional metadata
  • project_name (str, optional): Project name (falls back to active project context, then client configuration)
  • thread_id (str, optional): Thread identifier for multi-threaded applications
  • flush (bool, optional): Whether to flush data immediately (default: False)

opik.start_as_current_span()

Use this context manager to create and manage a span within a trace. Spans represent individual operations or function calls.

For detailed API reference, see opik.start_as_current_span.

```python
import opik

# Basic span creation
with opik.start_as_current_span("llm-call", type="llm", project_name="my-project") as span:
    # Your LLM call here
    span.input = {"prompt": "Explain quantum computing"}
    span.output = {"response": "Quantum computing is..."}
    span.model = "gpt-4"
    span.provider = "openai"
    span.usage = {
        "prompt_tokens": 10,
        "completion_tokens": 50,
        "total_tokens": 60,
    }
```

Parameters:

  • name (str): The name of the span
  • type (SpanType, optional): Type of span (“general”, “tool”, “llm”, “guardrail”, etc.)
  • input (Dict[str, Any], optional): Input data for the span
  • output (Dict[str, Any], optional): Output data for the span
  • tags (List[str], optional): Tags to categorize the span
  • metadata (Dict[str, Any], optional): Additional metadata
  • project_name (str, optional): Project name
  • model (str, optional): Model name for LLM spans
  • provider (str, optional): Provider name for LLM spans
  • flush (bool, optional): Whether to flush data immediately

Nested Context Managers

You can nest spans within traces to create hierarchical structures:

```python
import opik

with opik.start_as_current_trace("chatbot-conversation", project_name="chatbot") as trace:
    trace.input = {"user_message": "Help me with Python"}

    # First span: Process user input
    with opik.start_as_current_span("process-input", type="general") as span:
        span.input = {"raw_input": "Help me with Python"}
        span.output = {"processed_input": "Python programming help request"}

    # Second span: Generate response
    with opik.start_as_current_span("generate-response", type="llm") as span:
        span.input = {"prompt": "Python programming help request"}
        span.output = {"response": "I'd be happy to help with Python!"}
        span.model = "gpt-4"
        span.provider = "openai"

    trace.output = {"final_response": "I'd be happy to help with Python!"}
```

Error Handling

Context managers automatically handle errors and ensure proper cleanup:

```python
import opik

try:
    with opik.start_as_current_trace("risky-operation", project_name="my-project") as trace:
        trace.input = {"data": "important data"}
        # This will raise an exception
        result = 1 / 0
        trace.output = {"result": result}
except ZeroDivisionError:
    # The trace is still properly closed and logged
    print("Error occurred, but trace was logged")
```

Dynamic Parameter Updates

You can modify trace and span parameters both inside and outside the context manager:

```python
import opik

# Parameters set outside the context manager
with opik.start_as_current_trace(
    "dynamic-trace",
    input={"initial": "data"},
    tags=["initial-tag"],
    project_name="my-project",
) as trace:
    # Override parameters inside the context manager
    trace.input = {"updated": "data"}
    trace.tags = ["updated-tag", "new-tag"]
    trace.metadata = {"custom": "metadata"}

    # The final trace will use the updated values
```

Flush Control

Control when data is sent to Opik:

```python
import opik

# Immediate flush
with opik.start_as_current_trace("immediate-trace", flush=True) as trace:
    trace.input = {"data": "important"}
    # Data is sent immediately when exiting the context

# Deferred flush (default)
with opik.start_as_current_trace("deferred-trace", flush=False) as trace:
    trace.input = {"data": "less urgent"}
    # Data will be sent asynchronously later or when the program exits
```

Best Practices

  1. Use descriptive names: Choose clear, descriptive names for your traces and spans that explain what they represent.

  2. Set appropriate types: Use the correct span types (“llm”, “retrieval”, “general”, etc.) to help with filtering and analysis.

  3. Include relevant metadata: Add metadata that will be useful for debugging and analysis, such as model names, parameters, and custom metrics.

  4. Handle errors gracefully: Let the context manager handle cleanup, but ensure your application logic handles errors appropriately.

  5. Use project organization: Organize your traces by project to keep your Opik dashboard clean and organized.

  6. Consider performance: Use flush=True only when immediate data availability is required, as it can slow down your application by triggering a synchronous, immediate data upload.

Logging to a specific project

By default, traces are logged to a project named Default Project. You can change the project a trace is logged to in a couple of ways:

You can use the OPIK_PROJECT_NAME environment variable to set the project the trace should be logged to, or pass a parameter to the Opik client.
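For instance, setting the environment variable before starting your application routes all traces logged by that process to the given project (the project name below is illustrative):

```shell
# All traces logged by this process will go to "my_project"
export OPIK_PROJECT_NAME="my_project"
```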

```typescript
import { Opik } from "opik";

const client = new Opik({
  projectName: "my_project",
  // apiKey: "my_api_key",
  // apiUrl: "https://www.comet.com/opik/api",
  // workspaceName: "my_workspace",
});
```

Project name resolution (Python SDK)

The project name is determined differently depending on whether an active project context already exists.

When no project context is active

This applies to the top-level @track-decorated function call, the Opik() client, or a native integration (e.g., track_openai, OpikTracer) used outside any traced context. The project name is resolved in this order:

  1. Explicit project_name argument — passed directly to @track(project_name="..."), Opik(project_name="..."), OpikTracer(project_name="..."), or a client method like client.trace(project_name="...")
  2. Client configuration — from the OPIK_PROJECT_NAME environment variable or ~/.opik.config file
  3. Default — falls back to "Default Project" (a warning is logged once to remind you to configure a project name)

The first @track(project_name="...") or opik.project_context("...") call that runs establishes the active project context for all nested operations.

When a project context is active

Once a project context is established (by a parent @track(project_name="...") or opik.project_context("...")), all nested operations use the context project name. This includes:

  • Nested @track-decorated functions — even if they pass a different project_name, the outer context wins (a warning is logged)
  • Native integrations (e.g., OpikTracer, track_openai) — if initialized inside an active context, the context project overrides the integration’s project_name argument (a warning is logged)
  • Opik() client methods — if a method like client.trace(project_name="...") is called with an explicit project_name, the explicit argument wins; if project_name is omitted, the context project is used

This ensures that all traces and spans within a single execution flow are logged to the same project.

@track context propagation

When @track(project_name="...") is used on the top-level function, it sets the project context for the entire call tree:

```python
from opik import track

@track(project_name="my-agent")
def agent(query):
    context = retrieve(query)
    return generate(context)

@track
def retrieve(query):
    # Inherits "my-agent" from the parent context
    ...

@track
def generate(context):
    # Also inherits "my-agent" from the parent context
    ...
```

If a nested function specifies a different project_name, it is ignored and the outer project is preserved:

```python
@track(project_name="my-agent")
def agent(query):
    helper(query)  # Still logs to "my-agent", NOT "other-project"

@track(project_name="other-project")
def helper(query):
    # Warning is logged: outer project "my-agent" will be used
    ...
```

opik.project_context()

The opik.project_context() context manager sets the project name for all Opik operations within a block — @track-decorated functions, native integrations, and Opik() client calls (when project_name is not passed explicitly):

```python
import opik

with opik.project_context("customer-support"):
    # @track-decorated functions and native integrations
    # all use "customer-support" as the project name
    my_agent(query)
```

Nesting rules are the same: the first project_context or @track(project_name=...) to run owns the context. Inner calls with a different project name are ignored (a warning is logged).

When a script combines @track tracing with other Opik API calls — such as evaluate(), get_or_create_dataset(), or Prompt() — traces and API objects can land in different projects if the project name is not set consistently. Make sure the value passed to opik.configure(project_name=...) (which controls where @track traces go) matches the project_name argument passed explicitly to each API call:

```python
import opik
from opik.evaluation import evaluate

opik.configure(project_name="my-project")
client = opik.Opik()

dataset = client.get_or_create_dataset(name="my-dataset", project_name="my-project")

evaluation = evaluate(
    dataset=dataset,
    task=evaluation_task,
    project_name="my-project",  # must match opik.configure value above
    ...
)
```

Flushing traces and spans

This process is optional and is only needed if you are running a short-lived script or if you are debugging why traces and spans are not being logged to Opik.

As the TypeScript SDK has been designed to be used in production environments, we batch traces and spans and send them to Opik in the background.

If you are running a short-lived script, you can flush the traces to Opik by using the flush method of the Opik client.

```typescript
import { Opik } from "opik";

const client = new Opik();
await client.flush();
```

Disabling the logging process

This is currently not supported in the TypeScript SDK.

Next steps

Once you have observability set up for your agent, you can go one step further and create an offline evaluation to measure your agent's performance on a fixed set of samples.