Observability for Pipecat with Opik

Pipecat is an open-source Python framework for building real-time voice and multimodal conversational AI agents. Developed by Daily, it enables fully programmable AI voice agents and supports multimodal interactions, positioning itself as a flexible solution for developers looking to build conversational AI systems.

This guide explains how to integrate Opik with Pipecat for observability and tracing of real-time voice agents, enabling you to monitor, debug, and optimize your Pipecat agents in the Opik dashboard.

Account Setup

Comet provides a hosted version of the Opik platform. Simply create an account and grab your API key.

You can also run the Opik platform locally; see the installation guide for more information.

Getting started

To use the Pipecat integration with Opik, you will need to have Pipecat and the required OpenTelemetry packages installed:

$ pip install "pipecat-ai[daily,webrtc,silero,cartesia,deepgram,openai,tracing]" opentelemetry-exporter-otlp-proto-http websockets
$ export OTEL_EXPORTER_OTLP_ENDPOINT=https://www.comet.com/opik/api/v1/private/otel
$ export OTEL_EXPORTER_OTLP_HEADERS='Authorization=<your-api-key>,Comet-Workspace=default'
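
If you prefer to configure the exporter in code rather than through environment variables, the HTTP exporter accepts the endpoint and headers directly. A minimal sketch, assuming the same endpoint and key as above; note that an endpoint passed programmatically is used as-is, so it must include the /v1/traces suffix that the environment-variable form appends automatically:

from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Equivalent to the environment variables above; the endpoint is taken
# literally here, so the /v1/traces path is spelled out.
exporter = OTLPSpanExporter(
    endpoint="https://www.comet.com/opik/api/v1/private/otel/v1/traces",
    headers={
        "Authorization": "<your-api-key>",
        "Comet-Workspace": "default",
    },
)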

Using Opik with Pipecat

For the basic example, you’ll need an OpenAI API key. You can set it as an environment variable:

$ export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"

Or set it programmatically:

import os
import getpass

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")

Enable tracing in your Pipecat application by setting up OpenTelemetry instrumentation and configuring your pipeline task. For complete details on Pipecat’s OpenTelemetry implementation, see the official Pipecat OpenTelemetry documentation:

# Initialize OpenTelemetry with the HTTP exporter
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from pipecat.pipeline.task import PipelineParams, PipelineTask
from pipecat.utils.tracing.setup import setup_tracing

# Configured automatically from the OTEL_* environment variables set above
exporter = OTLPSpanExporter()

setup_tracing(
    service_name="pipecat-demo",
    exporter=exporter,
)

# Enable tracing in your PipelineTask
task = PipelineTask(
    pipeline,
    params=PipelineParams(
        allow_interruptions=True,
        enable_metrics=True,  # Required for some service metrics
    ),
    enable_tracing=True,  # Enables both turn and conversation tracing
    conversation_id="customer-123",  # Optional - will auto-generate if not provided
)
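
With tracing configured, run the task as you normally would; turn and service spans are exported while the conversation runs. A minimal sketch, assuming pipeline was already assembled from your transport and services:

import asyncio

from pipecat.pipeline.runner import PipelineRunner

async def main():
    runner = PipelineRunner()
    # Spans for each turn (stt, llm, tts) are emitted as the task runs
    await runner.run(task)

asyncio.run(main())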

Trace Structure

Pipecat organizes traces hierarchically following the natural structure of conversations, as documented in their OpenTelemetry guide:

Conversation (conversation_id)
├── turn
│   ├── stt (Speech-to-Text)
│   ├── llm (Language Model)
│   └── tts (Text-to-Speech)
└── turn
    ├── stt
    ├── llm
    └── tts

This structure allows you to track the complete lifecycle of conversations and measure latency for individual turns and services.
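
For example, passing your own stable identifier (here a hypothetical session_id from your application's session management) when creating the PipelineTask tags every turn with the same conversation ID, which makes related sessions easy to find and compare:

# Hypothetical: session_id comes from your own session management
session_id = "customer-123"

task = PipelineTask(
    pipeline,
    params=PipelineParams(enable_metrics=True),
    enable_tracing=True,
    conversation_id=session_id,  # every turn is tagged with this ID
)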

Understanding the Traces

Based on Pipecat’s OpenTelemetry implementation, the traces include:

  • Conversation Spans: Top-level spans with conversation ID and type
  • Turn Spans: Individual conversation turns with turn number, duration, and interruption status
  • Service Spans: Detailed service operations with rich attributes:
    • LLM Services: Model, input/output tokens, response text, tool configurations, TTFB metrics
    • TTS Services: Voice ID, character count, synthesized text, TTFB metrics
    • STT Services: Transcribed text, language detection, voice activity detection
  • Performance Metrics: Time to first byte (TTFB) and processing durations for each service
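
Because these are standard OpenTelemetry spans, you can also record your own application-level spans and attributes alongside the ones Pipecat emits, using the regular OpenTelemetry API. A sketch; the instrumentation name and attribute below are illustrative:

from opentelemetry import trace

tracer = trace.get_tracer("my-voice-app")  # illustrative instrumentation name

with tracer.start_as_current_span("session-metadata") as span:
    span.set_attribute("customer.tier", "pro")  # example custom attribute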

Results viewing

Once your Pipecat application is sending traces to Opik, you can view the OpenTelemetry traces in the Opik UI. You will see:

  • Hierarchical conversation and turn structure as sent by Pipecat
  • Service-level spans with the attributes Pipecat includes (LLM tokens, TTS character counts, STT transcripts)
  • Performance metrics like processing durations and time-to-first-byte where provided by Pipecat
  • Standard OpenTelemetry trace visualization and search capabilities

Further improvements

If you would like to see us improve this integration, simply open a new feature request on GitHub.