Observability for Pipecat with Opik
Pipecat is an open-source Python framework for building real-time voice and multimodal conversational AI agents. Developed by Daily, it enables fully programmable AI voice agents, supports multimodal interactions, and gives developers a flexible foundation for building conversational AI systems.
This guide explains how to integrate Opik with Pipecat for observability and tracing of real-time voice agents, enabling you to monitor, debug, and evaluate your Pipecat agents in the Opik dashboard.
Account Setup
Comet provides a hosted version of the Opik platform; simply create an account and grab your API key.
You can also run the Opik platform locally; see the installation guide for more information.
Key Features
- Hierarchical Tracing: Track entire conversations, turns, and service calls
- Service Tracing: Detailed spans for TTS, STT, and LLM services with rich context
- TTFB Metrics: Capture Time To First Byte metrics for latency analysis
- Usage Statistics: Track character counts for TTS and token usage for LLMs
- Real-time Monitoring: Monitor voice agent performance and conversation quality
Getting Started
Installation
To use Pipecat with Opik, you'll need to have both the `pipecat-ai` and `opik` packages installed:
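For example, using pip. The OpenTelemetry exporter packages are used by the integration described below; any Pipecat service extras (e.g. `pipecat-ai[openai,deepgram]`) depend on which providers you use:

```bash
pip install pipecat-ai opik

# OpenTelemetry SDK and OTLP/HTTP exporter, used to ship traces to Opik
pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
```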
Configuring Opik
Configure the Opik Python SDK for your deployment type. See the Python SDK Configuration guide for detailed instructions on:
- CLI configuration: `opik configure`
- Code configuration: `opik.configure()` (see the sketch after this list)
- Self-hosted vs Cloud vs Enterprise setup
- Configuration files and environment variables
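As a minimal sketch of the code-based option (the API key, workspace, and `use_local` values below are placeholders for your own deployment):

```python
import opik

# Opik Cloud: authenticate with your API key and workspace
opik.configure(api_key="YOUR_OPIK_API_KEY", workspace="YOUR_WORKSPACE")

# Self-hosted instance instead:
# opik.configure(use_local=True)
```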
Configuring Pipecat
To configure Pipecat, you will need API keys for the services you're using (OpenAI, Deepgram, etc.). You can set them as environment variables:
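For example (variable names follow the usual convention for OpenAI, Deepgram, and Cartesia; use whichever ones match the services in your pipeline):

```bash
export OPENAI_API_KEY="sk-..."
export DEEPGRAM_API_KEY="..."
export CARTESIA_API_KEY="..."
```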
Or set them programmatically:
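For instance, a sketch that sets the same keys from Python before the services are constructed (values are placeholders):

```python
import os

os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["DEEPGRAM_API_KEY"] = "..."
os.environ["CARTESIA_API_KEY"] = "..."
```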
OpenTelemetry Integration
Pipecat supports OpenTelemetry tracing, and Opik exposes an OpenTelemetry endpoint, so you can send Pipecat's traces directly to Opik by following the steps below.
Environment Configuration
Create a `.env` file with your Opik API keys to enable tracing:
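A sketch of the Opik Cloud setup is shown below; double-check the endpoint and header names against the Opik OpenTelemetry documentation for your deployment, as they may change:

```bash
# .env file for Opik Cloud (endpoint and headers per Opik's OpenTelemetry docs)
OTEL_EXPORTER_OTLP_ENDPOINT=https://www.comet.com/opik/api/v1/private/otel
OTEL_EXPORTER_OTLP_HEADERS=Authorization=<your-opik-api-key>,Comet-Workspace=<your-workspace>,projectName=<your-project>

# Optional: also print spans to the console while debugging
# OTEL_CONSOLE_EXPORT=true
```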
For self-hosted Opik instances:
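For a local installation, the endpoint points at your own instance instead (adjust host, port, and project name to your deployment):

```bash
# .env file for self-hosted Opik
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:5173/api/v1/private/otel
OTEL_EXPORTER_OTLP_HEADERS=projectName=<your-project>
```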
Add OpenTelemetry to your Pipeline Task
Enable tracing in your Pipecat application:
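The sketch below follows Pipecat's OpenTelemetry tracing support: an OTLP exporter is created (it reads the endpoint and headers from the `.env` above) and tracing is enabled on the pipeline task. Exact import paths and parameter names can differ between Pipecat versions, so treat this as an outline and check the Pipecat tracing documentation for your release:

```python
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

from pipecat.pipeline.task import PipelineParams, PipelineTask
from pipecat.utils.tracing.setup import setup_tracing

# The exporter picks up OTEL_EXPORTER_OTLP_ENDPOINT / OTEL_EXPORTER_OTLP_HEADERS
# from the environment, so no Opik-specific code is needed here.
exporter = OTLPSpanExporter()

setup_tracing(
    service_name="pipecat-voice-agent",
    exporter=exporter,
)

# `pipeline` is the Pipecat Pipeline you have already assembled from your
# transport, STT, LLM, and TTS services.
task = PipelineTask(
    pipeline,
    params=PipelineParams(enable_metrics=True),  # needed for TTFB and usage metrics
    enable_tracing=True,                         # emit OpenTelemetry spans
    conversation_id="customer-123",              # optional: group turns per conversation
)
```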
Trace Structure
Traces are organized hierarchically in Opik:
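Conceptually, the hierarchy looks like the illustration below; the actual span names come from the services in your pipeline.

```
conversation (one per session)
├── turn-1
│   ├── stt span   (transcription, language, model)
│   ├── llm span   (messages, token usage, TTFB)
│   └── tts span   (voice ID, character count, TTFB)
└── turn-2
    └── ...
```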
This organization lets you follow interactions both across conversations and from turn to turn within each conversation.
Using the @track decorator
Use the `@track` decorator to create comprehensive traces when working with your Pipecat voice agents:
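A minimal sketch, assuming your transport, services, pipeline, and `task` are built as in the sections above; wrapping the session and any helper functions with `@track` produces one Opik trace with nested spans:

```python
from opik import track
from pipecat.pipeline.runner import PipelineRunner

@track  # one Opik trace around the whole voice session
async def run_voice_session(task) -> None:
    runner = PipelineRunner()
    await runner.run(task)

@track  # helpers called around the agent become nested spans
def summarize_transcript(transcript: str) -> str:
    return transcript[:200]
```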
Understanding the Traces
- Conversation Spans: The top-level span representing an entire conversation
- Turn Spans: Child spans of conversations that represent each turn in the dialog
- Service Spans: Detailed service operations nested under turns
- Service Attributes: Each service includes rich context about its operation:
  - TTS: Voice ID, character count, service type
  - STT: Transcription text, language, model
  - LLM: Messages, tokens used, model, service configuration
- Metrics: Performance data like `metrics.ttfb_ms` and processing durations
Viewing Results
Once your Pipecat voice agents are traced with Opik, you can view them in the Opik UI. Each conversation will contain:
- Complete conversation flow with turns and service calls
- Detailed service metrics and performance data
- Token usage and cost tracking for LLM services
- Audio processing metrics for TTS and STT services
- Real-time conversation monitoring
Feedback Scores and Evaluation
Once your Pipecat voice agents are traced with Opik, you can evaluate your voice AI applications using Opik’s evaluation framework:
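A minimal sketch, assuming a hypothetical `my_agent_reply` helper that replays your agent's LLM step on text input; the dataset name, fields, and chosen metric are illustrative, so see Opik's evaluation documentation for the full set of options:

```python
from opik import Opik
from opik.evaluation import evaluate
from opik.evaluation.metrics import Hallucination

client = Opik()

# A tiny dataset of user utterances to test the agent against
dataset = client.get_or_create_dataset(name="voice-agent-smoke-tests")
dataset.insert([
    {"input": "What are your opening hours?"},
    {"input": "Can I change my booking?"},
])

def evaluation_task(item: dict) -> dict:
    output = my_agent_reply(item["input"])  # hypothetical: your agent's text reply
    return {"input": item["input"], "output": output}

evaluate(
    dataset=dataset,
    task=evaluation_task,
    scoring_metrics=[Hallucination()],
)
```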
Advanced Configuration
Custom Service Tracing
You can add custom tracing to your Pipecat services:
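For example, you can open your own OpenTelemetry spans around custom processing; they are exported through the same exporter configured above and appear alongside Pipecat's built-in service spans. The span and attribute names below are illustrative, and `fetch_account` is a hypothetical helper:

```python
from opentelemetry import trace

tracer = trace.get_tracer("my-pipecat-app")

async def lookup_account(user_id: str) -> dict:
    # Wrap custom work in its own span so it shows up next to the service spans.
    with tracer.start_as_current_span("custom.account_lookup") as span:
        span.set_attribute("user.id", user_id)
        account = await fetch_account(user_id)  # hypothetical helper
        span.set_attribute("account.tier", account.get("tier", "unknown"))
        return account
```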
Conversation Context Tracking
Track conversation context across turns:
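One way to do this with the Opik SDK is to tag the current trace with a stable session identifier and per-turn metadata, as in the sketch below (values are placeholders; check the SDK reference for the trace-update helpers in your version). Pairing this with a fixed `conversation_id` on the PipelineTask keeps turns grouped on the Pipecat side as well:

```python
from opik import track, opik_context

@track
def record_turn(session_id: str, user_text: str, agent_text: str) -> None:
    # Attach the session id and turn contents to the current trace so turns
    # from the same conversation can be grouped and filtered in Opik.
    opik_context.update_current_trace(
        thread_id=session_id,
        metadata={"user_text": user_text, "agent_text": agent_text},
    )
```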
Environment Variables
Make sure to set the following environment variables:
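As a summary: the Opik SDK variables apply when you use `@track` or the evaluation framework, the OTLP variables are the ones from the Environment Configuration section above, and the service keys depend on your providers.

```bash
# Opik SDK
OPIK_API_KEY=<your-opik-api-key>
OPIK_WORKSPACE=<your-workspace>
OPIK_PROJECT_NAME=<your-project>

# OpenTelemetry export from Pipecat
OTEL_EXPORTER_OTLP_ENDPOINT=...
OTEL_EXPORTER_OTLP_HEADERS=...

# Service API keys (examples)
OPENAI_API_KEY=...
DEEPGRAM_API_KEY=...
CARTESIA_API_KEY=...
```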
Troubleshooting
Common Issues
- No Traces in Opik: Ensure that your credentials are correct and follow the troubleshooting guide
- Missing Metrics: Check that `enable_metrics=True` is set in PipelineParams
- Connection Errors: Verify network connectivity to your Opik instance
- Exporter Issues: Try the console exporter (`OTEL_CONSOLE_EXPORT=true`) to verify tracing works
- Service Not Found: Ensure all required services (STT, LLM, TTS) are properly configured
Getting Help
- Check the Pipecat Documentation for framework-specific issues
- Review the OpenTelemetry Python Documentation for tracing setup
- Contact Pipecat support for framework-specific problems
- Check Opik documentation for tracing and evaluation features
Next Steps
Once you have Pipecat integrated with Opik, you can:
- Evaluate your voice AI applications using Opik’s evaluation framework
- Create datasets to test and improve your voice agents
- Set up feedback collection to gather human evaluations
- Monitor performance across different voice models and configurations