Observability for Helicone with Opik
Helicone is an open-source LLM observability platform that provides monitoring, logging, and analytics for LLM applications. It acts as a proxy layer between your application and LLM providers, offering features like request logging, caching, rate limiting, and cost tracking.
Gateway Overview
Helicone provides a comprehensive observability layer for LLM applications with features including:
- Unified API: OpenAI-compatible API with access to 100+ models through Helicone’s model registry
- Intelligent Routing: Automatic failover and fallbacks across providers to ensure reliability
- No Rate Limits: Skip provider tier restrictions with zero markup on credits
- Request Logging: Automatic logging of all LLM requests and responses
- Caching: Reduce costs and improve latency with semantic caching
- Cost Tracking: Monitor spending across different models and providers with unified observability
- Multi-Provider Support: Works with OpenAI, Anthropic, Azure OpenAI, and more
Account Setup
Comet provides a hosted version of the Opik platform. Simply create an account and grab your API Key.
You can also run the Opik platform locally; see the installation guide for more information.
Getting Started
Installation
First, ensure you have both opik and openai packages installed:
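```bash
pip install --upgrade opik openai
```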
Configuring Opik
Configure the Opik Python SDK for your deployment type. See the Python SDK Configuration guide for detailed instructions on:
- CLI configuration: opik configure
- Code configuration: opik.configure()
- Self-hosted vs Cloud vs Enterprise setup
- Configuration files and environment variables
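For example, a minimal code configuration for Opik Cloud looks like the following (pass use_local=True instead if you are running a self-hosted deployment):

```python
import opik

# Prompts for your Opik API key and workspace on first run;
# use use_local=True if you are running Opik locally.
opik.configure(use_local=False)
```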
Configuring Helicone
You’ll need a Helicone API key. You can get one by signing up at Helicone.
Set your API key as an environment variable:
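```bash
# HELICONE_API_KEY is the variable name assumed by the examples in this guide
export HELICONE_API_KEY=<your-helicone-api-key>
```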
Or set it programmatically:
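```python
import os

# Placeholder value shown for illustration; load the real key from a secure source.
os.environ["HELICONE_API_KEY"] = "<your-helicone-api-key>"
```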
Logging LLM Calls
Since Helicone provides an OpenAI-compatible proxy, we can use the Opik OpenAI SDK wrapper to automatically log Helicone calls as generations in Opik.
Simple LLM Call
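The sketch below assumes Helicone's OpenAI-compatible gateway endpoint; the base_url and model name are illustrative, so substitute the values shown in your Helicone dashboard. Wrapping the client with track_openai is all that is needed for Opik to log each call:

```python
import os

from openai import OpenAI
from opik.integrations.openai import track_openai

# Point the OpenAI SDK at Helicone's OpenAI-compatible gateway and authenticate
# with your Helicone API key (the base_url here is illustrative).
client = OpenAI(
    base_url="https://ai-gateway.helicone.ai/v1",
    api_key=os.environ["HELICONE_API_KEY"],
)

# Wrap the client so every request and response is logged to Opik as a generation.
client = track_openai(client)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any model available through Helicone's model registry
    messages=[{"role": "user", "content": "Write a haiku about observability."}],
)
print(response.choices[0].message.content)
```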
Advanced Usage
Using with the @track decorator
If you have multiple steps in your LLM pipeline, you can use the @track decorator to log the traces for each step. If Helicone is called within one of these steps, the LLM call will be associated with the corresponding step:
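A sketch of a two-step pipeline is shown below; the gateway base_url, the model name, and the helper functions (generate_topic, generate_story, generate_opik_story) are illustrative:

```python
import os

from openai import OpenAI
from opik import track
from opik.integrations.openai import track_openai

# Wrapped client pointed at Helicone's OpenAI-compatible gateway (illustrative base_url).
client = track_openai(
    OpenAI(
        base_url="https://ai-gateway.helicone.ai/v1",
        api_key=os.environ["HELICONE_API_KEY"],
    )
)


@track
def generate_topic() -> str:
    # This LLM call is logged as a generation nested under the decorated step.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Suggest a short story topic."}],
    )
    return response.choices[0].message.content


@track
def generate_story(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


@track
def generate_opik_story() -> str:
    # Each decorated function becomes a span; the Helicone calls nest beneath them.
    topic = generate_topic()
    return generate_story(f"Write a two-sentence story about: {topic}")


generate_opik_story()
```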
The trace will show nested LLM calls with hierarchical spans.
Further Improvements
If you have suggestions for improving the Helicone integration, please let us know by opening an issue on GitHub.