Observability for Vercel AI Gateway with Opik
Vercel AI Gateway provides a unified interface to access multiple AI providers with edge-optimized performance, built-in caching, and comprehensive analytics. It’s designed to work seamlessly with Vercel’s edge infrastructure.
Gateway Overview
Vercel AI Gateway provides enterprise-grade features for managing AI API access, including:
- Unified API: Access hundreds of models across providers via Vercel’s AI Gateway with minimal code changes
- Transparent Pricing: No markup on tokens with “Bring Your Own Key” support
- Automatic Failover: High reliability with requests routed to alternate providers if primary is unavailable
- Low Latency: Sub-20ms routing latency through Vercel’s global edge network
- Intelligent Caching: Reduce costs and improve response times with smart caching
Account Setup
Comet provides a hosted version of the Opik platform. Simply create an account and grab your API Key.
You can also run the Opik platform locally; see the installation guide for more information.
Getting Started
Installation
First, ensure you have both the `opik` and `openai` packages installed:
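```bash
pip install opik openai
```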
Configuring Opik
Configure the Opik Python SDK for your deployment type. See the Python SDK Configuration guide for detailed instructions on:
- CLI configuration: `opik configure`
- Code configuration: `opik.configure()` (a minimal sketch follows this list)
- Self-hosted vs Cloud vs Enterprise setup
- Configuration files and environment variables
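For example, a minimal programmatic setup might look like the following sketch; the `api_key` and `workspace` values are placeholders, and you can omit them to be prompted interactively:

```python
import opik

# Minimal sketch: configure the SDK to send traces to Opik Cloud.
# Replace the placeholder values with your own credentials, or run
# `opik configure` from the CLI instead.
opik.configure(
    api_key="YOUR_OPIK_API_KEY",   # placeholder
    workspace="YOUR_WORKSPACE",    # placeholder
)
```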
Configuring Vercel AI Gateway
You’ll need to set up the Vercel AI Gateway in your Vercel project. Follow the Vercel AI Gateway setup guide to configure your gateway.
Set your API keys as environment variables:
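For example (a sketch; `VERCEL_AI_GATEWAY_API_KEY` is an assumed placeholder name, so use whatever variable name your code reads):

```bash
export VERCEL_AI_GATEWAY_API_KEY="your-vercel-ai-gateway-api-key"
```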
Or set them programmatically:
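```python
import os

# Assumed placeholder variable name, matching the shell example above.
os.environ["VERCEL_AI_GATEWAY_API_KEY"] = "your-vercel-ai-gateway-api-key"
```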
Logging LLM Calls
Since Vercel AI Gateway provides an OpenAI-compatible API, we can use the Opik OpenAI SDK wrapper to automatically log Vercel AI Gateway calls as generations in Opik.
Simple LLM Call
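The following is a minimal sketch. It assumes the gateway exposes an OpenAI-compatible endpoint at `https://ai-gateway.vercel.sh/v1` and uses a placeholder model slug (`openai/gpt-4o`); substitute the base URL and model your gateway is configured for.

```python
import os

from openai import OpenAI
from opik.integrations.openai import track_openai

# Point the OpenAI client at the gateway's OpenAI-compatible endpoint.
# The base URL, model slug, and environment variable name below are
# assumed placeholders — use the values from your own gateway setup.
client = OpenAI(
    base_url="https://ai-gateway.vercel.sh/v1",
    api_key=os.environ["VERCEL_AI_GATEWAY_API_KEY"],
)

# Wrap the client so every call is logged to Opik as a generation.
client = track_openai(client)

response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[
        {"role": "user", "content": "Hello, how are you?"},
    ],
)

print(response.choices[0].message.content)
```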
Advanced Usage
Using with the @track decorator
If you have multiple steps in your LLM pipeline, you can use the @track decorator to log the traces for each step. If Vercel AI Gateway is called within one of these steps, the LLM call will be associated with the corresponding step:
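Here is a sketch of a two-step pipeline; as in the previous example, the base URL, model slug, and environment variable name are assumed placeholders.

```python
import os

from openai import OpenAI
from opik import track
from opik.integrations.openai import track_openai

# Same assumptions as above: base URL, model slug, and env var are placeholders.
client = track_openai(
    OpenAI(
        base_url="https://ai-gateway.vercel.sh/v1",
        api_key=os.environ["VERCEL_AI_GATEWAY_API_KEY"],
    )
)


@track
def generate_topic() -> str:
    response = client.chat.completions.create(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "Suggest a short story topic."}],
    )
    return response.choices[0].message.content


@track
def generate_story(topic: str) -> str:
    response = client.chat.completions.create(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": f"Write a short story about: {topic}"}],
    )
    return response.choices[0].message.content


@track
def generate_opik_story() -> str:
    # Each decorated function is logged as a span; the LLM calls made
    # through the wrapped client are nested under their step.
    topic = generate_topic()
    return generate_story(topic)


generate_opik_story()
```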
The trace will show nested LLM calls with hierarchical spans.
Further Improvements
If you have suggestions for improving the Vercel AI Gateway integration, please let us know by opening an issue on GitHub.