Observability for Anannas AI with Opik
Anannas AI is a unified inference gateway providing access to 500+ models (OpenAI, Anthropic, Mistral, Gemini, DeepSeek, and more) through an OpenAI-compatible API.
Gateway Overview
Anannas AI provides a unified interface for accessing hundreds of LLM models through a single OpenAI-compatible API, making it easy to switch between providers and models without changing your code.
Key Features:
- 500+ Models: Single API for accessing models from OpenAI, Anthropic, Mistral, Gemini, DeepSeek, and more
- OpenAI-Compatible API: Drop-in replacement for OpenAI SDK with standard request/response format
- Provider Health Monitoring: Automatic fallback routing in case of failures or degraded performance
- Low Overhead: approximately 0.48 ms of added latency and a 5% pricing markup
- BYOK Support: Bring Your Own Key for enterprise deployments
Account Setup
Comet provides a hosted version of the Opik platform. Simply create an account and grab your API Key.
You can also run the Opik platform locally, see the installation guide for more information.
Getting Started
Installation
First, ensure you have both the opik and openai packages installed:
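Both packages are available on PyPI and can be installed with pip:

```shell
pip install opik openai
```
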
Configuring Opik
Configure the Opik Python SDK for your deployment type. See the Python SDK Configuration guide for detailed instructions on:
- CLI configuration: opik configure
- Code configuration: opik.configure()
- Self-hosted vs Cloud vs Enterprise setup
- Configuration files and environment variables
- Configuration files and environment variables
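As a minimal sketch, the code-based configuration looks like this (assuming an Opik Cloud account; for a self-hosted deployment, pass use_local=True instead):

```python
import opik

# Configure the Opik SDK for Opik Cloud.
# If no API key is set yet, this will prompt for one interactively.
opik.configure(use_local=False)
```
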
Configuring Anannas AI
You’ll need an Anannas AI API key. You can get one from Anannas AI.
Set it as an environment variable:
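For example (ANANNAS_API_KEY is the variable name assumed throughout this guide; adjust it if your setup differs):

```shell
export ANANNAS_API_KEY="your-api-key-here"
```
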
Or set it programmatically:
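A minimal sketch of the programmatic approach, again assuming the ANANNAS_API_KEY variable name:

```python
import os

# Set the Anannas AI API key for the current process.
# ANANNAS_API_KEY is an assumed variable name; match it to your setup.
os.environ["ANANNAS_API_KEY"] = "your-api-key-here"
```
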
Logging LLM Calls
Since Anannas AI provides an OpenAI-compatible API, we can use the Opik OpenAI SDK wrapper to automatically log Anannas AI calls as generations in Opik.
Simple LLM Call
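A minimal sketch of a tracked call, assuming the ANANNAS_API_KEY environment variable is set; the base URL and model identifier below are assumptions, so check the Anannas AI docs for the exact values:

```python
import os

from openai import OpenAI
from opik.integrations.openai import track_openai

# Point the OpenAI client at the Anannas AI gateway.
# The base_url here is an assumption; verify it against the Anannas AI docs.
client = OpenAI(
    base_url="https://api.anannas.ai/v1",
    api_key=os.environ["ANANNAS_API_KEY"],
)

# Wrap the client so every call is automatically logged to Opik as a generation.
tracked_client = track_openai(client)

response = tracked_client.chat.completions.create(
    model="openai/gpt-4o-mini",  # hypothetical model identifier
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```

Because the wrapped client keeps the standard OpenAI interface, no other code changes are needed.
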
Advanced Usage
Using with the @track decorator
If you have multiple steps in your LLM pipeline, you can use the @track decorator to log the traces for each step. If Anannas AI is called within one of these steps, the LLM call will be associated with that corresponding step:
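A sketch of a two-step pipeline, under the same assumptions as above (ANANNAS_API_KEY set, and an assumed base URL and model identifier):

```python
import os

from openai import OpenAI
from opik import track
from opik.integrations.openai import track_openai

# Base URL is an assumption; verify it against the Anannas AI docs.
client = track_openai(
    OpenAI(
        base_url="https://api.anannas.ai/v1",
        api_key=os.environ["ANANNAS_API_KEY"],
    )
)

MODEL = "openai/gpt-4o-mini"  # hypothetical model identifier


@track
def generate_topic() -> str:
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": "Suggest a short story topic."}],
    )
    return response.choices[0].message.content


@track
def generate_story(topic: str) -> str:
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": f"Write a short story about: {topic}"}],
    )
    return response.choices[0].message.content


@track
def generate_opik_story() -> str:
    # Both nested calls, and their LLM generations, appear as child spans
    # of this function's trace in Opik.
    return generate_story(generate_topic())


generate_opik_story()
```
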
The trace will show nested LLM calls with hierarchical spans.
Further Improvements
If you have suggestions for improving the Anannas AI integration, please let us know by opening an issue on GitHub.