Observability for Portkey with Opik

Portkey is an enterprise-grade AI gateway that provides a unified interface to access 200+ LLMs with advanced features like smart routing, automatic fallbacks, load balancing, and comprehensive observability.

Gateway Overview

Portkey provides enterprise-grade features for managing LLM API access, including:

  • 250+ AI Models: Single consistent API to connect with models from OpenAI, Anthropic, Google, Azure, AWS, and more
  • Multi-Modal Support: Language, vision, audio, and image models
  • Advanced Routing: Fallbacks, load balancing, conditional routing based on metadata, and provider weights (see the config sketch after this list)
  • Smart Caching: Simple and semantic caching to reduce latency and cost
  • Security & Governance: Guardrails, secure key management (virtual keys), role-based access control
  • Compliance: SOC2, HIPAA, GDPR compliant with data privacy controls
  • Observability: Request/response logging, latency tracking, cost metrics, error rates, and throughput monitoring
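
Routing behavior such as fallbacks is typically declared as a gateway config that lists the targets to try in order. The snippet below is a minimal sketch, assuming Portkey's fallback config format and hypothetical virtual key IDs; such a config can be attached to requests through the Portkey SDK's header helpers (for example, the config argument of createHeaders).

import os
from portkey_ai import createHeaders

# Hypothetical gateway config: try OpenAI first and fall back to Anthropic
# if the primary target fails. The virtual key IDs are placeholders.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "openai-virtual-key"},
        {"virtual_key": "anthropic-virtual-key"},
    ],
}

# The config is passed alongside the Portkey API key when building headers
headers = createHeaders(
    api_key=os.environ["PORTKEY_API_KEY"],
    config=fallback_config,
)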

Account Setup

Comet provides a hosted version of the Opik platform. Simply create an account and grab your API Key.

You can also run the Opik platform locally; see the installation guide for more information.

Getting Started

Installation

First, ensure you have the opik, openai, and portkey-ai packages installed:

$ pip install opik openai portkey-ai

Configuring Opik

Configure the Opik Python SDK for your deployment type. See the Python SDK Configuration guide for detailed instructions on:

  • CLI configuration: opik configure
  • Code configuration: opik.configure() (sketched below)
  • Self-hosted vs Cloud vs Enterprise setup
  • Configuration files and environment variables
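
For example, the code-based option might look like the following. This is a minimal sketch for Opik Cloud; the api_key and workspace values are placeholders, and self-hosted deployments use a local/custom URL instead, as described in the configuration guide.

import opik

# Minimal sketch for Opik Cloud; replace the placeholders with your own values.
# For a self-hosted deployment, see the Python SDK Configuration guide.
opik.configure(
    api_key="YOUR_OPIK_API_KEY",
    workspace="YOUR_WORKSPACE",
)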

Configuring Portkey

You’ll need a Portkey API key and virtual keys for your LLM providers. You can get these from the Portkey dashboard.

Set your API keys as environment variables:

$ export PORTKEY_API_KEY="YOUR_PORTKEY_API_KEY"
$ export PORTKEY_VIRTUAL_KEY="YOUR_PORTKEY_VIRTUAL_KEY"

Or set them programmatically:

import os
import getpass

if "PORTKEY_API_KEY" not in os.environ:
    os.environ["PORTKEY_API_KEY"] = getpass.getpass("Enter your Portkey API key: ")

if "PORTKEY_VIRTUAL_KEY" not in os.environ:
    os.environ["PORTKEY_VIRTUAL_KEY"] = getpass.getpass("Enter your Portkey virtual key: ")

Logging LLM Calls

Since Portkey provides an OpenAI-compatible API, we can use the Opik OpenAI SDK wrapper to automatically log Portkey calls as generations in Opik.

Simple LLM Call

import os
from opik.integrations.openai import track_openai
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

# Create an OpenAI client that routes requests through the Portkey gateway.
# Replace "@OPENAI_PROVIDER" with your own Portkey provider slug / virtual key.
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key=os.environ["PORTKEY_API_KEY"],
        provider="@OPENAI_PROVIDER"
    )
)

# Wrap the client with Opik tracking
client = track_openai(client, project_name="portkey-integration-demo")

# Make a chat completion request
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a knowledgeable AI assistant."},
        {"role": "user", "content": "What is the largest city in France?"}
    ]
)

# Print the assistant's reply
print(response.choices[0].message.content)
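
Streaming requests made through the same tracked client should be captured as well, since the wrapper hooks into the standard OpenAI client. A minimal sketch, reusing the client from the example above:

# Stream the completion; the tracked client logs the call to Opik as usual
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a one-sentence summary of Paris."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")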

Advanced Usage

Using with the @track decorator

If you have multiple steps in your LLM pipeline, you can use the @track decorator to log traces for each step. If Portkey is called within one of these steps, the LLM call will be associated with the corresponding step:

import os
from opik import track
from opik.integrations.openai import track_openai
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

# Create an OpenAI client configured for Portkey
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key=os.environ["PORTKEY_API_KEY"],
        provider="@OPENAI_PROVIDER"
    )
)

# Wrap the client with Opik tracking
client = track_openai(client, project_name="portkey-integration-demo")

@track
def generate_response(prompt: str):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a knowledgeable AI assistant."},
            {"role": "user", "content": prompt}
        ]
    )
    return response.choices[0].message.content

@track
def refine_response(initial_response: str):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You enhance and polish text responses."},
            {"role": "user", "content": f"Please improve this response: {initial_response}"}
        ]
    )
    return response.choices[0].message.content

@track(project_name="portkey-integration-demo")
def generate_and_refine(prompt: str):
    # First LLM call: Generate initial response
    initial = generate_response(prompt)

    # Second LLM call: Refine the response
    refined = refine_response(initial)

    return refined

# Example usage
result = generate_and_refine("Explain quantum computing in simple terms.")

The resulting trace will show the nested LLM calls as hierarchical spans.

Further Improvements

If you have suggestions for improving the Portkey integration, please let us know by opening an issue on GitHub.