Observability for TrueFoundry with Opik

TrueFoundry is an enterprise MLOps platform that provides a unified interface for deploying and managing ML models, including LLMs. Beyond deployment, it offers monitoring, A/B testing, and cost optimization.

Gateway Overview

TrueFoundry provides enterprise-grade features for managing ML and LLM deployments, including:

  • Unified OpenAI-Compatible Endpoint: Routes through TrueFoundry to any supported model (OpenAI, Anthropic, self-hosted, etc.)
  • End-to-End Tracing: Full request/response logs with system messages, token breakdowns (prompt, completion, total), latency per call, and cost analytics per model and environment
  • Production-Grade Controls: Rate limiting, quotas by user/team, budget alerts and spend caps, scoped API keys with RBAC
  • Data Sovereignty: VPC and on-premises deployment options for compliance and data privacy
  • Multi-Cloud Support: Deploy across AWS, Azure, GCP, and on-premises infrastructure

Account Setup

Comet provides a hosted version of the Opik platform. Simply create an account and grab your API Key.

You can also run the Opik platform locally; see the installation guide for more information.

Getting Started

Installation

First, ensure you have both the opik and openai packages installed:

$ pip install opik openai

Configuring Opik

Configure the Opik Python SDK for your deployment type. See the Python SDK Configuration guide for detailed instructions on:

  • CLI configuration: opik configure
  • Code configuration: opik.configure() (see the example below)
  • Self-hosted vs Cloud vs Enterprise setup
  • Configuration files and environment variables
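
For the code-based option, here is a minimal sketch assuming an Opik Cloud (Comet-hosted) setup; the workspace name is a placeholder, and self-hosted deployments should follow the configuration guide instead:

import os
import getpass

import opik

# Prompt for the Opik API key if it is not already set in the environment
if "OPIK_API_KEY" not in os.environ:
    os.environ["OPIK_API_KEY"] = getpass.getpass("Enter your Opik API key: ")

# Configure the SDK in code; "your-workspace-name" is a placeholder
opik.configure(
    api_key=os.environ["OPIK_API_KEY"],
    workspace="your-workspace-name",
)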

Configuring TrueFoundry

You’ll need your TrueFoundry API endpoint and credentials. You can get these from your TrueFoundry dashboard.

Set your configuration as environment variables:

$ export TRUEFOUNDRY_API_KEY="YOUR_TRUEFOUNDRY_API_KEY"
$ export TRUEFOUNDRY_BASE_URL="YOUR_TRUEFOUNDRY_BASE_URL"

Or set them programmatically:

import os
import getpass

if "TRUEFOUNDRY_API_KEY" not in os.environ:
    os.environ["TRUEFOUNDRY_API_KEY"] = getpass.getpass("Enter your TrueFoundry API key: ")

if "TRUEFOUNDRY_BASE_URL" not in os.environ:
    os.environ["TRUEFOUNDRY_BASE_URL"] = input("Enter your TrueFoundry base URL: ")
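
As an optional sanity check, and assuming your TrueFoundry gateway exposes the standard OpenAI-compatible models endpoint, you can list the models your key can reach before wiring up tracing:

import os
from openai import OpenAI

# Plain OpenAI client pointed at the TrueFoundry gateway (no Opik wrapping yet)
client = OpenAI(
    api_key=os.environ["TRUEFOUNDRY_API_KEY"],
    base_url=os.environ["TRUEFOUNDRY_BASE_URL"]
)

# Prints the model IDs available to this key; raises an API error if the
# gateway does not support the models endpoint
for model in client.models.list():
    print(model.id)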

Logging LLM Calls

Since TrueFoundry provides an OpenAI-compatible API for LLM deployments, we can use the Opik OpenAI SDK wrapper to automatically log TrueFoundry calls as generations in Opik.

Simple LLM Call

import os
from opik.integrations.openai import track_openai
from openai import OpenAI

# Create an OpenAI client with TrueFoundry's base URL
client = OpenAI(
    api_key=os.environ["TRUEFOUNDRY_API_KEY"],
    base_url=os.environ["TRUEFOUNDRY_BASE_URL"]
)

# Wrap the client with Opik tracking
client = track_openai(client, project_name="truefoundry-integration-demo")

# Make a chat completion request
response = client.chat.completions.create(
    model="your-deployed-model-name",
    messages=[
        {"role": "system", "content": "You are a knowledgeable AI assistant."},
        {"role": "user", "content": "What is the largest city in France?"}
    ]
)

# Print the assistant's reply
print(response.choices[0].message.content)
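
Streaming requests go through the same wrapped client. The sketch below reuses the client from the example above and assumes your deployed model supports streaming; Opik's OpenAI integration is expected to capture the streamed completion as well:

# Streaming variant: reuses the client wrapped with track_openai above
stream = client.chat.completions.create(
    model="your-deployed-model-name",
    messages=[
        {"role": "system", "content": "You are a knowledgeable AI assistant."},
        {"role": "user", "content": "Give me three facts about Paris."}
    ],
    stream=True
)

# Print tokens as they arrive
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()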

Advanced Usage

Using with the @track decorator

If you have multiple steps in your LLM pipeline, you can use the @track decorator to log the traces for each step. If TrueFoundry is called within one of these steps, the LLM call will be associated with the corresponding step:

import os
from opik import track
from opik.integrations.openai import track_openai
from openai import OpenAI

# Create and wrap the OpenAI client with TrueFoundry's base URL
client = OpenAI(
    api_key=os.environ["TRUEFOUNDRY_API_KEY"],
    base_url=os.environ["TRUEFOUNDRY_BASE_URL"]
)
client = track_openai(client)

@track
def generate_response(prompt: str):
    response = client.chat.completions.create(
        model="your-deployed-model-name",
        messages=[
            {"role": "system", "content": "You are a knowledgeable AI assistant."},
            {"role": "user", "content": prompt}
        ]
    )
    return response.choices[0].message.content

@track
def refine_response(initial_response: str):
    response = client.chat.completions.create(
        model="your-deployed-model-name",
        messages=[
            {"role": "system", "content": "You enhance and polish text responses."},
            {"role": "user", "content": f"Please improve this response: {initial_response}"}
        ]
    )
    return response.choices[0].message.content

@track(project_name="truefoundry-integration-demo")
def generate_and_refine(prompt: str):
    # First LLM call: Generate initial response
    initial = generate_response(prompt)

    # Second LLM call: Refine the response
    refined = refine_response(initial)

    return refined

# Example usage
result = generate_and_refine("Explain quantum computing in simple terms.")

The trace will show nested LLM calls with hierarchical spans.
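
You can also enrich the current trace from inside any @track-decorated function, for example to record which TrueFoundry environment served the request. This sketch uses Opik's opik_context helpers and reuses the wrapped client from above; the tag and metadata values are illustrative placeholders:

from opik import track, opik_context

@track(project_name="truefoundry-integration-demo")
def answer_question(prompt: str):
    # Attach illustrative tags and metadata to the current trace
    opik_context.update_current_trace(
        tags=["truefoundry"],
        metadata={"environment": "staging"},
    )
    response = client.chat.completions.create(
        model="your-deployed-model-name",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content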

Further Improvements

If you have suggestions for improving the TrueFoundry integration, please let us know by opening an issue on GitHub.