Using Opik with VertexAI

Opik integrates with VertexAI to provide a simple way to log traces for all VertexAI LLM calls. This works for all supported models.

Creating an account on Comet.com

Comet provides a hosted version of the Opik platform: simply create an account and grab your API key.

You can also run the Opik platform locally; see the installation guide for more information.
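If you are self-hosting, you can point the SDK at your local instance instead; a minimal sketch, using the same use_local flag that this guide sets to False below:

import opik

# Point the SDK at a locally running Opik instance; no Comet API key is needed in this mode.
opik.configure(use_local=True)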

%pip install --upgrade opik google-genai -q
import opik

opik.configure(use_local=False)
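If you are running in a non-interactive environment (CI, a scheduled job), you can skip the configuration prompt by setting environment variables instead; a sketch, assuming the standard OPIK_API_KEY and OPIK_WORKSPACE variables:

import os

# Assumed configuration variables read by the Opik SDK;
# OPIK_PROJECT_NAME (used later in this guide) can be set the same way.
os.environ["OPIK_API_KEY"] = "<your-api-key>"
os.environ["OPIK_WORKSPACE"] = "<your-workspace>"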

Preparing our environment

First, we will set up our Google GenAI client with VertexAI credentials.

from google import genai

PROJECT_ID = "[your-project-id]"
LOCATION = "us-central1"

if not PROJECT_ID or PROJECT_ID == "[your-project-id]":
    raise ValueError("Please set your PROJECT_ID")

client = genai.Client(vertexai=True, project=PROJECT_ID, location=LOCATION)
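The client authenticates with your Google Cloud credentials (for local development this is typically set up with gcloud auth application-default login). As an alternative to passing the project explicitly, here is a sketch assuming the environment variables the google-genai SDK reads at construction time:

import os

from google import genai

# Assumed variables: GOOGLE_GENAI_USE_VERTEXAI, GOOGLE_CLOUD_PROJECT, GOOGLE_CLOUD_LOCATION.
os.environ["GOOGLE_GENAI_USE_VERTEXAI"] = "true"
os.environ["GOOGLE_CLOUD_PROJECT"] = "your-project-id"
os.environ["GOOGLE_CLOUD_LOCATION"] = "us-central1"

client = genai.Client()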

Logging traces

To log traces to Opik, we wrap the GenAI client with the track_genai function:

import os
from opik.integrations.genai import track_genai

os.environ["OPIK_PROJECT_NAME"] = "vertexai-integration-demo"
vertexai_client = track_genai(client)

prompt = """
Write a short two sentence story about Opik.
"""

response = vertexai_client.models.generate_content(
    model="gemini-2.0-flash-001", contents=prompt
)
print(response.text)

The prompt and response messages are automatically logged to Opik and can be viewed in the UI.
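Token counts are handy when sanity-checking what was logged. A sketch of reading them off the same response object, assuming the usage_metadata fields exposed by the google-genai SDK:

# usage_metadata is assumed to follow the Vertex AI response schema.
usage = response.usage_metadata
print("Prompt tokens:", usage.prompt_token_count)
print("Completion tokens:", usage.candidates_token_count)
print("Total tokens:", usage.total_token_count)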

Using it with the track decorator

If you have multiple steps in your LLM pipeline, you can use the track decorator to log the traces for each step. If a Gemini model is called within one of these steps, the LLM call will be associated with the corresponding step:

from opik import track


@track
def generate_story(prompt):
    response = vertexai_client.models.generate_content(
        model="gemini-2.0-flash-001", contents=prompt
    )
    return response.text


@track
def generate_topic():
    prompt = "Generate a topic for a story about Opik."
    response = vertexai_client.models.generate_content(
        model="gemini-2.0-flash-001", contents=prompt
    )
    return response.text


@track
def generate_opik_story():
    topic = generate_topic()
    story = generate_story(topic)
    return story


generate_opik_story()

The resulting trace, with each step logged as its own span, can now be viewed in the UI.

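Traces are sent to Opik in the background, so a short-lived script can exit before everything has been delivered. A sketch of flushing pending data before exit, assuming the SDK's flush_tracker helper:

import opik

# Block until any buffered trace data has been sent (assumed helper: opik.flush_tracker).
opik.flush_tracker()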