Observability for Groq with Opik

Groq provides fast AI inference for open large language models.

Account Setup

Comet provides a hosted version of the Opik platform; simply create an account and grab your API key.

You can also run the Opik platform locally; see the installation guide for more information.

Getting Started

Installation

To start tracking your Groq LLM calls, you can use our LiteLLM integration. You’ll need to have both the opik and litellm packages installed. You can install them using pip:

$ pip install opik litellm

Configuring Opik

Configure the Opik Python SDK for your deployment type (a short programmatic example follows the list below). See the Python SDK Configuration guide for detailed instructions on:

  • CLI configuration: opik configure
  • Code configuration: opik.configure()
  • Self-hosted vs Cloud vs Enterprise setup
  • Configuration files and environment variables
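
For reference, here is a minimal sketch of a programmatic cloud setup. The workspace name is a placeholder you would replace with your own, and the exact parameters to use for your deployment are described in the Python SDK Configuration guide:

import os
import getpass

import opik

# Minimal sketch for the Comet-hosted (cloud) setup.
# "your-workspace-name" is a placeholder; replace it with your Comet workspace.
if "OPIK_API_KEY" not in os.environ:
    os.environ["OPIK_API_KEY"] = getpass.getpass("Enter your Opik API key: ")

opik.configure(
    api_key=os.environ["OPIK_API_KEY"],
    workspace="your-workspace-name",
)

# For a self-hosted deployment you would instead call:
# opik.configure(use_local=True)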

If you’re unable to use our LiteLLM integration with Groq, please open an issue.

Configuring Groq

To configure Groq, you will need your Groq API key. You can create and manage your Groq API keys from the Groq Console.

You can set it as an environment variable:

$ export GROQ_API_KEY="YOUR_API_KEY"

Or set it programmatically:

import os
import getpass

if "GROQ_API_KEY" not in os.environ:
    os.environ["GROQ_API_KEY"] = getpass.getpass("Enter your Groq API key: ")

Logging LLM calls

To log LLM calls to Opik, create the OpikLogger callback and add it to LiteLLM's callbacks. Once the callback is registered, you can call LiteLLM as you normally would:

from litellm.integrations.opik.opik import OpikLogger
import litellm
import os

os.environ["OPIK_PROJECT_NAME"] = "groq-integration-demo"

opik_logger = OpikLogger()
litellm.callbacks = [opik_logger]

prompt = """
Write a short two sentence story about Opik.
"""

response = litellm.completion(
    model="groq/llama3-8b-8192",
    messages=[{"role": "user", "content": prompt}]
)

print(response.choices[0].message.content)

Advanced Usage

Using with the @track decorator

If you are using LiteLLM within a function tracked with the @track decorator, you will need to pass the current_span_data as metadata to the litellm.completion call so that the LLM call is logged as a child span of the current trace:

from opik import track
from opik.opik_context import get_current_span_data
import litellm

@track
def generate_story(prompt):
    response = litellm.completion(
        model="groq/llama3-8b-8192",
        messages=[{"role": "user", "content": prompt}],
        metadata={
            "opik": {
                "current_span_data": get_current_span_data(),
            },
        },
    )
    return response.choices[0].message.content

@track
def generate_topic():
    prompt = "Generate a topic for a story about Opik."
    response = litellm.completion(
        model="groq/llama3-8b-8192",
        messages=[{"role": "user", "content": prompt}],
        metadata={
            "opik": {
                "current_span_data": get_current_span_data(),
            },
        },
    )
    return response.choices[0].message.content

@track
def generate_opik_story():
    topic = generate_topic()
    story = generate_story(topic)
    return story

# Execute the multi-step pipeline
generate_opik_story()