
Integrate with OpenAI

OpenAI is an AI research and deployment company. Their mission is to ensure that artificial general intelligence benefits all of humanity.

Instrument your runs with Comet to start managing experiments, log prompt iterations, and automatically track code and Git metadata for faster, easier reproducibility and collaboration.


| Comet SDK | Minimum SDK version | Minimum openai version |
|-----------|---------------------|------------------------|
| LLM-SDK   | 1.4.1               | 0.27.0                 |

Start logging

Add the following lines of code to your script or notebook:

import comet_llm

from openai import OpenAI

comet_llm.init(
    # api_key="YOUR_API_KEY",
    project="openai-example",
)

client = OpenAI()
response = client.chat.completions.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": "user", "content": "Where was it played?"}
  ]
)

print(response.choices[0].message.content)

Log automatically

The Comet OpenAI integration automatically tracks every OpenAI chat completion generated in your scripts. Each completion is logged to an LLM project. For each call to client.chat.completions.create, the Comet OpenAI integration logs the following items by default, without any additional configuration:

  • messages and function_call as inputs
  • choices as outputs
  • usage tokens as metadata
  • everything else as metadata
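As a rough illustration of this mapping (plain dictionaries standing in for the SDK's request and response objects, not the integration's actual code), a single completion breaks down like this:

```python
# Illustration only: plain dicts standing in for an OpenAI request/response
# pair, showing how the fields map onto the logged categories above.
request = {
    "model": "gpt-3.5-turbo",
    "temperature": 0,
    "messages": [{"role": "user", "content": "Where was it played?"}],
}
completion = {
    "choices": [{"message": {"role": "assistant", "content": "Arlington, Texas."}}],
    "usage": {"prompt_tokens": 57, "completion_tokens": 6, "total_tokens": 63},
}

inputs = {"messages": request["messages"]}      # logged as inputs
outputs = {"choices": completion["choices"]}    # logged as outputs
metadata = {                                    # logged as metadata
    "usage": completion["usage"],
    "model": request["model"],
    "temperature": request["temperature"],
}
```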

If you have created an LLM chain using comet_llm.start_chain, the completions will be added to the current chain. Otherwise, each completion will be logged individually.

Note

The OpenAI integration does not support streaming mode at the moment; no outputs will be logged for streaming calls.
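To see why, here is a minimal sketch (simulated chunks, no network call): with stream=True the SDK yields incremental deltas rather than a single response object, so there is no complete choices payload for the integration to capture.

```python
# Simulated stream chunks (stand-ins for the objects yielded when
# stream=True): each chunk carries only an incremental delta.
chunks = [
    {"choices": [{"delta": {"content": "The game was"}}]},
    {"choices": [{"delta": {"content": " played in Texas."}}]},
    {"choices": [{"delta": {}}]},  # final chunk: empty delta
]

# The caller must assemble the text; no single object holds the full output.
text = "".join(c["choices"][0]["delta"].get("content", "") for c in chunks)
print(text)
```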

End-to-end example

The following is a basic example of using Comet with OpenAI.

If you can't wait, check out the results of this example OpenAI project for a preview of what's to come.

Install dependencies

pip install "comet_llm>=1.4.2" "openai>=1.0.0"

Set up your OpenAI API Key

Get your OpenAI API Key and set it as the environment variable OPENAI_API_KEY or uncomment the line in the code block below.
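For example, from a shell (the key value below is a placeholder; substitute your own):

```shell
# Set the key for the current shell session (placeholder value shown).
export OPENAI_API_KEY="sk-your-key-here"
```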

Run the example

import os
import comet_llm

# os.environ["OPENAI_API_KEY"] = "..."

from openai import OpenAI

client = OpenAI()

comet_llm.init(project="comet-example-openai")


def answer_question(
    question,
    model="gpt-3.5-turbo",
    max_tokens=150,
    stop_sequence=None,
):
    """
    Answer a question
    """
    # Create a chat completion using the question and system instructions
    messages = [
        {
            "role": "system",
            "content": "Answer the question and if the question can't be answered, say \"I don't know\"",
        },
        {"role": "user", "content": question},
    ]

    response = client.chat.completions.create(
        messages=messages,
        temperature=0,
        max_tokens=max_tokens,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
        stop=stop_sequence,
        model=model,
    )
    return response.choices[0].message.content.strip()


print(answer_question("What is your name?"))
print(answer_question("What is OpenAI?"))
print(answer_question("What is CometML?"))
print(answer_question("What is the airspeed velocity of an unladen swallow?"))

Try it out!

Don't just take our word for it, try it out for yourself.

Feb. 24, 2024