Integrate with OpenAI

OpenAI is an AI research and deployment company. Their mission is to ensure that artificial general intelligence benefits all of humanity.

Instrument your runs with Comet to start managing experiments, log prompt iterations, and automatically track code and Git metadata for faster, easier reproducibility and collaboration.

Start logging

Add the following lines of code to your script or notebook:

import os

import comet_ml
import openai

experiment = comet_ml.Experiment(
    api_key="YOUR_API_KEY",
    project_name="YOUR_PROJECT_NAME",
    workspace="YOUR_WORKSPACE",
)

openai.api_key = os.getenv("OPENAI_API_KEY")
openai.Completion.create(
    model="text-davinci-003",
    prompt="Say this is a test",
    max_tokens=7,
    temperature=0,
)

Note

There are other ways to configure Comet. See more here.
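For example, one common alternative is to configure Comet through its standard environment variables instead of passing arguments in code; `Experiment()` can then be created with no arguments. A minimal sketch (the placeholder values are yours to fill in):

```shell
# Comet picks these up automatically when Experiment() is created
# with no arguments
export COMET_API_KEY="YOUR_API_KEY"
export COMET_PROJECT_NAME="YOUR_PROJECT_NAME"
export COMET_WORKSPACE="YOUR_WORKSPACE"
```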

Log automatically

After an Experiment has been created, Comet automatically logs the following items by default, with no additional configuration:

  • Each individual prompt and choice as Text, visible in the Text tab of the UI.
  • Every call to the Completions endpoint as a JSON asset.
  • The number of prompt, completion, and total tokens consumed as metrics.
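The token metrics come from the `usage` field of the Completions response. As an illustration of what gets recorded (the sample response dict below is abridged and hypothetical, not a live API call):

```python
# Abridged, illustrative shape of an openai.Completion.create response
sample_response = {
    "choices": [{"text": "This is a test.", "index": 0}],
    "usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12},
}

# The token metrics Comet logs correspond to these usage counts
usage = sample_response["usage"]
metrics = {
    "prompt_tokens": usage["prompt_tokens"],
    "completion_tokens": usage["completion_tokens"],
    "total_tokens": usage["total_tokens"],
}
print(metrics)
```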

Note

Don't see what you need to log here? We have your back. You can manually log any kind of data to Comet using the Experiment object. For example, use experiment.log_image to log images, or experiment.log_audio to log audio.
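As a sketch of manual logging, assuming you already have an `experiment` object, a small helper could record each question/answer pair as text (the `log_qa_pair` name is ours, purely for illustration; `experiment.log_text` is the Comet call it wraps):

```python
def log_qa_pair(experiment, question, answer):
    """Log a question/answer pair to Comet as text.

    `experiment` is a comet_ml.Experiment, or None to skip logging
    (handy for local testing without an API key).
    """
    record = {"question": question, "answer": answer}
    if experiment is not None:
        # The prompt appears in the Text tab; the answer rides along
        # as metadata
        experiment.log_text(question, metadata={"answer": answer})
    return record

# Local usage without an Experiment:
print(log_qa_pair(None, "What is Comet?", "An ML experiment tracker."))
```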

End-to-end example

The following is a basic example of using Comet with OpenAI.

If you can't wait, check out the results of this example OpenAI experiment for a preview of what's to come.

Install dependencies

pip install comet_ml openai

Set up your OpenAI API key

Get your OpenAI API key and set it as the environment variable OPENAI_API_KEY, or uncomment the line in the code block below.

Run the example

import os
import comet_ml

# os.environ["OPENAI_API_KEY"] = "..."

import openai

comet_ml.init(project_name="comet-example-openai")

experiment = comet_ml.Experiment()


def answer_question(
    question="Am I allowed to publish model outputs to Twitter, without a human review?",
    model="text-davinci-003",
    max_len=1800,
    max_tokens=150,
    stop_sequence=None,
):
    """
    Answer a question
    """
    try:
        # Create a completion from the question
        response = openai.Completion.create(
            prompt=f"Answer the question and if the question can't be answered, say \"I don't know\"\n\n---\n\nQuestion: {question}\nAnswer:",
            temperature=0,
            max_tokens=max_tokens,
            top_p=1,
            frequency_penalty=0,
            presence_penalty=0,
            stop=stop_sequence,
            model=model,
        )
        return response["choices"][0]["text"].strip()
    except Exception as e:
        print(e)
        return ""


print(answer_question("What is your name?"))
print(answer_question("What is OpenAI?"))
print(answer_question("What is CometML?"))

Try it out!

Don't just take our word for it, try it out for yourself.

May 24, 2023