Track Large Language Models

When building applications with Large Language Models, much of the work goes into prompt engineering rather than training models. This new workflow requires a different set of tools, which Comet is releasing under the umbrella of LLMOps.

Comet LLMOps tools fall into three categories:

  • Prompt Playground: Interact with Large Language Models from the Comet UI. All your prompts and responses will be automatically logged to Comet.
  • Prompt History: Track all your prompt / response pairs. You can also view prompt chains to identify where issues might be occurring.
  • Prompt Usage Tracking: Get a granular view of token usage across your prompts.
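
To illustrate the kind of bookkeeping that token usage tracking covers, the sketch below sums the `usage` fields that OpenAI-style responses return. The helper name and the response layout are illustrative assumptions, not part of Comet's API:

```python
# Sketch: aggregate token counts from OpenAI-style response `usage` blocks.
# `total_token_usage` and the response dicts below are illustrative assumptions.

def total_token_usage(responses):
    """Sum prompt/completion/total token counts across a list of responses."""
    totals = {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0}
    for response in responses:
        usage = response.get("usage", {})
        for key in totals:
            totals[key] += usage.get(key, 0)
    return totals

responses = [
    {"usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12}},
    {"usage": {"prompt_tokens": 3, "completion_tokens": 4, "total_tokens": 7}},
]
print(total_token_usage(responses))
# {'prompt_tokens': 8, 'completion_tokens': 11, 'total_tokens': 19}
```

The Prompt Usage Tracking panel surfaces this kind of aggregate automatically once responses are logged.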

Note

Comet's LLMOps suite is in Beta and under active development. If you are interested in getting access to the latest unreleased features, please reach out to support@comet.com

Prompt Management

Comet has integrations with both LangChain and the OpenAI SDK; when you use these tools, Comet automatically saves the prompts, responses, and chains.

Enabling the Comet integration takes only a few lines of code. With LangChain, attach the Comet callback handler to your LLM and agent:

import os
os.environ["COMET_API_KEY"] = "Your Comet API Key"
os.environ["OPENAI_API_KEY"] = "Your OpenAI API Key"
os.environ["SERPAPI_API_KEY"] = "Your SerpAPI API Key"

from langchain.agents import initialize_agent, load_tools
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.callbacks.base import CallbackManager
from langchain.llms import OpenAI

comet_callback = CometCallbackHandler(
    project_name="comet-example-langchain",
    complexity_metrics=True,
    stream_logs=True,
    tags=["agent"],
)
manager = CallbackManager([StdOutCallbackHandler(), comet_callback])
llm = OpenAI(temperature=0.9, callback_manager=manager, verbose=True)

tools = load_tools(["serpapi", "llm-math"], llm=llm, callback_manager=manager)
agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    callback_manager=manager,
    verbose=True,
)
agent.run(
    "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
)
comet_callback.flush_tracker(agent, finish=True)
With the OpenAI SDK, import comet_ml before openai and create an Experiment; completion calls are then logged automatically:

import os

import comet_ml
import openai

experiment = comet_ml.Experiment(
    api_key="YOUR_API_KEY",
    project_name="YOUR_PROJECT_NAME",
    workspace="YOUR_WORKSPACE",
)

openai.api_key = os.getenv("OPENAI_API_KEY")
openai.Completion.create(
    model="text-davinci-003",
    prompt="Say this is a test",
    max_tokens=7,
    temperature=0,
)
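
If you are not using one of the integrations, you can assemble prompt / response pairs yourself before logging them. The sketch below only builds the record; `make_prompt_record` is an illustrative helper, not part of comet_ml, and the commented-out logging call assumes an active Experiment:

```python
# Sketch: shape of a prompt/response record assembled by hand.
# `make_prompt_record` and the metadata keys are illustrative assumptions.

def make_prompt_record(prompt, response_text, model, usage):
    """Bundle a prompt, its response, and metadata into one record."""
    return {
        "prompt": prompt,
        "response": response_text,
        "metadata": {"model": model, **usage},
    }

record = make_prompt_record(
    "Say this is a test",
    "This is a test.",
    model="text-davinci-003",
    usage={"total_tokens": 12},
)

# With an active Experiment, the pieces could then be logged, e.g.:
# experiment.log_text(record["prompt"], metadata=record["metadata"])
print(record["metadata"])
```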

Once the data is logged to Comet, you can analyze the prompts and chains using the LLMOps - Prompt History panel. This panel can be found in the Featured section and can be added at both the project and the experiment level:

Adding the Prompt History panel

Example Project

A demo project with the Prompt History panel is available here: LangChain demo project.

May. 24, 2023