LLMOps Tools
Comet's LLMOps tools fall into three categories:
- Prompt Playground: Interact with Large Language Models from the Comet UI. All your prompts and responses will be automatically logged to Comet.
- Prompt History: Track all your prompt / response pairs. You can also view prompt chains to identify where issues might be occurring.
- Prompt Usage Tracking: Get a granular view of your token usage.
Note
Comet's LLMOps suite is under active development. If you are interested in getting access to the latest unreleased features, please reach out to support@comet.com.
Adding these tools to your projects¶
Comet's LLMOps tools are all available as panels, which can be added to your Comet projects or to any experiment. You can find all these panels by searching for llmops in the Featured Panels tab when adding a panel.
Prompt Playground¶
Comet's Prompt Playground allows you to query OpenAI's Large Language Models straight from your dashboard. All the prompts and responses will be automatically recorded so that you can review them at a later date using the Prompt History panel.
To add the Prompt Playground panel to one of your dashboards, simply navigate to the featured panel list and choose Prompt Playground.
The Prompt Playground has been designed to let you define many different prompt templates and quickly evaluate them against a set of contexts. If you add the variable {{ context }} to a prompt in the column text fields, it will be replaced dynamically by the context defined on each row. When you click Run, the prompts are sent to the selected OpenAI model and you can view the response for each prompt template / context combination within the panel.
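The substitution behavior can be sketched in plain Python. The `{{ context }}` placeholder name comes from the panel; the `render` helper and the sample templates below are purely illustrative, not part of any Comet API:

```python
# Illustrative sketch of how a playground-style grid could fill in
# {{ context }}: each prompt template is evaluated once per context row.

def render(template: str, context: str) -> str:
    """Replace the {{ context }} placeholder with a row's context."""
    return template.replace("{{ context }}", context)

templates = ["Summarize: {{ context }}", "List key terms in: {{ context }}"]
contexts = ["LLMOps tooling overview", "Token usage report"]

# One rendered prompt per prompt template / context combination,
# matching the grid of results shown in the panel.
prompts = [render(t, c) for t in templates for c in contexts]
print(prompts[0])  # "Summarize: LLMOps tooling overview"
```

Each rendered prompt is then sent to the selected model, and the response is shown in the corresponding cell of the panel.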
Prompt History¶
Prompt Engineering relies on the manual review of many different prompt / completion combinations. The Prompt History panel can be used to visualize these, as well as the chains that led to a given final response.
The Prompt History panel currently supports our LangChain and OpenAI integrations as well as the Prompt Playground panel.
Prompt Usage Tracking¶
Some of the best Large Language Models can only be accessed through paid APIs that are priced by token. When experimenting with many different use-cases, the costs associated with these experiments can become complicated to track. With the Prompt Usage Tracking Panel, you will be able to track daily token usage by project as well as view token usage by experiment run.
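As a rough illustration of how per-token pricing adds up across runs, here is a minimal sketch. The rates in `PRICE_PER_1K_TOKENS` are hypothetical placeholders, not OpenAI's actual prices, and the aggregation is a simplified stand-in for what the panel computes:

```python
# Illustrative cost estimate for token-priced LLM APIs.
# Prices below are hypothetical placeholders, not real rates.
PRICE_PER_1K_TOKENS = {"prompt": 0.0015, "completion": 0.002}

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the cost of a single API call in dollars."""
    return (prompt_tokens / 1000 * PRICE_PER_1K_TOKENS["prompt"]
            + completion_tokens / 1000 * PRICE_PER_1K_TOKENS["completion"])

# Token counts can be summed across experiment runs the same way
# daily usage is aggregated per project.
runs = [(1200, 300), (800, 150)]  # (prompt_tokens, completion_tokens)
total = sum(estimate_cost(p, c) for p, c in runs)
```

Tracking token counts per run makes it straightforward to attribute costs to individual experiments rather than only seeing an aggregate bill.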