{"id":7426,"date":"2023-09-13T14:56:08","date_gmt":"2023-09-13T22:56:08","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=7426"},"modified":"2025-04-24T17:14:03","modified_gmt":"2025-04-24T17:14:03","slug":"organize-your-prompt-engineering-with-cometllm","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/organize-your-prompt-engineering-with-cometllm\/","title":{"rendered":"Organize Your Prompt Engineering with CometLLM"},"content":{"rendered":"\n<h2 class=\"wp-block-heading lp lq fo be lr ls lt lu lv lw lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm bj\" id=\"056d\">Introduction<\/h2>\n\n\n\n<p class=\"pw-post-body-paragraph mn mo fo be b mp mq mr ms mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj fh bj\" id=\"8601\"><a class=\"af nk\" href=\"https:\/\/www.comet.com\/site\/blog\/prompt-engineering\/\" target=\"_blank\" rel=\"noopener ugc nofollow\">Prompt Engineering<\/a>&nbsp;is arguably the most critical aspect in harnessing the power of Large Language Models (LLMs) like ChatGPT. Whether you\u2019re building tools for content creation, question answering, translation, or coding assistance, well designed prompts act as the roadmap for LLMs to generate accurate, ethical, and contextually relevant outputs. However; current prompt engineering workflows are incredibly tedious and cumbersome. Logging prompts and their outputs to .csv files and then visualizing them in Excel becomes inefficient and confusing. Developers are often experimenting with hundreds if not thousands of prompts for a given use-case and are in dire need of tools that scale well!<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mn mo fo be b mp nl mr ms mt nm mv mw mx nn mz na nb no nd ne nf np nh ni nj fh bj\" id=\"108a\">Comet is happy to announce a new solution that aims to make the entire prompt engineering process much easier. 
Use CometLLM to log and visualize all of your prompts and chains, and unleash the full potential of Large Language Models!<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mn mo fo be b mp nl mr ms mt nm mv mw mx nn mz na nb no nd ne nf np nh ni nj fh bj\" id=\"de3d\">P.S.: If you prefer to learn by code, check out our&nbsp;<a class=\"af nk\" href=\"https:\/\/colab.research.google.com\/github\/comet-ml\/comet-llm\/blob\/main\/examples\/CometLLM_Prompts.ipynb\" target=\"_blank\" rel=\"noopener ugc nofollow\">Colab notebook<\/a>&nbsp;where we show you how to log prompts from open-source HuggingFace models!<\/p>\n\n\n\n<figure class=\"wp-block-image nt nu nv nw nx ny nq nr paragraph-image\"><img decoding=\"async\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1400\/1*afRn-f3XjpH8zxXu4PRigA.png\" alt=\"Comet_LLM user interface for logging experiment prompts\"\/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading lp lq fo be lr ls lt lu lv lw lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm bj\" id=\"5b5e\">Get Started With CometLLM<\/h2>\n\n\n\n<p class=\"pw-post-body-paragraph mn mo fo be b mp mq mr ms mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj fh bj\" id=\"b2cb\">Integrating CometLLM into your prompt experimentation workflows is seamless!<\/p>\n\n\n\n<p class=\"pw-post-body-paragraph mn mo fo be b mp nl mr ms mt nm mv mw mx nn mz na nb no nd ne nf np nh ni nj fh bj\" id=\"a4fd\">First, install the package via pip:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"4dc1\" class=\"oj lq fo og b bf ok ol l om on\" data-selectable-paragraph=\"\">pip install comet_llm<\/span><\/pre>\n\n\n\n<p class=\"pw-post-body-paragraph mn mo fo be b mp nl mr ms mt nm mv mw mx nn mz na nb no nd ne nf np nh ni nj fh bj\" id=\"fcf8\">Then, head over to&nbsp;<a class=\"af nk\" href=\"\/signup\" target=\"_blank\" rel=\"noopener ugc nofollow\">Comet to create a free account<\/a>&nbsp;and get your API key! 
Once you have that, you\u2019re all set to start logging prompts and their outputs to Comet.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span id=\"6283\" class=\"oj lq fo og b bf ok ol l om on\" data-selectable-paragraph=\"\"><span class=\"hljs-keyword\">import<\/span> comet_llm\n\ncomet_llm.log_prompt(\n    prompt=<span class=\"hljs-string\">\"What is your name?\"<\/span>,\n    output=<span class=\"hljs-string\">\" My name is Alex.\"<\/span>,\n    api_key=<span class=\"hljs-string\">\"YOUR_COMET_API_KEY\"<\/span>,\n    project=<span class=\"hljs-string\">\"YOUR_LLM_PROJECT\"<\/span>,\n)<\/span><\/pre>\n\n\n\n<div class=\"fh fi fj fk fl\">\n<div class=\"ab ca\">\n<div class=\"ch bg et eu ev ew\">\n<h2 id=\"6935\" class=\"lp lq fo be lr ls lt lu lv lw lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm bj\">Add Token Usage to Prompt Metadata<\/h2>\n<p id=\"0fac\" class=\"pw-post-body-paragraph mn mo fo be b mp mq mr ms mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj fh bj\" data-selectable-paragraph=\"\">Prompt usage tokens refer to the number of tokens within a language model\u2019s input that are consumed by the prompts or instructions provided to the model. In the context of OpenAI\u2019s GPT models, the total number of tokens in an input affects cost, response time, and model availability. 
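As a quick local sketch of this idea (no Comet account needed to try it): the `response_usage` payload below mimics the `usage` field returned by OpenAI-style APIs, and the `usage_metadata` helper is our own illustration of how you might flatten it into the dotted metadata keys used in this post's examples.

```python
# Hypothetical usage payload, shaped like the "usage" field an OpenAI-style
# chat completion returns (prompt_tokens / completion_tokens / total_tokens).
response_usage = {"prompt_tokens": 7, "completion_tokens": 5, "total_tokens": 12}


def usage_metadata(usage):
    """Flatten a usage payload into dotted metadata keys for logging."""
    return {
        "usage.prompt_tokens": usage["prompt_tokens"],
        "usage.completion_tokens": usage["completion_tokens"],
        "usage.total_tokens": usage["total_tokens"],
    }


metadata = usage_metadata(response_usage)

# With an API key configured, the metadata would be attached like so:
# import comet_llm
# comet_llm.log_prompt(
#     prompt="What is your name?",
#     output=" My name is Alex.",
#     metadata=metadata,
# )
```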
Log your token usage as metadata on your prompts to make sure you are using the most cost-effective prompts for your use case!<\/p>\n<figure class=\"nt nu nv nw nx ny nq nr paragraph-image\">\n<div class=\"nz oa eb ob bg oc\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg od oe c\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1400\/1*AW8FSsRkHar9by1FX9epXw.gif\" alt=\"Examining prompt metadata in Comet_LLM user interface\" width=\"700\" height=\"416\"><\/figure>\n<\/div>\n<\/figure>\n<h2 id=\"6388\" class=\"lp lq fo be lr ls lt lu lv lw lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm bj\"><strong class=\"al\">Score Prompt Outputs<\/strong><\/h2>\n<p id=\"1d35\" class=\"pw-post-body-paragraph mn mo fo be b mp mq mr ms mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj fh bj\" data-selectable-paragraph=\"\">Prompt Feedback is crucial for improving the overall quality of Large Language Models. 
Use the Comet UI to give human feedback on a prompt, and Comet will automatically record your score!<\/p>\n<figure class=\"nt nu nv nw nx ny nq nr paragraph-image\">\n<div class=\"nz oa eb ob bg oc\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg od oe c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1400\/1*jABfDClVZnDLsjfdG25zLg.gif\" alt=\"\" width=\"700\" height=\"329\"><\/figure>\n<\/div>\n<\/figure>\n<h2 id=\"4a43\" class=\"lp lq fo be lr ls lt lu lv lw lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm bj\">Evaluate Prompt Templates<\/h2>\n<p id=\"4ce8\" class=\"pw-post-body-paragraph mn mo fo be b mp mq mr ms mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj fh bj\" data-selectable-paragraph=\"\">Prompt Templates are predefined structures or formats that guide users in providing input to a language model. Templates help improve the clarity, consistency, and overall quality of LLM responses. 
Users can log the prompt template, along with the prompt input and output, to decide which templates produce better responses.<\/p>\n<pre class=\"nt nu nv nw nx of og oh bo oi ba bj\"><span id=\"1f0d\" class=\"oj lq fo og b bf ok ol l om on\" data-selectable-paragraph=\"\"><span class=\"hljs-keyword\">import<\/span> comet_llm\n\ncomet_llm.log_prompt(\n    prompt=<span class=\"hljs-string\">\"Answer the question and if the question can't be answered, say \\\"I don't know\\\"\\n\\n---\\n\\nQuestion: What is your name?\\nAnswer:\"<\/span>,\n    prompt_template=<span class=\"hljs-string\">\"Answer the question and if the question can't be answered, say \\\"I don't know\\\"\\n\\n---\\n\\nQuestion: {{question}}?\\nAnswer:\"<\/span>,\n    prompt_template_variables={<span class=\"hljs-string\">\"question\"<\/span>: <span class=\"hljs-string\">\"What is your name?\"<\/span>},\n    metadata={\n        <span class=\"hljs-string\">\"usage.prompt_tokens\"<\/span>: <span class=\"hljs-number\">7<\/span>,\n        <span class=\"hljs-string\">\"usage.completion_tokens\"<\/span>: <span class=\"hljs-number\">5<\/span>,\n        <span class=\"hljs-string\">\"usage.total_tokens\"<\/span>: <span class=\"hljs-number\">12<\/span>,\n    },\n    output=<span class=\"hljs-string\">\" My name is Alex.\"<\/span>,\n    duration=<span class=\"hljs-number\">16.598<\/span>,\n)<\/span><\/pre>\n<h2 id=\"b66d\" class=\"lp lq fo be lr ls lt lu lv lw lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm bj\">Search for Specific Prompts<\/h2>\n<p id=\"1dce\" class=\"pw-post-body-paragraph mn mo fo be b mp mq mr ms mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj fh bj\" data-selectable-paragraph=\"\">Prompt Engineering is an iterative process. Developers are often trying hundreds, if not thousands, of prompts. 
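To sanity-check a template before logging it, you can render it locally. The `render` helper below is our own sketch, not part of comet_llm; it just fills `{{variable}}` placeholders the way `prompt_template` and `prompt_template_variables` fit together.

```python
def render(template: str, variables: dict) -> str:
    """Fill {{name}} placeholders in a prompt template with their values."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template


template = (
    "Answer the question and if the question can't be answered, "
    "say \"I don't know\"\n\n---\n\nQuestion: {{question}}?\nAnswer:"
)
variables = {"question": "What is your name"}
prompt = render(template, variables)
```

Logging the rendered `prompt` alongside the template keeps the two in sync in the Comet UI.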
CometLLM makes it easy to find specific prompts with our&nbsp;<a class=\"af nk\" href=\"https:\/\/www.comet.com\/docs\/v2\/guides\/large-language-models\/llm-project\/#searching-for-prompts\" target=\"_blank\" rel=\"noopener ugc nofollow\">search functionality<\/a>. Search for a keyword within a<strong class=\"be oq\">&nbsp;prompt input, output, or template<\/strong>&nbsp;and quickly find the relevant prompts!<\/p>\n<figure class=\"nt nu nv nw nx ny nq nr paragraph-image\">\n<div class=\"nz oa eb ob bg oc\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg od oe c\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1400\/1*ZStokIWRy5iAvOmHNYKl8Q.gif\" alt=\"Organizing and filtering prompts in the Comet_LLM user interface. Also, search for keywords in prompts.\" width=\"700\" height=\"408\"><\/figure>\n<\/div>\n<\/figure>\n<h2 id=\"528d\" class=\"lp lq fo be lr ls lt lu lv lw lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm bj\">Compare and Contrast LLMs<\/h2>\n<p id=\"3f29\" class=\"pw-post-body-paragraph mn mo fo be b mp mq mr ms mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj fh bj\" data-selectable-paragraph=\"\">There are many powerful LLMs out there (GPT-3.5, Llama 2, Falcon, Davinci). But which one is best for a particular use case? Tag your prompt responses with the LLM that generated them. 
Then use CometLLM\u2019s group-by functionality to easily group prompts by model and decide which one is generating the highest-quality responses!<\/p>\n<figure class=\"nt nu nv nw nx ny nq nr paragraph-image\">\n<div class=\"nz oa eb ob bg oc\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg od oe c\" role=\"presentation\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1400\/1*KnniUyPFUexKFz6MebJ84w.gif\" alt=\"\" width=\"700\" height=\"408\"><\/figure>\n<\/div>\n<\/figure>\n<h2 id=\"6aa5\" class=\"lp lq fo be lr ls lt lu lv lw lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm bj\">Visualize Chat History With Chains<\/h2>\n<p id=\"de2f\" class=\"pw-post-body-paragraph mn mo fo be b mp mq mr ms mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj fh bj\" data-selectable-paragraph=\"\">Prompt chains refer to a sequence of prompts that are used to guide a language model\u2019s response in a conversational or interactive setting. 
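As a rough local sketch of that idea (the `chat_prompt` helper and the canned reply are hypothetical, not part of comet_llm): each successive prompt in a chain carries the earlier turns along as context.

```python
def chat_prompt(history, user_message):
    """Build the next prompt from the conversation so far."""
    lines = []
    for user, assistant in history:
        lines.append(f"User: {user}")
        lines.append(f"Assistant: {assistant}")
    lines.append(f"User: {user_message}")
    lines.append("Assistant:")
    return "\n".join(lines)


# One completed turn, then a follow-up question that needs the prior context.
history = [("What is your name?", "My name is Alex.")]
prompt = chat_prompt(history, "Where do you live?")
```

In a real chain, each turn's prompt and output would be sent to an LLM and logged as one step of the chain.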
Instead of providing a single prompt, prompt chains involve sending multiple prompts in succession to maintain context and guide the model\u2019s behavior throughout a conversation.&nbsp;<a class=\"af nk\" href=\"https:\/\/www.comet.com\/docs\/v2\/guides\/large-language-models\/llm-project\/#logging-chains-to-llm-projects\" target=\"_blank\" rel=\"noopener ugc nofollow\">Log your interactions with chatbots like ChatGPT as chains<\/a>&nbsp;to quickly track and debug an LLM\u2019s train of thought.<\/p>\n<figure class=\"nt nu nv nw nx ny nq nr paragraph-image\">\n<div class=\"nz oa eb ob bg oc\" tabindex=\"0\" role=\"button\">\n<figure><img loading=\"lazy\" decoding=\"async\" class=\"bg od oe c\" src=\"https:\/\/miro.medium.com\/v2\/resize:fit:1400\/1*QYpwTCVvGLPnIxxmj2lWjg.gif\" alt=\"Comet_LLM user interface for prompt chains\" width=\"700\" height=\"375\"><\/figure>\n<\/div>\n<\/figure>\n<h2 id=\"9638\" class=\"lp lq fo be lr ls lt lu lv lw lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm bj\">Check Out the GitHub Repo<\/h2>\n<p id=\"71c4\" class=\"pw-post-body-paragraph mn mo fo be b mp mq mr ms mt mu mv mw mx my mz na nb nc nd ne nf ng nh ni nj fh bj\" data-selectable-paragraph=\"\">CometLLM\u2019s SDK is fully open source and can be found on&nbsp;<a class=\"af nk\" href=\"https:\/\/github.com\/comet-ml\/comet-llm\" target=\"_blank\" rel=\"noopener ugc nofollow\">GitHub<\/a>! 
Give the project a star and follow the repository for exciting product updates!<\/p>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Introduction Prompt Engineering&nbsp;is arguably the most critical aspect of harnessing the power of Large Language Models (LLMs) like ChatGPT. Whether you\u2019re building tools for content creation, question answering, translation, or coding assistance, well-designed prompts act as the roadmap for LLMs to generate accurate, ethical, and contextually relevant outputs. However, current prompt engineering workflows are [&hellip;]<\/p>\n","protected":false},"author":21,"featured_media":7513,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[8,65,9],"tags":[40,14,64,30,15,52,31,32,56,34],"coauthors":[134],"class_list":["post-7426","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-comet-community-hub","category-llmops","category-product","tag-comet","tag-comet-ml","tag-cometllm","tag-deep-learning","tag-deep-learning-experiment-management","tag-llm","tag-llmops","tag-nlp","tag-optimization","tag-prompt-engineering"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Organize Your Prompt Engineering with CometLLM - Comet<\/title>\n<meta name=\"description\" content=\"Streamline your Prompt Engineering Workflows with CometLLM! 
Log, Debug, and Rate your LLM Prompts and Chains.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/organize-your-prompt-engineering-with-cometllm\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Organize Your Prompt Engineering with CometLLM\" \/>\n<meta property=\"og:description\" content=\"Streamline your Prompt Engineering Workflows with CometLLM! Log, Debug, and Rate your LLM Prompts and Chains.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.comet.com\/site\/blog\/organize-your-prompt-engineering-with-cometllm\/\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" content=\"2023-09-13T22:56:08+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-04-24T17:14:03+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/09\/comet_llm_screenshot-1.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1346\" \/>\n\t<meta property=\"og:image:height\" content=\"850\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Siddharth Mehta\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Siddharth Mehta\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Organize Your Prompt Engineering with CometLLM - Comet","description":"Streamline your Prompt Engineering Workflows with CometLLM! Log, Debug, and Rate your LLM Prompts and Chains.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/organize-your-prompt-engineering-with-cometllm\/","og_locale":"en_US","og_type":"article","og_title":"Organize Your Prompt Engineering with CometLLM","og_description":"Streamline your Prompt Engineering Workflows with CometLLM! Log, Debug, and Rate your LLM Prompts and Chains.","og_url":"https:\/\/www.comet.com\/site\/blog\/organize-your-prompt-engineering-with-cometllm\/","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2023-09-13T22:56:08+00:00","article_modified_time":"2025-04-24T17:14:03+00:00","og_image":[{"width":1346,"height":850,"url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/09\/comet_llm_screenshot-1.png","type":"image\/png"}],"author":"Siddharth Mehta","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Siddharth Mehta","Est. 
reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/organize-your-prompt-engineering-with-cometllm\/#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/organize-your-prompt-engineering-with-cometllm\/"},"author":{"name":"Siddharth Mehta","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/652eb7d782d18f295922f50ea3b9e54c"},"headline":"Organize Your Prompt Engineering with CometLLM","datePublished":"2023-09-13T22:56:08+00:00","dateModified":"2025-04-24T17:14:03+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/organize-your-prompt-engineering-with-cometllm\/"},"wordCount":622,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/organize-your-prompt-engineering-with-cometllm\/#primaryimage"},"thumbnailUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/09\/comet_llm_screenshot-1.png","keywords":["Comet","Comet ML","CometLLM","Deep Learning","Deep Learning Experiment Management","LLM","LLMOps","NLP","Optimization","Prompt Engineering"],"articleSection":["Comet Community Hub","LLMOps","Product"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/organize-your-prompt-engineering-with-cometllm\/","url":"https:\/\/www.comet.com\/site\/blog\/organize-your-prompt-engineering-with-cometllm\/","name":"Organize Your Prompt Engineering with CometLLM - 
Comet","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/organize-your-prompt-engineering-with-cometllm\/#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/organize-your-prompt-engineering-with-cometllm\/#primaryimage"},"thumbnailUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/09\/comet_llm_screenshot-1.png","datePublished":"2023-09-13T22:56:08+00:00","dateModified":"2025-04-24T17:14:03+00:00","description":"Streamline your Prompt Engineering Workflows with CometLLM! Log, Debug, and Rate your LLM Prompts and Chains.","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/organize-your-prompt-engineering-with-cometllm\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/organize-your-prompt-engineering-with-cometllm\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/organize-your-prompt-engineering-with-cometllm\/#primaryimage","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/09\/comet_llm_screenshot-1.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/09\/comet_llm_screenshot-1.png","width":1346,"height":850,"caption":"CometLLM platform screenshot"},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/organize-your-prompt-engineering-with-cometllm\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"Organize Your Prompt Engineering with CometLLM"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models 
Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/652eb7d782d18f295922f50ea3b9e54c","name":"Siddharth Mehta","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/940c7280faea9e1b8b086c2ed7ec01db","url":"https:\/\/secure.gravatar.com\/avatar\/27a672e997fa7a66796e4be0503e0efeec6bd34daae185bb6de163227a5a0739?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/27a672e997fa7a66796e4be0503e0efeec6bd34daae185bb6de163227a5a0739?s=96&d=mm&r=g","caption":"Siddharth Mehta"},"description":"ML Growth Engineer @ Comet. 
Interested in Computer Vision, Robotics, and Reinforcement Learning","sameAs":["https:\/\/www.comet.com\/"],"url":"https:\/\/www.comet.com\/site\/blog\/author\/siddharthmcomet-com\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/7426","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/21"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=7426"}],"version-history":[{"count":1,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/7426\/revisions"}],"predecessor-version":[{"id":15540,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/7426\/revisions\/15540"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media\/7513"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=7426"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=7426"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=7426"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=7426"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}