{"id":8112,"date":"2023-11-03T11:16:50","date_gmt":"2023-11-03T19:16:50","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=8112"},"modified":"2025-04-24T17:04:37","modified_gmt":"2025-04-24T17:04:37","slug":"tracking-langchain-projects-with-comet","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/tracking-langchain-projects-with-comet\/","title":{"rendered":"Tracking LangChain Projects with\u00a0Comet"},"content":{"rendered":"\n<section class=\"section section--body\">\n<div class=\"section-content\">\n<div class=\"section-inner sectionLayout--insetColumn\">\n<figure class=\"graf graf--figure\">\n<\/figure><\/div><\/div><\/section>\n\n\n\n<figure class=\"wp-block-image alignnone graf-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*qTFhuOxxBrjAC7d4jYZZaA.png\" alt=\"\"\/><figcaption class=\"wp-element-caption\">Image by author<\/figcaption><\/figure>\n\n\n\n<p class=\"graf graf--p\">The advent of Large Language Models (LLMs) has changed the Artificial Intelligence (AI) landscape. These powerful Natural Language Processing (NLP) models bring conversational AI to mainstream applications as business leaders move to integrate their products with chat-based capabilities.<\/p>\n\n\n\n<p class=\"graf graf--p\">During this language revolution, LangChain has been the pioneer in constructing production-grade LLM-based applications. It is an open-source framework that hosts Application Programming Interfaces (APIs) for several LLM and Chat models.<\/p>\n\n\n\n<p class=\"graf graf--p\">The models can be used out of the box or further fine-tuned for downstream tasks. Any LLM-related development requires proper experiment tracking and metric logging. 
These logs help track the model inputs and outputs and the results achieved with each iteration.<\/p>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\">What is LangChain?<\/h3>\n\n\n\n<p class=\"graf graf--p\">LangChain is presently the most popular library for application development using language models. The framework is available in widely used languages, including Python and JavaScript, and provides access to popular models like <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/platform.openai.com\/docs\/models\/gpt-3-5\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/platform.openai.com\/docs\/models\/gpt-3-5\">OpenAI\u2019s GPT-3.5<\/a>.<\/p>\n\n\n\n<p class=\"graf graf--p\">Besides GPT, LangChain also boasts support for various <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/python.langchain.com\/docs\/integrations\/providers\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/python.langchain.com\/docs\/integrations\/providers\">open-source models<\/a> from providers like HuggingFace and OpenLLM. Moreover, LangChain provides developers with various functionalities to construct prompts and to use and fine-tune pre-trained LLMs.<\/p>\n\n\n\n<p class=\"graf graf--p\">Some key features include:<\/p>\n\n\n\n<ul class=\"wp-block-list postList\">\n<li><strong class=\"markup--strong markup--li-strong\">Creating Prompt Templates<\/strong>: Prompt Templates structure the inputs and outputs of an LLM, helping the model make better predictions.<\/li>\n\n\n\n<li><strong class=\"markup--strong markup--li-strong\">Module Chaining: <\/strong>The chaining feature allows users to stack up various modules to enhance the model\u2019s performance further. 
The output of one model becomes the input of the subsequent one.<\/li>\n\n\n\n<li><strong class=\"markup--strong markup--li-strong\">Agents: <\/strong>Agents let an LLM work through complex, multi-step tasks to reach specific results.<\/li>\n<\/ul>\n\n\n\n<p class=\"graf graf--p\">Every LLM project revolves around a few key components depending on the use case. Let\u2019s discuss these in detail.<\/p>\n\n\n\n<section class=\"section section--body\">\n<div class=\"section-divider\">\n<hr class=\"section-divider\">\n<\/div>\n<div class=\"section-content\">\n<div class=\"section-inner sectionLayout--insetColumn\">\n<blockquote class=\"graf graf--pullquote\"><p>Want to learn how to build modern software with LLMs using the newest tools and techniques in the field? <a class=\"markup--anchor markup--pullquote-anchor\" href=\"https:\/\/www.comet.com\/production\/site\/llm-course\/?utm_source=Heartbeat&amp;utm_medium=referral&amp;utm_content=Medium&amp;utm_campaign=Heartbeat_LangChain_Series_HS\" target=\"_blank\" rel=\"noopener ugc nofollow\" data-href=\"https:\/\/www.comet.com\/production\/site\/llm-course\/?utm_source=Heartbeat&amp;utm_medium=referral&amp;utm_content=Medium&amp;utm_campaign=Heartbeat_LangChain_Series_HS\">Check out this free LLMOps course<\/a> from industry expert Elvis Saravia of&nbsp;DAIR.AI.<\/p><\/blockquote>\n<\/div>\n<\/div>\n<\/section>\n\n\n\n<section class=\"section section--body\">\n<div class=\"section-divider\">\n<hr class=\"section-divider\">\n<\/div>\n<div class=\"section-content\">\n<div class=\"section-inner sectionLayout--insetColumn\">\n<h3 class=\"graf graf--h3\">Components of an LLM-based Project<\/h3>\n<p class=\"graf graf--p\">An LLM project may use a pre-trained model for generic use cases or fine-tune certain models for more specific tasks. In either case, the project\u2019s overall structure mostly remains the same. 
Let\u2019s discuss a few components common to all LLM projects.<\/p>\n<h4 class=\"graf graf--h4\">Foundation Model<\/h4>\n<p class=\"graf graf--p\">A foundation model is a base model with a highly complex architecture, trained on massive text corpora covering various topics. Some popular foundation models include <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/openai.com\/research\/gpt-4\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/openai.com\/research\/gpt-4\">GPT4<\/a>, <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/bard.google.com\/\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/bard.google.com\/\">BARD<\/a>, and <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/ai.meta.com\/blog\/large-language-model-llama-meta-ai\/\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/ai.meta.com\/blog\/large-language-model-llama-meta-ai\/\">Llama<\/a>. Foundation models can be used as is or further fine-tuned for specific use cases.<\/p>\n<h4 class=\"graf graf--h4\">Training Corpus<\/h4>\n<p class=\"graf graf--p\">The training dataset is a collection of text documents used to fine-tune the foundation model. The corpus can be related to a single topic or cover various subjects depending on the LLM requirements.<\/p>\n<h4 class=\"graf graf--h4\">Prompts<\/h4>\n<p class=\"graf graf--p\">A prompt is the text a user submits to the LLM. It may be a statement, a question, or a simple salutation such as \u201cHello.\u201d The LLM tokenizes and processes the prompt to output a relevant response.<\/p>\n<h4 class=\"graf graf--h4\">Template<\/h4>\n<p class=\"graf graf--p\">Prompt templates provide a generic structure for input prompts. They define the type of input an LLM should expect and help the model produce a response in line with the template. Templates also make prompts reproducible. 
An example would be:<\/p>\n<p class=\"graf graf--p graf--startsWithDoubleQuote\">\u201cTell me a story about {topic}\u201d<\/p>\n<p class=\"graf graf--p\">This template will remain consistent for all prompts, and users only need to enter a different <em class=\"markup--em markup--p-em\">topic<\/em> to get desirable results.<\/p>\n<h4 class=\"graf graf--h4\">Chaining<\/h4>\n<p class=\"graf graf--p\">Chaining is LangChain\u2019s signature feature: it lets users link various modules together to reach better results. Developers can chain various models, such as a grammar-correction model in conjunction with a chat model, to ensure accurate interpretation and results. The output of each module in the chain becomes the input of the subsequent one until the chain ends.<\/p>\n<p class=\"graf graf--p\">Chains can also include prompt templates to structure the input before it is processed by an LLM. The modules within the chain are arranged to allow the overall model to output the most favorable results.<\/p>\n<h3 class=\"graf graf--h3\">Building a Model With LangChain and&nbsp;Comet<\/h3>\n<p class=\"graf graf--p\">Comet provides easy integration with the LangChain framework using callback handlers. The Comet documentation on <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/www.comet.com\/docs\/v2\/integrations\/third-party-tools\/langchain\/\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/www.comet.com\/docs\/v2\/integrations\/third-party-tools\/langchain\/\">third-party integration<\/a> provides a step-by-step guide on logging LangChain projects.<\/p>\n<p class=\"graf graf--p\">Let\u2019s see how we can set up our project. 
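To make the concepts above concrete before the full setup, here is a plain-Python sketch of a prompt template feeding a two-step chain. The names `fill_template`, `toy_llm`, and `chain` are hypothetical stand-ins for illustration, not LangChain APIs:

```python
# Plain-Python sketch of a template + chain: the template's output
# becomes the input of the next module, as in LangChain chaining.
TEMPLATE = "Tell me a story about {topic}"

def fill_template(topic: str) -> str:
    # Reproducible prompt: only the {topic} slot changes.
    return TEMPLATE.format(topic=topic)

def toy_llm(prompt: str) -> str:
    # Stand-in for a real model call; just echoes the prompt.
    return f"[model response to: {prompt}]"

def chain(topic: str) -> str:
    # Each module's output feeds the subsequent one.
    return toy_llm(fill_template(topic))

print(chain("space travel"))
```

In real LangChain code this data flow is what the `prompt | llm` composition used later in this walkthrough expresses.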
You can find the working code in our <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/colab.research.google.com\/drive\/1i2qQlIOYKZLvn-m2cRlvqPR6lMe4z2st?authuser=3#scrollTo=fh6uU9Fj-xyC\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/colab.research.google.com\/drive\/1i2qQlIOYKZLvn-m2cRlvqPR6lMe4z2st?authuser=3#scrollTo=fh6uU9Fj-xyC\">Google Colab Notebook<\/a>.<\/p>\n<h4 class=\"graf graf--h4\">Preliminary Setup<\/h4>\n<p class=\"graf graf--p\">Before moving forward, you must create free accounts on <a class=\"markup--anchor markup--p-anchor\" href=\"\/signup\" target=\"_blank\" rel=\"noopener\" data-href=\"\/signup\">Comet<\/a> and <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/huggingface.co\/join\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/huggingface.co\/join\">HuggingFace<\/a> and then install the relevant packages. These accounts will provide you with the API access keys to set up the required environment for the project.<\/p>\n<h4 class=\"graf graf--h4\">Project Setup<\/h4>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"1\" data-code-block-lang=\"yaml\"><span class=\"pre--content\"><span class=\"hljs-comment\"># Install required libraries<\/span>\n<span class=\"hljs-type\">!pip<\/span> <span class=\"hljs-string\">install<\/span> <span class=\"hljs-string\">langchain<\/span> <span class=\"hljs-string\">openai<\/span> <span class=\"hljs-string\">bitsandbytes<\/span> <span class=\"hljs-string\">accelerate<\/span> <span class=\"hljs-string\">transformers<\/span> <span class=\"hljs-string\">comet_ml<\/span> <span class=\"hljs-string\">comet-llm<\/span> <span class=\"hljs-string\">textstat<\/span><\/span><\/pre>\n<p class=\"graf graf--p\">Next, we import all relevant libraries.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"1\" data-code-block-lang=\"python\"><span class=\"pre--content\"><span class=\"hljs-comment\"># import 
required libraries<\/span>\n<span class=\"hljs-keyword\">import<\/span> comet_ml\n<span class=\"hljs-keyword\">from<\/span> langchain.llms <span class=\"hljs-keyword\">import<\/span> HuggingFacePipeline\n<span class=\"hljs-keyword\">from<\/span> langchain <span class=\"hljs-keyword\">import<\/span> PromptTemplate, HuggingFaceHub, LLMChain\n<span class=\"hljs-keyword\">from<\/span> langchain.prompts <span class=\"hljs-keyword\">import<\/span> PromptTemplate\n<span class=\"hljs-keyword\">from<\/span> transformers <span class=\"hljs-keyword\">import<\/span> AutoTokenizer, AutoModelForCausalLM, pipeline, AutoModelForSeq2SeqLM\n<span class=\"hljs-keyword\">from<\/span> langchain.callbacks <span class=\"hljs-keyword\">import<\/span> CometCallbackHandler\n<span class=\"hljs-keyword\">import<\/span> torch\n<span class=\"hljs-keyword\">import<\/span> os<\/span><\/pre>\n<p class=\"graf graf--p\">Once all the libraries are installed and imported, you must initialize your workspace with Comet and HuggingFace.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\"><span class=\"hljs-comment\"># Define API key for HuggingFace<\/span>\nos.environ[<span class=\"hljs-string\">'HUGGINGFACEHUB_API_TOKEN'<\/span>] = <span class=\"hljs-string\">\"YOUR_API_KEY\"<\/span>\n<span class=\"hljs-comment\"># Initialize the project on comet. The experiment initialization will prompt the user to paste their API token.<\/span>\ncomet_ml.login(project_name=<span class=\"hljs-string\">\"LangChain-Experiment\"<\/span>)<\/span><\/pre>\n<p class=\"graf graf--p\">Once the environment is set up, we will initialize Comet\u2019s callback handler. This handler will be passed to the HuggingFace model object. It will automatically log several aspects of the project. 
<em class=\"markup--em markup--p-em\">Remember to give the project a memorable and appropriate name<\/em>.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\"><span class=\"hljs-comment\"># define Comet Callbacks<\/span>\ncomet_callback = CometCallbackHandler(\n    project_name=<span class=\"hljs-string\">\"LangChain-Experiment\"<\/span>, <span class=\"hljs-comment\"># give the project an appropriate name<\/span>\n    complexity_metrics=<span class=\"hljs-literal\">True<\/span>,\n    stream_logs=<span class=\"hljs-literal\">True<\/span>,\n    tags=[<span class=\"hljs-string\">\"llm\"<\/span>],\n    visualizations=[<span class=\"hljs-string\">\"dep\"<\/span>, <span class=\"hljs-string\">\"ent\"<\/span>],\n)<\/span><\/pre>\n<p class=\"graf graf--p\">With everything set up, we can proceed to import models from HuggingFace in LangChain. LangChain allows users to access HuggingFace in two ways. You can directly access the HuggingFace Hub or use the HuggingFace Pipeline to download and use a model locally. We will work with the latter option.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"1\" data-code-block-lang=\"ini\"><span class=\"pre--content\"><span class=\"hljs-comment\">## Initiaize Model and Tokenizer<\/span>\n<span class=\"hljs-attr\">model_id<\/span> = <span class=\"hljs-string\">\"google\/flan-t5-large\"<\/span>\n<span class=\"hljs-attr\">tokenizer<\/span> = AutoTokenizer.from_pretrained(model_id)\n<span class=\"hljs-attr\">model<\/span> = AutoModelForSeq2SeqLM.from_pretrained(model_id)<\/span><\/pre>\n<p class=\"graf graf--p\">We have used a decent model for this demonstration, but you can choose to go for a bigger infrastructure if you have the VRAM and processing power. 
The entire HuggingFace model library can be browsed <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/huggingface.co\/models?pipeline_tag=text2text-generation&amp;sort=trending\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/huggingface.co\/models?pipeline_tag=text2text-generation&amp;sort=trending\">here<\/a>.<\/p>\n<p class=\"graf graf--p\">Now, we only have to create the pipeline with the imported Tokenizer and Model and pass the required parameters.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\"><span class=\"hljs-comment\"># Create a local text2text pipeline using initialized model<\/span>\npipe = pipeline(\n    <span class=\"hljs-string\">\"text2text-generation\"<\/span>,\n    model=model,\n    tokenizer=tokenizer,\n    max_length=<span class=\"hljs-number\">100<\/span>,\n    do_sample=<span class=\"hljs-literal\">True<\/span>,\n    temperature=<span class=\"hljs-number\">0.2<\/span>, <span class=\"hljs-comment\"># Response creativity<\/span>\n    device=<span class=\"hljs-number\">0<\/span>, <span class=\"hljs-comment\"># Use GPU<\/span>\n)\n\n\n<span class=\"hljs-comment\"># Initialize HuggingFacePipeline with the Comet Callback<\/span>\nlocal_llm = HuggingFacePipeline(pipeline=pipe, callbacks=[comet_callback], verbose = <span class=\"hljs-literal\">True<\/span>)<\/span><\/pre>\n<p class=\"graf graf--p\">Now, we can use the LLM as it is.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\"><span class=\"hljs-comment\"># Direct prediction from pre-trained LLM<\/span>\nlocal_llm.predict(<span class=\"hljs-string\">\"Q: What is 2 multiplied by 6\"<\/span>)<\/span><\/pre>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"1\" data-code-block-lang=\"ruby\"><span class=\"pre--content\"><span 
class=\"hljs-meta prompt_\">&gt;&gt;<\/span> <span class=\"hljs-string\">'2'<\/span><\/span><\/pre>\n<p class=\"graf graf--p\">The initial results are not that great. The model outputs \u20182\u2019 in response to the mathematical question.<\/p>\n<p class=\"graf graf--p\">We can use prompt templates to steer the model toward a better response.<\/p>\n<h4 class=\"graf graf--h4\">Prompt Templates<\/h4>\n<p class=\"graf graf--p\">LangChain allows users to create prompt templates that help the LLM understand how it is supposed to generate a response. Let\u2019s see it in action.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\"><span class=\"hljs-comment\"># Create Prompt template to output polished results<\/span>\ntemplate = <span class=\"hljs-string\">\"\"\"Question: {question}\n\nAnswer: Let's think step by step.\"\"\"<\/span>\n\nprompt = PromptTemplate.from_template(template)<\/span><\/pre>\n<p class=\"graf graf--p\">Let\u2019s predict using this template.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">chain = prompt | local_llm\n<span class=\"hljs-comment\"># prompt LLM using the defined tempalte<\/span>\nquestion = <span class=\"hljs-string\">\"Q: What is 2 multiplied by 6\"<\/span>\nchain.invoke({<span class=\"hljs-string\">\"question\"<\/span>: question})<\/span><\/pre>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"1\" data-code-block-lang=\"ruby\"><span class=\"pre--content\"><span class=\"hljs-meta prompt_\">&gt;&gt;<\/span> <span class=\"hljs-string\">'2 multiplied by 6 equals 6 * 2 = 12. 
The answer: 12.'<\/span><\/span><\/pre>\n<p class=\"graf graf--p\">Now we see the LLM has reasoned through the answer step by step and arrived at the correct response.<\/p>\n<p class=\"graf graf--p\">Let\u2019s try another template.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\"><span class=\"hljs-comment\"># Create Prompt template to output polished results<\/span>\ntranslation_template = <span class=\"hljs-string\">\"\"\"Translate the following sentence to {language}.\n\nSentence: {sentence}\n\nHere is the text translation in {language}:\n\"\"\"<\/span>\n\ntranslation_prompt = PromptTemplate.from_template(translation_template)<\/span><\/pre>\n<p class=\"graf graf--p\">This reusable template lets users enter any language and any sentence, effectively creating a language-translation bot. Let\u2019s see it in action.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"1\" data-code-block-lang=\"makefile\"><span class=\"pre--content\">chain2 = translation_prompt | local_llm\n<span class=\"hljs-comment\"># prompt LLM using the defined template<\/span>\nlanguage = <span class=\"hljs-string\">\"German\"<\/span>\nsentence = <span class=\"hljs-string\">\"Hello. 
I am Comet, a platform for tracking AI applications.\"<\/span>\n<span class=\"hljs-section\">chain2.invoke({\"language\": language,<\/span>\n               <span class=\"hljs-string\">\"sentence\"<\/span>: sentence})<\/span><\/pre>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"1\" data-code-block-lang=\"ruby\"><span class=\"pre--content\"><span class=\"hljs-meta prompt_\">&gt;&gt;<\/span> <span class=\"hljs-string\">'Ich bin Comet, eine Plattform f\u00fcr die Beobachtung von AI-Anwendungen.'<\/span><\/span><\/pre>\n<p class=\"graf graf--p\">And now we have a language-translation tool.<\/p>\n<p class=\"graf graf--p\">Now, let\u2019s see how our experimentation is logged in Comet.<\/p>\n<h4 class=\"graf graf--h4\">Logging in&nbsp;Comet<\/h4>\n<p class=\"graf graf--p\">Comet logs multiple experiments under a <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/www.comet.com\/haziqa-sajid\/langchain-experiment\/view\/new\/panels\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/www.comet.com\/haziqa-sajid\/langchain-experiment\/view\/new\/panels\">single project<\/a>. This is helpful when you want to experiment with various parameters and compare the results across the board. The project panel displays comparison metrics for each experiment performed.<\/p>\n<figure class=\"graf graf--figure\">\n<\/figure><\/div><\/div><\/section>\n\n\n\n<figure class=\"wp-block-image alignnone graf-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/0*_q7g0boptFP9JJcC\" alt=\"\"\/><figcaption class=\"wp-element-caption\">Project Dashboard<\/figcaption><\/figure>\n\n\n\n<p class=\"graf graf--p\">It has logged readability scores and difficult-word counts for each iteration. Further drilling down into a single experiment provides more insight into the run. 
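These complexity metrics come from the textstat package installed earlier (enabled by `complexity_metrics=True` in the callback handler). As one example of what such a metric computes, here is a plain-Python sketch of the Coleman-Liau index; the regex-based word and sentence counting is a simplifying assumption and may differ slightly from textstat's implementation:

```python
import re

# Coleman-Liau index: 0.0588*L - 0.296*S - 15.8, where L is letters
# per 100 words and S is sentences per 100 words (standard formula).
# Word/sentence counting here is a rough regex-based approximation.
def coleman_liau(text: str) -> float:
    words = re.findall(r"[A-Za-z]+", text)
    letters = sum(len(w) for w in words)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    L = letters / len(words) * 100
    S = sentences / len(words) * 100
    return 0.0588 * L - 0.296 * S - 15.8

score = coleman_liau("The cat sat on the mat.")
```

Higher scores correspond to text that requires a higher (US school) grade level to read.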
Comet has logged a variety of metrics, including the Readability Index, Coleman-Liau index, and Crawford score.<\/p>\n\n\n\n<figure class=\"graf graf--figure\">\n<\/figure>\n\n\n\n<figure class=\"wp-block-image alignnone graf-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/0*G8_o4DxB1kVN3aLA\" alt=\"\"\/><figcaption class=\"wp-element-caption\">Metrics<\/figcaption><\/figure>\n\n\n\n<p class=\"graf graf--p\">Additionally, it has logged GPU and CPU metrics for system monitoring.<\/p>\n\n\n\n<figure class=\"graf graf--figure\">\n<\/figure>\n\n\n\n<figure class=\"wp-block-image alignnone graf-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/0*DlhZGEtlKmro3ksd\" alt=\"\"\/><figcaption class=\"wp-element-caption\">System Metrics<\/figcaption><\/figure>\n\n\n\n<p class=\"graf graf--p\">Lastly, Comet has logged all the text inputs and outputs generated during the experiment. These are logged in order of occurrence to keep track of what was passed to the model and when.<\/p>\n\n\n\n<figure class=\"graf graf--figure\">\n<\/figure>\n\n\n\n<figure class=\"wp-block-image alignnone graf-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/0*aDGMft1PENYXTI5C\" alt=\"\"\/><figcaption class=\"wp-element-caption\">Prompt Sequences<\/figcaption><\/figure>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\">Key Takeaways<\/h3>\n\n\n\n<p class=\"graf graf--p\">With technological advancements, LLMs have become more accessible than ever. They can be used to build various language-based applications, such as intuitive chatbots or language translation bots.<\/p>\n\n\n\n<p class=\"graf graf--p\">LangChain is an open-source framework that facilitates the construction of LLM-based applications. 
It provides access to various open-source and commercial models, along with fine-tuning and model integration features.<\/p>\n\n\n\n<p class=\"graf graf--p\">It also supports easy experiment logging with Comet using the Callback Handler. Comet automatically logs all experiment metrics, as well as the inputs and outputs received. It also compiles various experiments under a single project for easy comparison.<\/p>\n\n\n\n<p class=\"graf graf--p\">Comet is an AI-centric platform for tracking all machine-learning projects. It integrates with various Machine Learning (ML) tools and frameworks such as YOLO and HuggingFace and provides features like Data Versioning, Model Registration, and Automated Logging.<\/p>\n\n\n\n<p class=\"graf graf--p\">To learn more, <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/www.live-cometml.pantheonsite.io\/about-us\/contact-us\/\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/www.live-cometml.pantheonsite.io\/about-us\/contact-us\/\">schedule a demo<\/a> today.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The advent of Large Language Models (LLMs) has changed the Artificial Intelligence (AI) landscape. These powerful Natural Language Processing (NLP) models bring conversational AI to mainstream applications as business leaders move to integrate their products with chat-based capabilities. During this language revolution, LangChain has been the pioneer in constructing production-grade LLM-based applications. 
It is an [&hellip;]<\/p>\n","protected":false},"author":54,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[65,7],"tags":[40,14,64,70,71,52,31,16,32],"coauthors":[156],"class_list":["post-8112","post","type-post","status-publish","format-standard","hentry","category-llmops","category-tutorials","tag-comet","tag-comet-ml","tag-cometllm","tag-langchain","tag-language-models","tag-llm","tag-llmops","tag-ml-experiment-management","tag-nlp"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Tracking LangChain Projects with\u00a0Comet - Comet<\/title>\n<meta name=\"description\" content=\"LangChain is the pioneer in constructing production-grade LLM apps. Comet provides easy integration with LangChain using callback functions.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/tracking-langchain-projects-with-comet\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Tracking LangChain Projects with\u00a0Comet\" \/>\n<meta property=\"og:description\" content=\"LangChain is the pioneer in constructing production-grade LLM apps. 
Comet provides easy integration with LangChain using callback functions.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.comet.com\/site\/blog\/tracking-langchain-projects-with-comet\/\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" content=\"2023-11-03T19:16:50+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-04-24T17:04:37+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*qTFhuOxxBrjAC7d4jYZZaA.png\" \/>\n<meta name=\"author\" content=\"Haziqa Sajid\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Haziqa Sajid\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"9 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Tracking LangChain Projects with\u00a0Comet - Comet","description":"LangChain is the pioneer in constructing production-grade LLM apps. Comet provides easy integration with LangChain using callback functions.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/tracking-langchain-projects-with-comet\/","og_locale":"en_US","og_type":"article","og_title":"Tracking LangChain Projects with\u00a0Comet","og_description":"LangChain is the pioneer in constructing production-grade LLM apps. 
Comet provides easy integration with LangChain using callback functions.","og_url":"https:\/\/www.comet.com\/site\/blog\/tracking-langchain-projects-with-comet\/","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2023-11-03T19:16:50+00:00","article_modified_time":"2025-04-24T17:04:37+00:00","og_image":[{"url":"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*qTFhuOxxBrjAC7d4jYZZaA.png","type":"","width":"","height":""}],"author":"Haziqa Sajid","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Haziqa Sajid","Est. reading time":"9 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/tracking-langchain-projects-with-comet\/#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/tracking-langchain-projects-with-comet\/"},"author":{"name":"Haziqa Sajid","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/b8e568abee61cd8fd0c5d73b672779da"},"headline":"Tracking LangChain Projects with\u00a0Comet","datePublished":"2023-11-03T19:16:50+00:00","dateModified":"2025-04-24T17:04:37+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/tracking-langchain-projects-with-comet\/"},"wordCount":1341,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/tracking-langchain-projects-with-comet\/#primaryimage"},"thumbnailUrl":"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*qTFhuOxxBrjAC7d4jYZZaA.png","keywords":["Comet","Comet ML","CometLLM","LangChain","Language Models","LLM","LLMOps","ML Experiment Management","NLP"],"articleSection":["LLMOps","Tutorials"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/tracking-langchain-projects-with-comet\/","url":"https:\/\/www.comet.com\/site\/blog\/tracking-langchain-projects-with-comet\/","name":"Tracking 
LangChain Projects with\u00a0Comet - Comet","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/tracking-langchain-projects-with-comet\/#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/tracking-langchain-projects-with-comet\/#primaryimage"},"thumbnailUrl":"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*qTFhuOxxBrjAC7d4jYZZaA.png","datePublished":"2023-11-03T19:16:50+00:00","dateModified":"2025-04-24T17:04:37+00:00","description":"LangChain is the pioneer in constructing production-grade LLM apps. Comet provides easy integration with LangChain using callback functions.","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/tracking-langchain-projects-with-comet\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/tracking-langchain-projects-with-comet\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/tracking-langchain-projects-with-comet\/#primaryimage","url":"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*qTFhuOxxBrjAC7d4jYZZaA.png","contentUrl":"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*qTFhuOxxBrjAC7d4jYZZaA.png"},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/tracking-langchain-projects-with-comet\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"Tracking LangChain Projects with\u00a0Comet"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models 
Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/b8e568abee61cd8fd0c5d73b672779da","name":"Haziqa Sajid","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/817879efa0771c090195dd1888fca759","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/07\/cropped-1585931859188-96x96.jpg","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2023\/07\/cropped-1585931859188-96x96.jpg","caption":"Haziqa 
Sajid"},"url":"https:\/\/www.comet.com\/site\/blog\/author\/haziqa5122gmail-com\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8112","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/54"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=8112"}],"version-history":[{"count":1,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8112\/revisions"}],"predecessor-version":[{"id":15458,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8112\/revisions\/15458"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=8112"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=8112"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=8112"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=8112"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}