{"id":8888,"date":"2024-01-25T17:26:28","date_gmt":"2024-01-26T01:26:28","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=8888"},"modified":"2025-04-24T17:03:26","modified_gmt":"2025-04-24T17:03:26","slug":"enhance-conversational-agents-with-langchain-memory","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/enhance-conversational-agents-with-langchain-memory\/","title":{"rendered":"Enhance Conversational Agents with LangChain Memory"},"content":{"rendered":"\n<section class=\"section section--body\">\n<div class=\"section-divider\"><span style=\"color: var(--wpex-heading-color); font-size: var(--wpex-text-2xl); font-weight: var(--wpex-heading-font-weight); font-family: var(--wpex-body-font-family, var(--wpex-font-sans));\">LangChain Conversation Memory Types: Pros &amp; Cons, and Code Examples<\/span><\/div>\n<div class=\"section-content\">\n<div class=\"section-inner sectionLayout--insetColumn\">\n<p class=\"graf graf--p\">When it comes to chatbots and conversational agents, the ability to retain and remember information is critical to creating fluid, human-like interactions. This article describes the concept of memory in LangChain and explores its importance, implementation, and various strategies for optimizing conversation flow.<\/p>\n<blockquote class=\"graf graf--blockquote\"><p>\ud83d\udca1I write about Machine Learning on <a class=\"markup--anchor markup--blockquote-anchor\" href=\"https:\/\/medium.com\/@yennhi95zz\/subscribe\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/medium.com\/@yennhi95zz\/subscribe\">Medium<\/a> || <a class=\"markup--anchor markup--blockquote-anchor\" href=\"https:\/\/github.com\/yennhi95zz\" target=\"_blank\" rel=\"noopener ugc nofollow\" data-href=\"https:\/\/github.com\/yennhi95zz\">Github<\/a> || <a class=\"markup--anchor markup--blockquote-anchor\" href=\"https:\/\/www.kaggle.com\/nhiyen\/code\" target=\"_blank\" rel=\"noopener ugc nofollow\" data-href=\"https:\/\/www.kaggle.com\/nhiyen\/code\">Kaggle<\/a> || <a class=\"markup--anchor markup--blockquote-anchor\" href=\"https:\/\/www.linkedin.com\/in\/yennhi95zz\/\" target=\"_blank\" rel=\"noopener ugc nofollow\" data-href=\"https:\/\/www.linkedin.com\/in\/yennhi95zz\/\">Linkedin<\/a>. \ud83d\udd14 Follow \u201cNhi Yen\u201d for future updates!<\/p><\/blockquote>\n<figure class=\"graf graf--figure\">\n<\/figure><\/div><\/div><\/section>\n\n\n\n<figure class=\"wp-block-image aligncenter graf-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/0*1pJAe7F36nNdJnT6\" alt=\"\"\/><figcaption class=\"wp-element-caption\">Conversation Memory (Credit: <a href=\"http:\/\/unsplash.com\">Unsplash<\/a>)<\/figcaption><\/figure>\n\n\n\n<p class=\"graf graf--p\">\ud83d\udc49 <em class=\"markup--em markup--p-em\">I previously shared relevant articles on creating a basic chatbot without using Conversation Memory. You might find it interesting.<\/em><\/p>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\">Why Memory Matters in Conversational Agents<\/h3>\n\n\n\n<p class=\"graf graf--p\">When users interact with chatbots, they often expect a level of continuity and understanding similar to human conversations. This expectation includes the ability to refer to past information, which leads to the conversational agent\u2019s need for memory. 
Memory allows the system to remember previous interactions, process abbreviations, and perform co-reference resolution, ensuring consistent, context-aware conversations.<\/p>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\">The Memory Challenge in Large Language&nbsp;Models<\/h3>\n\n\n\n<p class=\"graf graf--p\">Large language models, including those used through LangChain, do not have their own memory. Unlike humans, who naturally retain information during conversations, these models operate on a rapid response mechanism. Attempts have been made to integrate memory directly into the Transformer architecture, but large-scale practical implementation remains a challenge.<\/p>\n\n\n\n<figure class=\"graf graf--figure\">\n<\/figure>\n\n\n\n<figure class=\"wp-block-image aligncenter graf-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*cXoHnRgshtvrd6N-u77Ing.png\" alt=\"Enhance Conversational Agents with LangChain Memory\"\/><figcaption class=\"wp-element-caption\">LangChain (Image Credit: <a href=\"https:\/\/blog.bytebytego.com\/p\/how-to-build-a-smart-chatbot-in-10\">ALEX XU on blog.bytebytego.com<\/a>)<\/figcaption><\/figure>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\">Memory Strategies in LangChain<\/h3>\n\n\n\n<p class=\"graf graf--p\">Consider a scenario in which a customer contacts a fashion store\u2019s customer service about a problem with a pair of jeans: the zipper is stuck, much like a hardware issue. I\u2019ll ask the conversational agent bot the same list of questions for each LangChain memory type:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">1. Hey! I am Nhi.\n2. How are you today?\n3. I'm doing well, thank you. I need your assistance.\n4. I bought this pair of jeans, and there's a problem.\n5. When I tried them on, the zipper got stuck, and now I can't unzip it.\n6. It seems to be a problem with the zipper.<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">In this experiment, I\u2019ll use Comet LLM to record prompts, responses, and metadata for each memory type for performance optimization purposes. This allows me to track response duration, tokens, and cost for each interaction. 
View the full project <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/www.comet.com\/yennhi95zz\/langchain-conversation-memory\/prompts\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/www.comet.com\/yennhi95zz\/langchain-conversation-memory\/prompts\"><strong class=\"markup--strong markup--p-strong\">HERE<\/strong><\/a>.<\/p>\n\n\n\n<p class=\"graf graf--p\">Comet LLM provides <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/www.comet.com\/docs\/v2\/\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/www.comet.com\/docs\/v2\/\">additional features<\/a> such as UI visualization, detailed chain execution logs, automatic tracking with OpenAI chat model, and user feedback analysis.<\/p>\n\n\n\n<p class=\"graf graf--p\">Find the complete code in this <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/github.com\/yennhi95zz\/langchain-conversation-memory-code-examples\" target=\"_blank\" rel=\"noopener\" data-href=\"https:\/\/github.com\/yennhi95zz\/langchain-conversation-memory-code-examples\"><strong class=\"markup--strong markup--p-strong\">GitHub Repository<\/strong><\/a>.<\/p>\n\n\n\n<section class=\"section section--body\">\n<div class=\"section-divider\">\n<hr class=\"section-divider\">\n<\/div>\n<div class=\"section-content\">\n<div class=\"section-inner sectionLayout--insetColumn\">\n<p class=\"graf graf--p\">To start, bring in the common libraries needed for all 6 memory types. Make sure you\u2019ve installed the necessary Python packages in <strong class=\"markup--strong markup--p-strong\"><em class=\"markup--em markup--p-em\">requirements.txt<\/em><\/strong> and have your OpenAI API and Comet API keys ready. (Reference: <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/help.openai.com\/en\/articles\/4936850-where-do-i-find-my-api-key\" target=\"_blank\" rel=\"noopener ugc nofollow\" data-href=\"https:\/\/help.openai.com\/en\/articles\/4936850-where-do-i-find-my-api-key\"><em class=\"markup--em markup--p-em\">OpenAI Help Center\u200a\u2014\u200aWhere can I find my API key?<\/em><\/a>; <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/www.comet.com\/docs\/v2\/api-and-sdk\/rest-api\/overview\/\" target=\"_blank\" rel=\"noopener ugc nofollow\" data-href=\"https:\/\/www.comet.com\/docs\/v2\/api-and-sdk\/rest-api\/overview\/\"><em class=\"markup--em markup--p-em\">CometLLM\u200a\u2014\u200aObtaining your API key<\/em><\/a>)<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"1\" data-code-block-lang=\"python\"><span class=\"pre--content\">import os\nimport dotenv\nimport comet_llm\nfrom langchain.callbacks import get_openai_callback\nimport time\ndotenv.load_dotenv()\n\nMY_OPENAI_KEY = os.getenv(\"YOUR_OPENAI_KEY\")\nMY_COMET_KEY = os.getenv(\"YOUR_COMET_KEY\")\n\n# Initialize a Comet project\ncomet_llm.init(project=\"YOUR COMET PROJECT NAME\",\n               api_key=MY_COMET_KEY)<\/span><\/pre>\n<h3 class=\"graf graf--h3\">1. Conversation Buffer&nbsp;Memory<\/h3>\n<p class=\"graf graf--p\">The simplest form of memory involves the creation of a talk buffer. In this approach, the model keeps a record of ongoing conversations and accumulates each user-agent interaction into a message. 
While effective for limited interactions, scalability becomes an issue for long conversations.<\/p>\n<p class=\"graf graf--p\">\ud83c\udfaf Implementation: Include the entire conversation in the prompt.<\/p>\n<p class=\"graf graf--p\">\u2705 Pros: Simple and effective for short interactions.<\/p>\n<p class=\"graf graf--p\">\u274c Cons: Limited by token span; impractical for lengthy conversations.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"1\" data-code-block-lang=\"python\"><span class=\"pre--content\">from langchain.chains.conversation.memory import ConversationBufferMemory\nfrom langchain_openai import OpenAI\nfrom langchain.chains import ConversationChain\nfrom langchain.callbacks import get_openai_callback\n\nllm = OpenAI(openai_api_key=MY_OPENAI_KEY,\n             temperature=0,\n             max_tokens = 256)\n\nbuffer_memory = ConversationBufferMemory()\n\nconversation = ConversationChain(\n    llm=llm,\n    verbose=True,\n    memory=buffer_memory\n)<\/span><\/pre>\n<p class=\"graf graf--p\">Interact with the conversational agent bot by inputting each prompt in <code class=\"markup--code markup--p-code\">conversation.predict()<\/code>. Example:<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">conversation.predict(input=\"Hey! I am Nhi.\")<\/span><\/pre>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">conversation.predict(input=\"How are you today?\")<\/span><\/pre>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">conversation.predict(input=\"I'm doing well, thank you. I need your assistant.\")<\/span><\/pre>\n<p class=\"graf graf--p\">Output after the 3rd prompt:<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"yaml\"><span class=\"pre--content\">&gt; Entering new ConversationChain chain...\nPrompt after formatting:\nThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\nHuman: Hey! I am Nhi.\nAI:  Hi Nhi! My name is AI. It's nice to meet you. What can I do for you today?\nHuman: How are you today?\nAI:  I'm doing great, thanks for asking! I'm feeling very excited about the new projects I'm working on. How about you?\nHuman: I'm doing well, thank you. I need your assistant.\nAI:\n\n&gt; Finished chain.\n' Sure thing! What kind of assistance do you need?'<\/span><\/pre>\n<p class=\"graf graf--p\"><code class=\"markup--code markup--p-code\">ConversationBufferMemory<\/code> allows conversations to grow with each turn and allows users to see the entire conversation history at any time. 
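<\/p>\n<p class=\"graf graf--p\">As a side note, the memory object can also be used on its own, without going through the chain. Here is a minimal, standalone sketch; the <code class=\"markup--code markup--p-code\">demo_memory<\/code> object and the seeded turn are hypothetical and separate from the <code class=\"markup--code markup--p-code\">buffer_memory<\/code> wired into the chain above:<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"1\" data-code-block-lang=\"python\"><span class=\"pre--content\"># Standalone demo: ConversationBufferMemory outside a chain (illustrative only)\ndemo_memory = ConversationBufferMemory()\n\n# save_context() appends one human-AI exchange to the buffer\ndemo_memory.save_context({\"input\": \"Hey! I am Nhi.\"}, {\"output\": \"Hi Nhi! How can I help?\"})\n\n# load_memory_variables() returns the accumulated history under the \"history\" key\nprint(demo_memory.load_memory_variables({})[\"history\"])<\/span><\/pre>\n<p class=\"graf graf--p\">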
This approach allows ongoing interactions to be monitored and maintained, providing a simple but powerful form of memory for language models, especially in scenarios where the number of interactions with the system is limited.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">print(conversation.memory.buffer)<\/span><\/pre>\n<p class=\"graf graf--p\">Output:<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"yaml\"><span class=\"pre--content\">Human: Hey! I am Nhi.\nAI:  Hi Nhi! My name is AI. It's nice to meet you. What can I do for you today?\nHuman: How are you today?\nAI:  I'm doing great, thanks for asking! I'm feeling very excited about the new projects I'm working on. How about you?\nHuman: I'm doing well, thank you. I need your assistant.\nAI:  Sure thing! What kind of assistance do you need?<\/span><\/pre>\n<p class=\"graf graf--p\">Use Comet LLM to complete the code and record relevant information for analysis purposes.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"1\" data-code-block-lang=\"python\"><span class=\"pre--content\">def conversation_memory_buffer(prompt):\n    with get_openai_callback() as cb:\n        start_time = time.time()\n        response = conversation.predict(input=prompt)\n        end_time = time.time()\n        print(f\"Response: {response}\")\n        print(f\"History: {conversation.memory.buffer}\")\n\n        # Log to comet_llm\n        comet_llm.log_prompt(\n            prompt=prompt,\n            output=response,\n            duration= end_time - start_time,\n            prompt_template = conversation.prompt.template,\n            metadata={\n            \"memory_type\": \"conversation_buffer_memory\",\n            \"history\": conversation.memory.buffer,\n            \"total_tokens\": cb.total_tokens,\n            \"prompt_tokens\": cb.prompt_tokens,\n            \"completion_tokens\": cb.completion_tokens,\n            \"total_cost_usd\": cb.total_cost\n            },\n        )<\/span><\/pre>\n<p class=\"graf graf--p\">Call the function <code class=\"markup--code markup--p-code\">conversation_memory_buffer()<\/code>:<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">Example:\nconversation_memory_buffer(\"Hey! I am Nhi.\")\nconversation_memory_buffer(\"How are you today?\")\nconversation_memory_buffer(\"I'm doing well, thank you. 
I need your assistant.\")<\/span><\/pre>\n<p class=\"graf graf--p\">How the data is logged in Comet LLM:<\/p>\n<figure class=\"graf graf--figure\">\n<\/figure><\/div><\/div><\/section>\n\n\n\n<figure class=\"wp-block-image aligncenter graf-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*rA-zKN1cb4vEVhT2p_1oiA.png\" alt=\"Enhance Conversational Agents with LangChain Memory\"\/><figcaption class=\"wp-element-caption\">Input\/output are logged in Comet LLM (Image by the\u00a0Author)<\/figcaption><\/figure>\n\n\n\n<figcaption class=\"imageCaption\"><\/figcaption>\n\n\n\n<figure class=\"graf graf--figure\">\n<\/figure>\n\n\n\n<figure class=\"wp-block-image aligncenter graf-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*d7KgwOIc6kMgg1QPZyaC4g.png\" alt=\"Enhance Conversational Agents with LangChain Memory\"\/><figcaption class=\"wp-element-caption\">Logging the metadata in Comet LLM (Image by the Author)<\/figcaption><\/figure>\n\n\n\n<p>&nbsp;\n<\/p>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\">2. Conversation Summary&nbsp;Memory<\/h3>\n\n\n\n<p class=\"graf graf--p\">To overcome the scalability challenge, Conversation Summary Memory provides a solution. Rather than accumulating each interaction, the model generates a condensed summary of the essence of the conversation. This reduces the number of tokens and increases the sustainability of long-term interactions.<\/p>\n\n\n\n<p class=\"graf graf--p\">\ud83c\udfaf Implementation: Summarize the conversation to save tokens.<\/p>\n\n\n\n<p class=\"graf graf--p\">\u2705 Pros: Efficient token usage over time.<\/p>\n\n\n\n<p class=\"graf graf--p\">\u274c Cons: May lose fine-grained details; suitable for concise interactions.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">from langchain.chains.conversation.memory import ConversationSummaryMemory\nfrom langchain import OpenAI\nfrom langchain.chains import ConversationChain\nfrom langchain.callbacks import get_openai_callback\n\n# Create an instance of the OpenAI class with specified parameters\nllm = OpenAI(openai_api_key=MY_OPENAI_KEY,\n            temperature=0,\n            max_tokens = 256)\n\n# Create an instance of the ConversationSummaryMemory class\nsummary_memory = ConversationSummaryMemory(llm=OpenAI(openai_api_key=MY_OPENAI_KEY))\n\n# Create an instance of the ConversationChain class, combining OpenAI, verbose mode, and memory\nconversation = ConversationChain(\n    llm=llm,\n    verbose=True,\n    memory=summary_memory\n)<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">Similar to ConversationBufferMemory, we\u2019ll also interact with the conversational agent bot by inputting each prompt in <code class=\"markup--code markup--p-code\">conversation.predict()<\/code>. Example:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">conversation.predict(input=\"Hey! I am Nhi.\")<\/span><\/pre>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">conversation.predict(input=\"How are you today?\")<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">Output:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">&gt; Entering new ConversationChain chain...\nPrompt after formatting:\nThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. 
If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n\nNhi introduces themselves to the AI and the AI responds with a greeting, revealing its name to be AI. The AI offers to assist Nhi with any tasks and asks how it can help them today. The human then asks how the AI is doing, to which it responds that it is doing great and asks how it can be of assistance.\nHuman: I'm doing well, thank you. I need your assistant.\nAI:\n\n&gt; Finished chain.\n' Absolutely! How can I help you today?'<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">The <strong class=\"markup--strong markup--p-strong\"><em class=\"markup--em markup--p-em\">ConversationSummaryMemory<\/em><\/strong> generates the conversation summary over multiple interactions.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">Current conversation:\nNhi introduces themselves to the AI and the AI responds with a greeting, revealing its name to be AI. The AI offers to assist Nhi with any tasks and asks how it can help them today. The human then asks how the AI is doing, to which it responds that it is doing great and asks how it can be of assistance.<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">The summaries capture key points of the conversation, including introductions, queries, and responses. As the conversation progresses, the model generates summarized versions, emphasizing that it tends to use fewer tokens over time. The summary\u2019s ability is to retain essential information, making it useful for reviewing the conversation as a whole.<\/p>\n\n\n\n<p class=\"graf graf--p\">Additionally, we can access and print out the conversation summary.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">print(conversation.memory.buffer)<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">Output:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">Nhi introduces themselves to the AI and the AI responds with a greeting, revealing its name to be AI. The AI offers to assist Nhi with any tasks and asks how it can help them today. The human asks how the AI is doing, to which it responds that it is doing great and asks how it can be of assistance. 
The human informs the AI that they need its assistance and the AI eagerly offers to help.<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">Use Comet LLM to log important information for analysis purposes in your code.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">def conversation_summary_memory(prompt):\n    with get_openai_callback() as cb:\n        start_time = time.time()\n        response = conversation.predict(input=prompt)\n        end_time = time.time()\n        print(f\"Response: {response}\")\n        print(f\"History: {conversation.memory.buffer}\")\n\n        # Log to comet_llm\n        comet_llm.log_prompt(\n            prompt=prompt,\n            output=response,\n            duration= end_time - start_time,\n            prompt_template = conversation.prompt.template,\n            metadata={\n            \"memory_type\": \"conversation_summary_memory\",\n            \"history\": conversation.memory.buffer,\n            \"total_tokens\": cb.total_tokens,\n            \"prompt_tokens\": cb.prompt_tokens,\n            \"completion_tokens\": cb.completion_tokens,\n            \"total_cost_usd\": cb.total_cost\n            },\n        )<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">Call the function <code class=\"markup--code markup--p-code\">conversation_summary_memory()<\/code>:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">Example:\nconversation_summary_memory(\"Hey! I am Nhi.\")\nconversation_summary_memory(\"How are you today?\")\nconversation_summary_memory(\"I'm doing well, thank you. I need your assistant.\")<\/span><\/pre>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\">3. Conversation Buffer Window&nbsp;Memory<\/h3>\n\n\n\n<p class=\"graf graf--p\">Conversation Buffer Window Memory is an alternative version of the conversation buffer approach, which involves setting a limit on the number of interactions considered within a memory buffer. This balances memory depth and token efficiency, and provides flexibility to adapt to conversation windows.<\/p>\n\n\n\n<p class=\"graf graf--p\">\ud83c\udfaf Implementation: Retain the last N interactions.<\/p>\n\n\n\n<p class=\"graf graf--p\">\u2705 Pros: Balances memory and token constraints.<\/p>\n\n\n\n<p class=\"graf graf--p\">\u274c Cons: Potential loss of early conversation context.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">from langchain.chains.conversation.memory import ConversationBufferWindowMemory\nfrom langchain import OpenAI\nfrom langchain.chains import ConversationChain\nfrom langchain.callbacks import get_openai_callback\n\n# Create an instance of the OpenAI class with specified parameters\nllm = OpenAI(openai_api_key=MY_OPENAI_KEY,\n            model_name='text-davinci-003',\n            temperature=0,\n            max_tokens = 256)\n\n# Create an instance of the ConversationBufferWindowMemory class\n# We set a low k=2, to only keep the last 2 interactions in memory\nwindow_memory = ConversationBufferWindowMemory(k=2)\n\n# Create an instance of the ConversationChain class, combining OpenAI, verbose mode, and memory\nconversation = ConversationChain(\n    llm=llm,\n    verbose=True,\n    memory=window_memory\n)<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">Note that in this demo we will set a low value <code class=\"markup--code markup--p-code\">k=2<\/code>, but you can adjust it depending on the memory depth you need. 
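<\/p>\n\n\n\n<p class=\"graf graf--p\">To see what the window does before involving the chain, here is a minimal, standalone sketch reusing the <code class=\"markup--code markup--p-code\">ConversationBufferWindowMemory<\/code> import from the snippet above; the <code class=\"markup--code markup--p-code\">demo_window<\/code> object and the seeded turns are hypothetical and never call the LLM:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\"># Standalone demo: with k=2, only the two most recent exchanges are kept\ndemo_window = ConversationBufferWindowMemory(k=2)\ndemo_window.save_context({\"input\": \"Hey! I am Nhi.\"}, {\"output\": \"Hi Nhi!\"})\ndemo_window.save_context({\"input\": \"How are you today?\"}, {\"output\": \"Doing great, thanks!\"})\ndemo_window.save_context({\"input\": \"I need your assistance.\"}, {\"output\": \"Sure, how can I help?\"})\n\n# Only the last two exchanges are returned; the first one has been dropped\nprint(demo_window.load_memory_variables({})[\"history\"])<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">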
Four prompts are introduced into the model in the following order:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">conversation.predict(input=\"Hey! I am Nhi.\")<\/span><\/pre>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">conversation.predict(input=\"I'm doing well, thank you. I need your assistant.\")<\/span><\/pre>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">conversation.predict(input=\"I bought this pair of jeans, and there's a problem.\")<\/span><\/pre>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">conversation.predict(input=\"When I tried them on, the zipper got stuck, and now I can't unzip it.\")<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">In the demo, the prompt contains an ongoing conversation, and the model tracks a set number of recent interactions. However, only the last 02 interactions are considered, so the previous interactions are not preserved.<\/p>\n\n\n\n<figure class=\"graf graf--figure\">\n<\/figure>\n\n\n\n<figure class=\"wp-block-image aligncenter graf-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*uUmaMrVdHRq4Zf4t3DNbjw.png\" alt=\"\"\/><figcaption class=\"wp-element-caption\">Our output<\/figcaption><\/figure>\n\n\n\n<p class=\"graf graf--p\">Memory depth can be adjusted based on factors such as token usage and cost considerations. We compare our approach with previous approaches and emphasize selectively preserving recent interactions.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">print(conversation.memory.buffer)<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">Output:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">Human: I bought this pair of jeans, and there's a problem.\nAI:  Oh no! What kind of problem are you having with the jeans?\nHuman: When I tried them on, the zipper got stuck, and now I can't unzip it.\nAI:  That sounds like a tricky problem. Have you tried lubricating the zipper with a bit of oil or WD-40? That might help loosen it up.<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">Use Comet LLM to log important information for analysis in your code:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">def conversation_buffer_window_memory(prompt):\n    with get_openai_callback() as cb:\n        start_time = time.time()\n        response = conversation.predict(input=prompt)\n        end_time = time.time()\n        print(f\"Response: {response}\")\n        print(f\"History: {conversation.memory.buffer}\")\n\n        # Log to comet_llm\n        comet_llm.log_prompt(\n            prompt=prompt,\n            output=response,\n            duration= end_time - start_time,\n            prompt_template = conversation.prompt.template,\n            metadata={\n            \"memory_type\": \"conversation_buffer_window_memory\",\n            \"history\": conversation.memory.buffer,\n            \"total_tokens\": cb.total_tokens,\n            \"prompt_tokens\": cb.prompt_tokens,\n            \"completion_tokens\": cb.completion_tokens,\n            \"total_cost_usd\": cb.total_cost\n            },\n        )<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">Call the function <code class=\"markup--code markup--p-code\">conversation_buffer_window_memory()<\/code>:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">Example:\nconversation_buffer_window_memory(\"Hey! 
I am Nhi.\")\nconversation_buffer_window_memory(\"How are you today?\")\nconversation_buffer_window_memory(\"I'm doing well, thank you. I need your assistant.\")<\/span><\/pre>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\">4. Conversation Summary Buffer Memory: A Combination of Conversation Summary and Buffer&nbsp;Memory<\/h3>\n\n\n\n<p class=\"graf graf--p\">Conversation Summary Buffer Memory keeps a buffer of recent interactions in memory, but compiles them into a digest and uses both, rather than just removing old interactions completely.<\/p>\n\n\n\n<p class=\"graf graf--p\">\ud83c\udfaf Implementation: Merge summary and buffer for optimal memory usage.<\/p>\n\n\n\n<p class=\"graf graf--p\">\u2705 Pros: Provides a comprehensive view of recent and summarized interactions.<\/p>\n\n\n\n<p class=\"graf graf--p\">\u274c Cons: Requires careful tuning for specific use cases.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">from langchain.chains.conversation.memory import ConversationSummaryBufferMemory\nfrom langchain import OpenAI\nfrom langchain.chains import ConversationChain\nfrom langchain.callbacks import get_openai_callback\n\n\n# Create an instance of the OpenAI class with specified parameters\nllm = OpenAI(openai_api_key=MY_OPENAI_KEY,\n            temperature=0,\n            max_tokens = 512)\n\n# Create an instance of the ConversationSummaryBufferMemory class\n# Setting k=2: Retains only the last 2 interactions in memory.\n# max_token_limit=40: Requires the installation of transformers.\nsummary_buffer_memory = ConversationSummaryBufferMemory(llm=OpenAI(openai_api_key=MY_OPENAI_KEY), max_token_limit=40)\n\n# Create an instance of the ConversationChain class, combining OpenAI, verbose mode, and memory\nconversation = ConversationChain(\n    llm=llm,\n    memory=summary_buffer_memory,\n    verbose=True\n)<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">Feed the model with 4 prompts in an order:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">conversation.predict(input=\"Hey! I am Nhi.\")<\/span><\/pre>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">conversation.predict(input=\"I bought this pair of jeans, and there's a problem.\")<\/span><\/pre>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">conversation.predict(input=\"When I tried them on, the zipper got stuck, and now I can't unzip it.\")<\/span><\/pre>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">conversation.predict(input=\"It seems to be a problem with the zipper.\")<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">The model retains the last 2 interactions and summarize the older ones as below:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">&gt; Entering new ConversationChain chain...\nPrompt after formatting:\nThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\nSystem:\nNhi introduces herself to the AI, and the AI responds by introducing itself and expressing pleasure in meeting Nhi. Nhi mentions a problem with a pair of jeans she bought, and the AI asks for more details. Nhi explains that the zipper got stuck and now she can't unzip it.\nAI:  That sounds like a tricky situation. Have you tried lubricating the zipper with a bit of oil or soap? 
That might help it move more smoothly.\nHuman: It seems to be a problem with the zipper.\nAI:\n\n&gt; Finished chain.\n' It sounds like the zipper is stuck. Have you tried lubricating it with a bit of oil or soap? That might help it move more smoothly.'<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">Print out the result:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">print(conversation.memory.moving_summary_buffer)<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">Output:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">Nhi introduces themselves to the AI and the AI responds by introducing itself and offering assistance. Nhi explains a problem with a pair of jeans they bought, stating that the zipper is stuck and they can't unzip it. The AI suggests lubricating the zipper with oil or soap to help it move more smoothly or replacing it if that doesn't work.<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">Use Comet LLM in your code to log important information for analysis:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">def conversation_summary_buffer_memory(prompt):\n    with get_openai_callback() as cb:\n        start_time = time.time()\n        response = conversation.predict(input=prompt)\n        end_time = time.time()\n        print(f\"Response: {response}\")\n        print(f\"History: {conversation.memory.moving_summary_buffer}\")\n\n        # Log to comet_llm\n        comet_llm.log_prompt(\n            prompt=prompt,\n            output=response,\n            duration= end_time - start_time,\n            prompt_template = conversation.prompt.template,\n            metadata={\n            \"memory_type\": \"conversation_summary_buffer_memory\",\n            \"history\": conversation.memory.moving_summary_buffer,\n            \"total_tokens\": cb.total_tokens,\n            \"prompt_tokens\": cb.prompt_tokens,\n            \"completion_tokens\": cb.completion_tokens,\n            \"total_cost_usd\": cb.total_cost\n            },\n        )<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">Call the function <code class=\"markup--code markup--p-code\">conversation_summary_buffer_memory()<\/code>:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">Example:\nconversation_summary_buffer_memory(\"Hey! I am Nhi.\")\nconversation_summary_buffer_memory(\"I bought this pair of jeans, and there's a problem.\")\nconversation_summary_buffer_memory(\"When I tried them on, the zipper got stuck, and now I can't unzip it.\")\nconversation_summary_buffer_memory(\"It seems to be a problem with the zipper.\")<\/span><\/pre>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\">5. Conversation Knowledge Graph&nbsp;Memory<\/h3>\n\n\n\n<p class=\"graf graf--p\">LangChain goes beyond just conversation tracking and introduces knowledge graph memory. It builds a mini knowledge graph based on related information, creating nodes and connections to represent key entities. 
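<\/p>\n\n\n\n<p class=\"graf graf--p\">Before wiring it into a chain (the full setup follows below), here is a minimal, standalone sketch of how the memory object extracts triples on its own. The <code class=\"markup--code markup--p-code\">kg_demo<\/code> object and the example turn are hypothetical, and <code class=\"markup--code markup--p-code\">llm<\/code> is assumed to be an OpenAI instance as in the code block that follows:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">from langchain.chains.conversation.memory import ConversationKGMemory\n\n# Standalone demo: each saved turn is mined for (subject, relation, object) triples\nkg_demo = ConversationKGMemory(llm=llm)\nkg_demo.save_context({\"input\": \"Hi, I am Nhi and my jeans have a stuck zipper.\"},\n                     {\"output\": \"Sorry to hear that, Nhi. Let's take a look at the zipper.\"})\n\n# Inspect the mini knowledge graph built so far (the exact triples depend on the LLM's extraction)\nprint(kg_demo.kg.get_triples())<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">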
This method improves the model\u2019s ability to understand and respond to situations.<\/p>\n\n\n\n<p class=\"graf graf--p\">\ud83c\udfaf Implementation: Extract relevant information and construct a knowledge graph.<\/p>\n\n\n\n<p class=\"graf graf--p\">\u2705 Pros: Enables structured information extraction.<\/p>\n\n\n\n<p class=\"graf graf--p\">\u274c Cons: May require additional processing for complex scenarios.<\/p>\n\n\n\n<figure class=\"graf graf--figure\">\n<\/figure>\n\n\n\n<figure class=\"wp-block-image aligncenter graf-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*zRCHDlNCPXUUVuED9QjrXQ.png\" alt=\"Enhance Conversational Agents with LangChain Memory\"\/><figcaption class=\"wp-element-caption\">Knowledge Graph Memory in LLM (Image by the\u00a0Author)<\/figcaption><\/figure>\n\n\n\n<figcaption class=\"imageCaption\"><\/figcaption>\n\n\n\n<p class=\"graf graf--p\">Python Code example:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">from langchain.chains.conversation.memory import ConversationKGMemory\nfrom langchain import OpenAI\nfrom langchain.chains import ConversationChain\nfrom langchain.prompts.prompt import PromptTemplate\nfrom langchain.callbacks import get_openai_callback\n\n# Create an instance of the OpenAI class with specified parameters\nllm = OpenAI(openai_api_key=MY_OPENAI_KEY,\n            temperature=0,\n            max_tokens = 256)\n\n# Define a template for conversation prompts\ntemplate = \"\"\"\nThis is a friendly chat between a human and an AI. The AI shares details from its context. \\n\nIf it doesn't know the answer, it honestly admits it. The AI sticks to information in the 'Relevant Information' section and doesn't make things up. \\n\\n\n\nRelevant Information: \\n\n\n{history}\n\nConversation: \\n\nHuman: {input} \\n\nAI:\"\"\"\n\n# Create an instance of the PromptTemplate class with specified input variables and template\nprompt = PromptTemplate(input_variables=[\"history\", \"input\"], template=template)\n\n# Create an instance of the ConversationKGMemory class with the OpenAI instance\nkg_memory = ConversationKGMemory(llm=llm)\n\n# Create an instance of the ConversationChain class, combining OpenAI, verbose mode, prompt, and memory\nconversation = ConversationChain(\n    llm=llm,\n    verbose=True,\n    prompt=prompt,\n    memory=kg_memory\n)<\/span><\/pre>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">def conversation_kg_memory(prompt):\n    with get_openai_callback() as cb:\n        start_time = time.time()\n        response = conversation.predict(input=prompt)\n        end_time = time.time()\n        print(f\"Response: {response}\")\n        print(f\"History: {str(conversation.memory.kg.get_triples())}\")\n\n        # Log to comet_llm\n        comet_llm.log_prompt(\n            prompt=prompt,\n            output=response,\n            duration= end_time - start_time,\n            prompt_template = conversation.prompt.template,\n            metadata={\n            \"memory_type\": \"conversation_kg_memory\",\n            \"history\": str(conversation.memory.kg.get_triples()),\n            \"total_tokens\": cb.total_tokens,\n            \"prompt_tokens\": cb.prompt_tokens,\n            \"completion_tokens\": cb.completion_tokens,\n            \"total_cost_usd\": cb.total_cost\n            },\n        )<\/span><\/pre>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\">6. 
Entity&nbsp;Memory<\/h3>\n\n\n\n<p class=\"graf graf--p\">Similar to Knowledge Graph memory, entity memory extracts specific entities from the conversation, such as names, objects, or locations. This focused approach aids in understanding and responding to user queries with greater precision.<\/p>\n\n\n\n<p class=\"graf graf--p\">\ud83c\udfaf Implementation: Identify and store specific entities from the conversation.<\/p>\n\n\n\n<p class=\"graf graf--p\">\u2705 Pros: Facilitates extraction of key information for decision-making.<\/p>\n\n\n\n<p class=\"graf graf--p\">\u274c Cons: Sensitivity to entity recognition accuracy.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">from langchain import OpenAI, ConversationChain\nfrom langchain.chains.conversation.memory import ConversationEntityMemory\nfrom langchain.chains.conversation.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE\nfrom pydantic import BaseModel\nfrom typing import List, Dict, Any\nfrom langchain.callbacks import get_openai_callback\n\n# Print the template used for conversation prompts\nENTITY_MEMORY_CONVERSATION_TEMPLATE.template\nprint(ENTITY_MEMORY_CONVERSATION_TEMPLATE.template)\n\n# Create an instance of the OpenAI class with specified parameters\nllm = OpenAI(openai_api_key=MY_OPENAI_KEY,\n            temperature=0,\n            max_tokens = 256)\n\n# Create an instance of the ConversationEntityMemory class with the OpenAI instance\nentity_memory = ConversationEntityMemory(llm=llm)\n\n# Create an instance of the ConversationChain class, combining OpenAI, verbose mode, prompt, and memory\nconversation = ConversationChain(\n    llm=llm,\n    verbose=True,\n    prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE,\n    memory=entity_memory\n)<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">Create a function to get the response and log important metadata using CometLLM:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">def conversation_entity_memory(prompt):\n    with get_openai_callback() as cb:\n        start_time = time.time()\n        response = conversation.predict(input=prompt)\n        end_time = time.time()\n        print(f\"Response: {response}\")\n        print(f\"History: {conversation.memory.entity_store.store}\")\n\n        # Log to comet_llm\n        comet_llm.log_prompt(\n            prompt=prompt,\n            output=response,\n            duration= end_time - start_time,\n            prompt_template = conversation.prompt.template,\n            metadata={\n            \"memory_type\": \"conversation_entity_memory\",\n            \"entity_cache\": conversation.memory.entity_cache,\n            \"history\": conversation.memory.entity_store.store,\n            \"total_tokens\": cb.total_tokens,\n            \"prompt_tokens\": cb.prompt_tokens,\n            \"completion_tokens\": cb.completion_tokens,\n            \"total_cost_usd\": cb.total_cost\n            },\n        )<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">Call the function:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><span class=\"pre--content\">conversation_entity_memory(\"Hi, this is Nhi. I bought this pair of jeans, and there's a problem. 
When I tried them on, the zipper got stuck, and now I can't unzip it.\")<\/span><\/pre>\n\n\n\n<p class=\"graf graf--p\">The function returns the history, the matching response using Entity memory, and also logs metadata in Comet LLM as shown below.<\/p>\n\n\n\n<figure class=\"graf graf--figure\">\n<\/figure>\n\n\n\n<figure class=\"wp-block-image aligncenter graf-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*oJnvFtiyZpxKTv8hm9xEqg.png\" alt=\"\"\/><figcaption class=\"wp-element-caption\">Response (Image by the\u00a0Author)<\/figcaption><\/figure>\n\n\n\n<figcaption class=\"imageCaption\"><\/figcaption>\n\n\n\n<figure class=\"graf graf--figure\">\n<\/figure>\n\n\n\n<figure class=\"wp-block-image aligncenter graf-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*8a5vY15esK-7AT0i7bSplA.png\" alt=\"Enhance Conversational Agents with LangChain Memory\"\/><figcaption class=\"wp-element-caption\">Input\/Output logged in Comet\u00a0LLM<\/figcaption><\/figure>\n\n\n\n<figcaption class=\"imageCaption\"><\/figcaption>\n\n\n\n<figure class=\"graf graf--figure\">\n<\/figure>\n\n\n\n<figure class=\"wp-block-image aligncenter graf-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*UqRO5JXKQtKpyK3HUiaw6A.png\" alt=\"Enhance Conversational Agents with LangChain Memory\"\/><figcaption class=\"wp-element-caption\">Metadata logged in Comet LLM<\/figcaption><\/figure>\n\n\n\n<p>&nbsp;\n<\/p>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\"><strong class=\"markup--strong markup--h3-strong\">Reference<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list postList\">\n<li>Find the complete code on this <a class=\"markup--anchor markup--li-anchor\" data-href=\"https:\/\/github.com\/yennhi95zz\/langchain-conversation-memory-code-examples\" href=\"https:\/\/github.com\/yennhi95zz\/langchain-conversation-memory-code-examples\" target=\"_blank\" rel=\"noopener\"><strong class=\"markup--strong markup--li-strong\">GitHub repository<\/strong><\/a><\/li>\n\n\n\n<li>Experiment Tracking in <a class=\"markup--anchor markup--li-anchor\" data-href=\"https:\/\/www.comet.com\/yennhi95zz\/langchain-conversation-memory\/prompts\" href=\"https:\/\/www.comet.com\/yennhi95zz\/langchain-conversation-memory\/prompts\" target=\"_blank\" rel=\"noopener\"><strong class=\"markup--strong markup--li-strong\">Comet LLM Project<\/strong><\/a><\/li>\n<\/ul>\n\n\n\n<p class=\"graf graf--p\"><em class=\"markup--em markup--p-em\">\ud83d\udc49 Explore Comet in action by reviewing my past hands-on projects.<\/em><\/p>\n\n\n\n<ul class=\"wp-block-list postList\">\n<li><a class=\"markup--anchor markup--li-anchor\" data-href=\"https:\/\/heartbeat.comet.ml\/a-hands-on-project-enhancing-customer-churn-prediction-with-continuous-experiment-tracking-in-77aeaff242f7\" href=\"https:\/\/www.comet.com\/site\/blog\/customer-churn-with-continuous-experiment-tracking\/\" target=\"_blank\" rel=\"noopener ugc nofollow\"><em class=\"markup--em markup--li-em\">Enhancing Customer Churn Prediction with Continuous Experiment Tracking<\/em><\/a><\/li>\n\n\n\n<li><a class=\"markup--anchor markup--li-anchor\" data-href=\"https:\/\/heartbeat.comet.ml\/hyperparameter-tuning-in-machine-learning-a-key-to-optimize-model-performance-1d520934bb99\" href=\"https:\/\/www.comet.com\/site\/blog\/hyperparameter-tuning-a-key-for-optimizing-ml-performance\/\" target=\"_blank\" rel=\"noopener ugc nofollow\"><em class=\"markup--em markup--li-em\">Hyperparameter Tuning in Machine 
Learning: A Key to Optimize Model Performance<\/em><\/a><\/li>\n\n\n\n<li><a class=\"markup--anchor markup--li-anchor\" data-href=\"https:\/\/medium.com\/mlearning-ai\/the-magic-of-model-stacking-boosting-machine-learning-performance-2f6719a0bfd8\" href=\"https:\/\/medium.com\/mlearning-ai\/the-magic-of-model-stacking-boosting-machine-learning-performance-2f6719a0bfd8\" target=\"_blank\" rel=\"noopener\"><em class=\"markup--em markup--li-em\">The Magic of Model Stacking: Boosting Machine Learning Performance<\/em><\/a><\/li>\n\n\n\n<li><a class=\"markup--anchor markup--li-anchor\" data-href=\"https:\/\/medium.com\/p\/e1eb04e74eb5\" href=\"https:\/\/medium.com\/p\/e1eb04e74eb5\" target=\"_blank\" rel=\"noopener\"><em class=\"markup--em markup--li-em\">Logging\u200a\u2014\u200aThe Effective Management of Machine Learning Systems<\/em><\/a><\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\">Conclusion<\/h3>\n\n\n\n<p class=\"graf graf--p\">Memory plays a fundamental role in enhancing the capabilities of conversational agents. By understanding and implementing LangChain\u2019s different memory strategies, you can create more dynamic and context-sensitive chatbots. Whether you choose a buffer-based approach or leverage a knowledge graph, the key is to tailor your memory mechanisms to your specific use case and user expectations.<\/p>\n\n\n\n<p class=\"graf graf--p\">As the field of Natural Language Processing (NLP) continues to evolve, integrating memory into conversational agents remains a promising avenue for improving user experience.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>LangChain Conversation Memory Types: Pros &amp; Cons, and Code Examples When it comes to chatbots and conversational agents, the ability to retain and remember information is critical to creating fluid, human-like interactions. This article describes the concept of memory in LangChain and explores its importance, implementation, and various strategies for optimizing conversation flow. 
\ud83d\udca1I write [&hellip;]<\/p>\n","protected":false},"author":95,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[65,7],"tags":[40,14,64,70,71,52,31,33,34],"coauthors":[192],"class_list":["post-8888","post","type-post","status-publish","format-standard","hentry","category-llmops","category-tutorials","tag-comet","tag-comet-ml","tag-cometllm","tag-langchain","tag-language-models","tag-llm","tag-llmops","tag-openai","tag-prompt-engineering"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Enhance Conversational Agents with LangChain Memory - Comet<\/title>\n<meta name=\"description\" content=\"LangChain Memory supports the ability to retain information to create conversational agent interactions similar to human conversations\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/enhance-conversational-agents-with-langchain-memory\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Enhance Conversational Agents with LangChain Memory\" \/>\n<meta property=\"og:description\" content=\"LangChain Memory supports the ability to retain information to create conversational agent interactions similar to human conversations\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.comet.com\/site\/blog\/enhance-conversational-agents-with-langchain-memory\/\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" content=\"2024-01-26T01:26:28+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-04-24T17:03:26+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/0*1pJAe7F36nNdJnT6\" \/>\n<meta name=\"author\" content=\"Nhi Yen\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Nhi Yen\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"17 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Enhance Conversational Agents with LangChain Memory - Comet","description":"LangChain Memory supports the ability to retain information to create conversational agent interactions similar to human conversations","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/enhance-conversational-agents-with-langchain-memory\/","og_locale":"en_US","og_type":"article","og_title":"Enhance Conversational Agents with LangChain Memory","og_description":"LangChain Memory supports the ability to retain information to create conversational agent interactions similar to human conversations","og_url":"https:\/\/www.comet.com\/site\/blog\/enhance-conversational-agents-with-langchain-memory\/","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2024-01-26T01:26:28+00:00","article_modified_time":"2025-04-24T17:03:26+00:00","og_image":[{"url":"https:\/\/cdn-images-1.medium.com\/max\/1600\/0*1pJAe7F36nNdJnT6","type":"","width":"","height":""}],"author":"Nhi Yen","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Nhi Yen","Est. reading time":"17 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/enhance-conversational-agents-with-langchain-memory\/#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/enhance-conversational-agents-with-langchain-memory\/"},"author":{"name":"Nhi Yen","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/1a873c6cf609e07d582cd696f147609b"},"headline":"Enhance Conversational Agents with LangChain Memory","datePublished":"2024-01-26T01:26:28+00:00","dateModified":"2025-04-24T17:03:26+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/enhance-conversational-agents-with-langchain-memory\/"},"wordCount":1387,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/enhance-conversational-agents-with-langchain-memory\/#primaryimage"},"thumbnailUrl":"https:\/\/cdn-images-1.medium.com\/max\/1600\/0*1pJAe7F36nNdJnT6","keywords":["Comet","Comet ML","CometLLM","LangChain","Language Models","LLM","LLMOps","OpenAI","Prompt Engineering"],"articleSection":["LLMOps","Tutorials"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/enhance-conversational-agents-with-langchain-memory\/","url":"https:\/\/www.comet.com\/site\/blog\/enhance-conversational-agents-with-langchain-memory\/","name":"Enhance Conversational Agents with LangChain Memory - Comet","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/enhance-conversational-agents-with-langchain-memory\/#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/enhance-conversational-agents-with-langchain-memory\/#primaryimage"},"thumbnailUrl":"https:\/\/cdn-images-1.medium.com\/max\/1600\/0*1pJAe7F36nNdJnT6","datePublished":"2024-01-26T01:26:28+00:00","dateModified":"2025-04-24T17:03:26+00:00","description":"LangChain Memory supports the ability to retain information to create conversational agent interactions similar to human 
conversations","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/enhance-conversational-agents-with-langchain-memory\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/enhance-conversational-agents-with-langchain-memory\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/enhance-conversational-agents-with-langchain-memory\/#primaryimage","url":"https:\/\/cdn-images-1.medium.com\/max\/1600\/0*1pJAe7F36nNdJnT6","contentUrl":"https:\/\/cdn-images-1.medium.com\/max\/1600\/0*1pJAe7F36nNdJnT6"},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/enhance-conversational-agents-with-langchain-memory\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"Enhance Conversational Agents with LangChain Memory"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/1a873c6cf609e07d582cd696f147609b","name":"Nhi Yen","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/cbe7005c33fc937d23d6bbbff99e5223","url":"https:\/\/secure.gravatar.com\/avatar\/ec9f8f996211d944f352679e89c48b4cdaf7a1609d7409846408ac93045893d9?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/ec9f8f996211d944f352679e89c48b4cdaf7a1609d7409846408ac93045893d9?s=96&d=mm&r=g","caption":"Nhi 
Yen"},"url":"https:\/\/www.comet.com\/site\/blog\/author\/nhi-yen\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8888","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/95"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=8888"}],"version-history":[{"count":1,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8888\/revisions"}],"predecessor-version":[{"id":15398,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8888\/revisions\/15398"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=8888"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=8888"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=8888"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=8888"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}