{"id":8476,"date":"2023-12-16T17:17:58","date_gmt":"2023-12-17T01:17:58","guid":{"rendered":"https:\/\/live-cometml.pantheonsite.io\/?p=8476"},"modified":"2025-04-24T17:03:51","modified_gmt":"2025-04-24T17:03:51","slug":"conversational-agents-in-langchain","status":"publish","type":"post","link":"https:\/\/www.comet.com\/site\/blog\/conversational-agents-in-langchain\/","title":{"rendered":"Conversational Agents in LangChain"},"content":{"rendered":"\n<section class=\"section section--body\">\n<div class=\"section-divider\"><\/div>\n<div class=\"section-content\">\n<div class=\"section-inner sectionLayout--insetColumn\">\n<h3 class=\"graf graf--h4\">Both ways: off-the-shelf and using&nbsp;LCEL<\/h3>\n<figure class=\"graf graf--figure\">\n<\/figure><\/div><\/div><\/section>\n\n\n\n<figure class=\"wp-block-image aligncenter graf-image\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/0*W9HdTi44yeC9deaD\" alt=\"\"\/><figcaption class=\"wp-element-caption\">Photo by <a href=\"https:\/\/unsplash.com\/@etienneblg?utm_source=medium&amp;utm_medium=referral\">Etienne Boulanger<\/a> on\u00a0<a href=\"http:\/\/Unsplash.com\">Unsplash<\/a><\/figcaption><\/figure>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\">Conversational Agents<\/h3>\n\n\n\n<p class=\"graf graf--p\">Conversational agents in LangChain facilitate interactive and dynamic conversations with users.<\/p>\n\n\n\n<p class=\"graf graf--p\">Conversation agents are optimized for conversation. 
Other agents are often optimized for using tools to figure out the best response, which is not ideal in a conversational setting where you may want the agent to be able to chat with the user as well.<\/p>\n\n\n\n<p class=\"graf graf--p\">Conversational agents can engage in back-and-forth conversations, remember previous interactions, and make contextually informed decisions.<\/p>\n\n\n\n<p class=\"graf graf--p\">On the other hand, non-conversational agents have different focuses and capabilities. While they can generate text based on input, they may lack the interactivity, memory, and contextual awareness of conversational agents. They may be more suitable for tasks such as text generation, language translation, or sentiment analysis rather than engaging in interactive conversations.<\/p>\n\n\n\n<p class=\"graf graf--p\">Conversational agents in LangChain offer distinct features and functionalities that make them unique and tailored for interactive and dynamic conversations with users.<\/p>\n\n\n\n<p class=\"graf graf--p\"><strong class=\"markup--strong markup--p-strong\">They\u2019re different from other agents in LangChain in a few ways:<\/strong><\/p>\n\n\n\n<p class=\"graf graf--p\">1) <strong class=\"markup--strong markup--p-strong\">Focus on Conversation:<\/strong> Conversational agents are designed to facilitate interactive and dynamic conversations with users. They are optimized for conversation and can engage in back-and-forth interactions, remember previous interactions, and make contextually informed decisions.<\/p>\n\n\n\n<p class=\"graf graf--p\">2) <strong class=\"markup--strong markup--p-strong\">Multi-turn Interactions:<\/strong> Conversational agents excel in handling multi-turn conversations, where users can ask follow-up questions or provide additional information. 
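<\/p>\n\n\n\n<p class=\"graf graf--p\">Conceptually, this context-keeping amounts to replaying the running transcript into every new prompt. Here is a toy sketch of that buffering idea in plain Python (illustrative only, not LangChain\u2019s implementation):<\/p>

```python
# Toy sketch of conversation buffering -- not LangChain's implementation.
class ToyBufferMemory:
    def __init__(self):
        self.turns = []  # list of (speaker, text) pairs

    def save_context(self, user_input, ai_output):
        self.turns.append(("Human", user_input))
        self.turns.append(("AI", ai_output))

    def as_prompt_context(self):
        # The whole history gets replayed into the next prompt.
        return "\n".join(f"{speaker}: {text}" for speaker, text in self.turns)

memory = ToyBufferMemory()
memory.save_context("Who is Harpreet?", "A data scientist.")
memory.save_context("What does he host?", "The Artists of Data Science podcast.")
print(memory.as_prompt_context())
```

<p class=\"graf graf--p\">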
They can maintain the context of the conversation and provide coherent and meaningful responses based on the entire conversation history.<\/p>\n\n\n\n<p class=\"graf graf--p\">3)<strong class=\"markup--strong markup--p-strong\"> Dynamic Decision-making:<\/strong> Conversational agents can make dynamic decisions based on the current conversation context and available information. They can retrieve and integrate real-time data from external systems through APIs, enabling them to provide up-to-date and accurate responses.<\/p>\n\n\n\n<h3 class=\"wp-block-heading graf graf--h3\">Use Cases<\/h3>\n\n\n\n<ul class=\"wp-block-list postList\">\n<li><strong class=\"markup--strong markup--li-strong\">Interactive Communication:<\/strong> If your application requires interactive communication with users, where they can ask questions, provide inputs, and receive responses, a conversational agent can be beneficial.<\/li>\n\n\n\n<li><strong class=\"markup--strong markup--li-strong\">Complex Workflows:<\/strong> A conversational agent can help automate and streamline the workflow if your tasks involve multiple steps or interactions.<\/li>\n\n\n\n<li><strong class=\"markup--strong markup--li-strong\">Real-time Information:<\/strong> If your application requires access to real-time information or external systems through APIs, a conversational agent can facilitate the retrieval and integration of such data.<\/li>\n\n\n\n<li><strong class=\"markup--strong markup--li-strong\">Personalized Assistance: <\/strong>If you need to provide personalized assistance or support to users, a conversational agent can understand their needs and provide tailored responses.<\/li>\n<\/ul>\n\n\n\n<section class=\"section section--body\">\n<div class=\"section-divider\">\n<hr class=\"section-divider\">\n<\/div>\n<div class=\"section-content\">\n<div class=\"section-inner sectionLayout--insetColumn\">\n<blockquote class=\"graf graf--pullquote\"><p>Want to learn how to build modern software with LLMs using 
the newest tools and techniques in the field? <a class=\"markup--anchor markup--pullquote-anchor\" href=\"https:\/\/www.comet.com\/production\/site\/llm-course\/?utm_source=Heartbeat&amp;utm_medium=referral&amp;utm_content=Medium&amp;utm_campaign=Heartbeat_LangChain_Series_HS\" target=\"_blank\" rel=\"noopener ugc nofollow\" data-href=\"https:\/\/www.comet.com\/production\/site\/llm-course\/?utm_source=Heartbeat&amp;utm_medium=referral&amp;utm_content=Medium&amp;utm_campaign=Heartbeat_LangChain_Series_HS\">Check out this free LLMOps course<\/a> from industry expert Elvis Saravia of&nbsp;DAIR.AI!<\/p><\/blockquote>\n<\/div>\n<\/div>\n<\/section>\n\n\n\n<section class=\"section section--body\">\n<div class=\"section-divider\">\n<hr class=\"section-divider\">\n<\/div>\n<div class=\"section-content\">\n<div class=\"section-inner sectionLayout--insetColumn\">\n<h3 class=\"graf graf--h3\">How to set up a conversational agent<\/h3>\n<ol class=\"postList\">\n<li class=\"graf graf--li\">Create the toolkit<\/li>\n<li class=\"graf graf--li\">Set up <code class=\"markup--code markup--li-code\">ConversationBufferMemory<\/code><\/li>\n<li class=\"graf graf--li\">Initialize an LLM<\/li>\n<li class=\"graf graf--li\">Initialize the agent and equip it with the tools, LLM, and memory<\/li>\n<li class=\"graf graf--li\">Run queries<\/li>\n<\/ol>\n<h4 class=\"graf graf--h4\">Preliminaries<\/h4>\n<p class=\"graf graf--p\">Ensure you take care of imports, setting your OpenAI key, etc.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">%%capture\n!pip install langchain openai duckduckgo-search youtube_search wikipedia langchainhub\n\n<span class=\"hljs-keyword\">import<\/span> os\n<span class=\"hljs-keyword\">import<\/span> getpass\n\nos.environ[<span class=\"hljs-string\">\"OPENAI_API_KEY\"<\/span>] = getpass.getpass(<span class=\"hljs-string\">\"Enter Your OpenAI API 
Key:\"<\/span>)\n\n<span class=\"hljs-keyword\">from<\/span> langchain.agents <span class=\"hljs-keyword\">import<\/span> Tool, AgentType, initialize_agent\n<span class=\"hljs-keyword\">from<\/span> langchain.memory <span class=\"hljs-keyword\">import<\/span> ConversationBufferMemory\n<span class=\"hljs-keyword\">from<\/span> langchain.chat_models <span class=\"hljs-keyword\">import<\/span> ChatOpenAI\n<span class=\"hljs-keyword\">from<\/span> langchain.utilities <span class=\"hljs-keyword\">import<\/span> DuckDuckGoSearchAPIWrapper\n<span class=\"hljs-keyword\">from<\/span> langchain.agents <span class=\"hljs-keyword\">import<\/span> AgentExecutor\n<span class=\"hljs-keyword\">from<\/span> langchain <span class=\"hljs-keyword\">import<\/span> hub\n<span class=\"hljs-keyword\">from<\/span> langchain.agents.format_scratchpad <span class=\"hljs-keyword\">import<\/span> format_log_to_str\n<span class=\"hljs-keyword\">from<\/span> langchain.agents.output_parsers <span class=\"hljs-keyword\">import<\/span> ReActSingleInputOutputParser\n<span class=\"hljs-keyword\">from<\/span> langchain.tools.render <span class=\"hljs-keyword\">import<\/span> render_text_description<\/span><\/pre>\n<p class=\"graf graf--p\">Now, let\u2019s get started.<\/p>\n<h3 class=\"graf graf--h3\">\ud83d\udee0\ufe0f Create&nbsp;toolkit<\/h3>\n<p class=\"graf graf--p\">I like using DuckDuckGo for search. 
It doesn\u2019t require an API key, which is great.<\/p>\n<p class=\"graf graf--p\">One less token to track and worry about.<\/p>\n<p class=\"graf graf--p\">You can set up the toolkit as follows:<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">search = DuckDuckGoSearchAPIWrapper()\n\nsearch_tool = Tool(name=<span class=\"hljs-string\">\"Current Search\"<\/span>,\n                   func=search.run,\n                   description=<span class=\"hljs-string\">\"Useful when you need to answer questions about nouns, current events or the current state of the world.\"<\/span>\n                   )\n\ntools = [search_tool]<\/span><\/pre>\n<h3 class=\"graf graf--h3\">\ud83e\udde0 Set up&nbsp;memory<\/h3>\n<p class=\"graf graf--p\">The initial step involves creating a <code class=\"markup--code markup--p-code\">chat_history<\/code> component within the prompt. This feature will prevent &#8220;memory loss&#8221;, enabling the agent to retain context from previous interactions, enhancing its effectiveness.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">memory = ConversationBufferMemory(memory_key=<span class=\"hljs-string\">\"chat_history\"<\/span>)<\/span><\/pre>\n<h3 class=\"graf graf--h3\">\u2b1b\ufe0f Initialize LLM<\/h3>\n<p class=\"graf graf--p\">Let\u2019s use a chat model, in this case, GPT-4-Turbo!<\/p>\n<p class=\"graf graf--p\">You want to make sure that the temperature is set to zero. The agent needs to adhere to the ReAct framework, and it must output its responses exactly as we specify. 
If you set the temperature higher, the LLM will start taking liberties with the prompt and won\u2019t conform its outputs to the format we specified.<\/p>\n<p class=\"graf graf--p\">I suspect that\u2019s less of an issue with GPT-4, since it\u2019s already pretty smart, and more so something to worry about when building agents with smaller LLMs.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">llm = ChatOpenAI(model = <span class=\"hljs-string\">\"gpt-4-1106-preview\"<\/span>, temperature=<span class=\"hljs-number\">0<\/span>)<\/span><\/pre>\n<h3 class=\"graf graf--h3\">\ud83d\udd75\ud83c\udffb Initialize agent<\/h3>\n<p class=\"graf graf--p\">You\u2019ll equip the agent with the LLM, tools, and memory.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"1\" data-code-block-lang=\"python\"><span class=\"pre--content\">agent_chain = initialize_agent(tools,\n                               llm,\n                               agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,\n                               memory=memory,\n                               verbose=<span class=\"hljs-literal\">True<\/span>)\n\nagent_chain.run(<span class=\"hljs-built_in\">input<\/span>=<span class=\"hljs-string\">\"What it do, nephew!\"<\/span>)<\/span><\/pre>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"plaintext\"><span class=\"pre--content\">&gt; Entering new AgentExecutor chain...\n```\nThought: Do I need to use a tool? 
No\nAI: \"What it do, nephew!\" is another informal and friendly greeting, similar in use to \"What's up?\" or \"How's it going?\" It's all good here! How can I help you today?\n```\n\n&gt; Finished chain.\n\"What it do, nephew!\" is another informal and friendly greeting, similar in use to \"What\\'s up?\" or \"How\\'s it going?\" It\\'s all good here! How can I help you today?\\n```<\/span><\/pre>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">agent_chain.run(<span class=\"hljs-built_in\">input<\/span>=<span class=\"hljs-string\">\"I'm Harpreet Sahota, the Data Scientist, search me up bruv.\"<\/span>)<\/span><\/pre>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"plaintext\"><span class=\"pre--content\">&gt; Entering new AgentExecutor chain...\n```\nThought: Do I need to use a tool? Yes\nAction: Current Search\nAction Input: Harpreet Sahota Data Scientist\n```\nObservation: Harpreet Sahota, a data science expert and deep learning developer at Deci AI, joins Jon Krohn to explore the fascinating realm of object detection and the revolutionary YOLO-NAS model architecture. Discover how machine vision models have evolved and the techniques driving compute-efficient edge device applications. A new generation of data scientists is emerging\u2014those who understand and masterfully leverage Generative AI. Harpreet Sahota is the host of The Artists of Data Science podcast; the only personal growth and development podcast for Data Scientists. A proud data science generalist with strong business acumen, Harpreet works by day to define and execute strategies that demonstrate the value of the data. Recommended Content Security Harpreet Sahota joins us from Deci today to detail YOLO-NAS as well as where Computer Vision is going next. Harpreet: ... 
\u2022 Through prolific data science content creation, including The Artists of Data Science podcast and his LinkedIn live streams, Harpreet has amassed a social-media following in excess of 70,000 followers. ... Plus upcoming panel discussion, text-guided image-to-image generation with Stable Diffusion, and a framework for generating synthetic data for LLMs The Generative Generation Subscribe\nThought:Do I need to use a tool? No\nAI: Harpreet Sahota is recognized as a data science expert and deep learning developer at Deci AI. He has appeared on a podcast with Jon Krohn discussing object detection and the YOLO-NAS model architecture, which is relevant to machine vision models and their applications on edge devices. Harpreet is also the host of The Artists of Data Science podcast, which focuses on personal growth and development for data scientists. He is known for his strong business acumen and strategy execution in demonstrating the value of data. Additionally, Harpreet has a significant social media presence, with over 70,000 followers, and is involved in content creation related to data science, including LinkedIn live streams. He has also been involved in discussions about text-guided image-to-image generation with Stable Diffusion and generating synthetic data for large language models (LLMs).\n\n&gt; Finished chain.\nHarpreet Sahota is recognized as a data science expert and deep learning developer at Deci AI. He has appeared on a podcast with Jon Krohn discussing object detection and the YOLO-NAS model architecture, which is relevant to machine vision models and their applications on edge devices. Harpreet is also the host of The Artists of Data Science podcast, which focuses on personal growth and development for data scientists. He is known for his strong business acumen and strategy execution in demonstrating the value of data. 
Additionally, Harpreet has a significant social media presence, with over 70,000 followers, and is involved in content creation related to data science, including LinkedIn live streams. He has also been involved in discussions about text-guided image-to-image generation with Stable Diffusion and generating synthetic data for large language models (LLMs).<\/span><\/pre>\n<p class=\"graf graf--p\">You can confirm that the agent has memory, like so:<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">agent_chain.run(<span class=\"hljs-built_in\">input<\/span>=<span class=\"hljs-string\">\"Who were we just talking about?\"<\/span>)<\/span><\/pre>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">&gt; Entering new AgentExecutor chain...\n```\nThought: Do I need to use a tool? No\nAI: We were just talking about Harpreet Sahota, the Data Scientist.\n```\n\n&gt; Finished chain.\nWe were just talking about Harpreet Sahota, the Data Scientist.\\n```<\/span><\/pre>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">agent_chain.run(<span class=\"hljs-built_in\">input<\/span>=<span class=\"hljs-string\">\"Seems like a pretty cool dude to me.\"<\/span>)<\/span><\/pre>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">&gt; Entering new AgentExecutor chain...\n```\nThought: Do I need to use a tool? 
No\nAI: Harpreet Sahota certainly has made a name for himself in the data science community and has contributed to various discussions and educational content in the field. It's great to hear that you think he's a cool dude!\n```\n\n&gt; Finished chain.\nHarpreet Sahota certainly has made a name for himself in the data science community and has contributed to various discussions and educational content in the field. It's great to hear that you think he's a cool dude!\\n```<\/span><\/pre>\n<p class=\"graf graf--p\">Awesome! LangChain wants us to start moving to LCEL, so let me show you how to create an agent.<\/p>\n<h3 class=\"graf graf--h3\">\ud83e\udd16 Setting up an agent using LangChain Expression Language&nbsp;(LCEL)<\/h3>\n<p class=\"graf graf--p\">This follows the same flow as above, just using the expression language.<\/p>\n<p class=\"graf graf--p\">You can inspect the ReAct prompt template below and observe that it is, in fact, a partial prompt template.<\/p>\n<p class=\"graf graf--p\">A what? Partial. Prompt. Template.<\/p>\n<p class=\"graf graf--p\">Allow me to explain\u2026<\/p>\n<h3 class=\"graf graf--h3\">Partial Prompt Templates<\/h3>\n<p class=\"graf graf--p\">Partial prompt templates in LangChain offer a flexible way to work with prompt templates by allowing users to predefine a subset of required values. 
This is especially beneficial when some values are known beforehand, enabling a more streamlined approach to formatting the remaining values later.<\/p>\n<h3 class=\"graf graf--h3\">Methods to Create Partial Prompt Templates<\/h3>\n<p class=\"graf graf--p\">LangChain provides two primary methods for creating partial prompt templates:<\/p>\n<p class=\"graf graf--p\"><strong class=\"markup--strong markup--p-strong\">Partial with Strings:<\/strong><\/p>\n<ul class=\"postList\">\n<li class=\"graf graf--li\">Allows users to input string values for specific variables while creating the partial prompt template.<\/li>\n<li class=\"graf graf--li\">Ideal for scenarios where certain variable values are obtained earlier than others.<\/li>\n<\/ul>\n<p class=\"graf graf--p\"><strong class=\"markup--strong markup--p-strong\">Partial with Functions:<\/strong><\/p>\n<ul class=\"postList\">\n<li class=\"graf graf--li\">Enables users to input functions that return specific variable values.<\/li>\n<li class=\"graf graf--li\">Particularly useful for dynamic variables, such as date\/time, which need to be fetched in real-time.<\/li>\n<\/ul>\n<h3 class=\"graf graf--h3\">Real-World Application<\/h3>\n<p class=\"graf graf--p\">Consider a complex prompt template that necessitates multiple variables. If certain values, like name and location, are already known, a partial template can be crafted with these preset values. This partial template can then be used more efficiently, requiring only the input of the remaining variables, such as time.<\/p>\n<p class=\"graf graf--p\">For instance, a personalized story prompt might need variables like name, location, and time. A partial template can be created with these values if the name and location are predetermined. 
This simpler partial template can gather only the outstanding variables, like time.<\/p>\n<blockquote class=\"graf graf--blockquote\"><p><em class=\"markup--em markup--blockquote-em\">Partial prompt templates in LangChain enhance the reusability of prompt templates and diminish complexity. By allowing users to preset specific values, they maintain the original template structure while simplifying the formatting process.<\/em><\/p><\/blockquote>\n<h3 class=\"graf graf--h3\">Partial with&nbsp;Strings<\/h3>\n<p class=\"graf graf--p\">Using partial prompt templates with strings is particularly useful when you receive some variables earlier than others. This method streamlines the process and enhances efficiency.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">agent_prompt = hub.pull(<span class=\"hljs-string\">\"hwchase17\/react-chat\"<\/span>)\n\n<span class=\"hljs-built_in\">print<\/span>(agent_prompt.template)<\/span><\/pre>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"plaintext\"><span class=\"pre--content\">Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. 
Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:\n\n{tools}\n\nTo use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer: [your response here]\n```\n\nBegin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}<\/span><\/pre>\n<p class=\"graf graf--p\">Now, let\u2019s go ahead and instantiate the text as a partial prompt template like so:<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">prompt = agent_prompt.partial(\n    tools=render_text_description(tools),\n    tool_names=<span class=\"hljs-string\">\", \"<\/span>.join([t.name <span class=\"hljs-keyword\">for<\/span> t <span class=\"hljs-keyword\">in<\/span> tools]),\n)\nllm_with_stop = llm.bind(stop=[<span class=\"hljs-string\">\"\\nObservation\"<\/span>])<\/span><\/pre>\n<h3 class=\"graf graf--h3\">Set up the agent using&nbsp;LCEL<\/h3>\n<p class=\"graf graf--p\">You may notice that when using an off-the-shelf agent you set up the chain with <code class=\"markup--code markup--p-code\">initialize_agent<\/code>, but below 
you&#8217;re using the <code class=\"markup--code markup--p-code\">AgentExecutor<\/code>.<\/p>\n<p class=\"graf graf--p\">The key differences between <code class=\"markup--code markup--p-code\">AgentExecutor<\/code> and <code class=\"markup--code markup--p-code\">initialize_agent<\/code> are:<\/p>\n<p class=\"graf graf--p\">\u2022 <code class=\"markup--code markup--p-code\">AgentExecutor<\/code> is a class that is used to execute actions from tools sequentially in a chain. It is part of the lower-level agent infrastructure.<\/p>\n<p class=\"graf graf--p\">\u2022 <code class=\"markup--code markup--p-code\">initialize_agent<\/code> is a convenience function to create an agent with tools and an LLM. It handles constructing an AgentExecutor under the hood and provides a simple interface to create different agent types.<\/p>\n<p class=\"graf graf--p\">In summary:<\/p>\n<p class=\"graf graf--p\"><code class=\"markup--code markup--p-code\">AgentExecutor<\/code> is lower-level, handles executing a chain of actions from tools.<\/p>\n<p class=\"graf graf--p\"><code class=\"markup--code markup--p-code\">initialize_agent<\/code> is higher-level, creates an agent with tools that uses <code class=\"markup--code markup--p-code\">AgentExecutor<\/code> under the hood. It provides a simple way to construct different agent types.<\/p>\n<p class=\"graf graf--p\">So <code class=\"markup--code markup--p-code\">initialize_agent<\/code> uses <code class=\"markup--code markup--p-code\">AgentExecutor<\/code>, but provides a more convenient interface for creating agents. 
<code class=\"markup--code markup--p-code\">AgentExecutor<\/code> is used directly when more control over the agent execution chain is needed.<\/p>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"1\" data-code-block-lang=\"python\"><span class=\"pre--content\">agent = (\n    {\n        <span class=\"hljs-string\">\"input\"<\/span>: <span class=\"hljs-keyword\">lambda<\/span> x: x[<span class=\"hljs-string\">\"input\"<\/span>],\n        <span class=\"hljs-string\">\"agent_scratchpad\"<\/span>: <span class=\"hljs-keyword\">lambda<\/span> x: format_log_to_str(x[<span class=\"hljs-string\">\"intermediate_steps\"<\/span>]),\n        <span class=\"hljs-string\">\"chat_history\"<\/span>: <span class=\"hljs-keyword\">lambda<\/span> x: x[<span class=\"hljs-string\">\"chat_history\"<\/span>],\n    }\n    | prompt\n    | llm_with_stop\n    | ReActSingleInputOutputParser()\n)\n\nmemory = ConversationBufferMemory(memory_key=<span class=\"hljs-string\">\"chat_history\"<\/span>)\n\nagent_executor = AgentExecutor(agent=agent, tools=tools, verbose=<span class=\"hljs-literal\">True<\/span>, memory=memory)\n\nagent_executor.invoke({<span class=\"hljs-string\">\"input\"<\/span>: <span class=\"hljs-string\">\"What's the forecast for snow looking like in Winnipeg today?\"<\/span>})[<span class=\"hljs-string\">\"output\"<\/span>]<\/span><\/pre>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"plaintext\"><span class=\"pre--content\">&gt; Entering new AgentExecutor chain...\nThought: Do I need to use a tool? Yes\nAction: Current Search\nAction Input: Winnipeg snow forecast todayWinnipeg, MB - 7 Day Forecast - Environment Canada C. 
Sunrise: 8:12 CST Sunset: 16:28 CST Averages and extremes 03 Dec Average high -6.8 \u00b0C Average low -15.6 \u00b0C Highest temperature (1938-2007) 6.7 \u00b0C 1941 Lowest temperature (1938-2007) -33.3 \u00b0C 1964 Greatest precipitation (1938-2007) 6.6 mm 1961 Greatest rainfall (1938-2006) 5.6 mm 1941 Winnipeg is also expected to see some of the white stuff today. Environment Canada's forecast for Friday says periods of rain will begin early in the morning then changing to snow in the... Warnings for wintry weather are in effect. It did, however, make a mark in the record books on Thursday, when it reached a high of 8.6 C. The old record of 5 C was set in 1939. The normal high for ... \u00b0F Observed at: Winnipeg Richardson Int'l Airport Date: 8:00 AM CDT Friday 20 October 2023 Condition: Mostly Cloudy Pressure: 100.6 kPa Tendency: Rising Temperature: 2.8\u00b0C Dew point: 2.1\u00b0C Humidity: 95% Wind: SSE 3 km\/h Visibility: 18 km Forecast Hourly Forecast Air Quality Alerts Jet Stream Fri 20 Oct 17\u00b0C Detailed forecast for the next 24 hours - temperature, weather conditions, likelihood of precipitation and winds ... Hourly Forecast - Winnipeg . No alerts in effect. Date\/Time (CST) Temp. (\u00b0C) Weather Conditions Likelihood of precip (%) ... Periods of rain mixed with snow. 100: N 20 : 10:00 : 1 : Periods of rain mixed with snow. 100: N 20 ...Do I need to use a tool? No\nFinal Answer: The forecast for Winnipeg today includes periods of rain early in the morning changing to snow later on. Environment Canada has issued warnings for wintry weather. As of the last observation at Winnipeg Richardson International Airport, the condition was mostly cloudy with a temperature of 2.8\u00b0C, and there is a 100% likelihood of precipitation with periods of rain mixed with snow expected. 
Please note that weather conditions can change rapidly, so it's always a good idea to check the latest forecast if you're planning to go out.\n\n&gt; Finished chain.\nThe forecast for Winnipeg today includes periods of rain early in the morning changing to snow later on. Environment Canada has issued warnings for wintry weather. As of the last observation at Winnipeg Richardson International Airport, the condition was mostly cloudy with a temperature of 2.8\u00b0C, and there is a 100% likelihood of precipitation with periods of rain mixed with snow expected. Please note that weather conditions can change rapidly, so it's always a good idea to check the latest forecast if you're planning to go out.<\/span><\/pre>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">agent_executor.invoke({<span class=\"hljs-string\">\"input\"<\/span>: <span class=\"hljs-string\">\"What did I just ask you about?\"<\/span>})[<span class=\"hljs-string\">\"output\"<\/span>]<\/span><\/pre>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"plaintext\"><span class=\"pre--content\">&gt; Entering new AgentExecutor chain...\n```\nThought: Do I need to use a tool? 
No\nFinal Answer: You just asked about the forecast for snow in Winnipeg today.\n```\n\n&gt; Finished chain.\nYou just asked about the forecast for snow in Winnipeg today.\\n```<\/span><\/pre>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"python\"><span class=\"pre--content\">agent_executor.invoke({<span class=\"hljs-string\">\"input\"<\/span>: <span class=\"hljs-string\">\"Will it look like a white Christmas there?\"<\/span>})[<span class=\"hljs-string\">\"output\"<\/span>]<\/span><\/pre>\n<pre class=\"graf graf--pre graf--preV2\" spellcheck=\"false\" data-code-block-mode=\"2\" data-code-block-lang=\"plaintext\"><span class=\"pre--content\">&gt; Entering new AgentExecutor chain...\nThought: Do I need to use a tool? Yes\nAction: Current Search\nAction Input: Winnipeg white Christmas forecast 2023With 7 C in forecast, white Christmas may be dream By: Nicole Buffie Posted: 5:42 PM CST Monday, Dec. 4, 2023 Last Modified: 7:38 AM CST Tuesday, Dec. 5, 2023 Updates Winnipeggers hoping... Published Dec. 7, 2023 1:52 p.m. PST Share If you're anything like Michael Bubl\u00e9 or Bing Crosby before him, you're dreaming of a white Christmas, just like the ones you used to know. But as... WATCH: 2023-2024 winter weather forecast -- here's what Canadians can expect - Dec 1, 2023 After three consecutive La Ni\u00f1a winters, a moderate El Ni\u00f1o is now well established in the central... November 30, 2023 Share Facebook Email For daily wit &amp; wisdom, sign up for the Almanac newsletter. Are you dreaming of a White Christmas? In 2023, your dreams might come true! Of course, lots of snow can also affect travel plans. As always, The Old Farmer's Almanac looks ahead with our special Christmas Forecast 2023. Christmas! December is here: Will there be a white Christmas in 2023? Experts weigh in. 
Doyle Rice USA TODAY 0:00 1:39 It's about time to turn our attention to the December holidays, including...Do I need to use a tool? No\nFinal Answer: Based on the information available, it seems that the chances of a white Christmas in Winnipeg are uncertain, with a forecast of 7\u00b0C suggesting that a traditional snowy Christmas may not be guaranteed. Weather conditions can change, so it's always best to check closer to the date for the most accurate forecast.\n\n&gt; Finished chain.\nBased on the information available, it seems that the chances of a white Christmas in Winnipeg are uncertain, with a forecast of 7\u00b0C suggesting that a traditional snowy Christmas may not be guaranteed. Weather conditions can change, so it's always best to check closer to the date for the most accurate forecast.<\/span><\/pre>\n<p class=\"graf graf--p\">And there you have it\u200a\u2014\u200ahow to set up an agent both ways!<\/p>\n<h3 class=\"graf graf--h3\">Conclusion<\/h3>\n<p class=\"graf graf--p\">In summary, this post covered setting up conversational agents in LangChain both ways (off-the-shelf and with LCEL), highlighting their ability to engage in interactive dialogues and remember past interactions for contextually informed decision-making.<\/p>\n<p class=\"graf graf--p\">Key aspects include:<\/p>\n<ol class=\"postList\">\n<li class=\"graf graf--li\">Toolkit Creation: Setting up tools like DuckDuckGoSearchAPIWrapper for enhanced search capabilities without needing additional API tokens.<\/li>\n<li class=\"graf graf--li\">Memory Setup: Implementing a <code class=\"markup--code markup--li-code\">ConversationBufferMemory<\/code> with a <code class=\"markup--code markup--li-code\">chat_history<\/code> component to retain context from previous interactions.<\/li>\n<li class=\"graf graf--li\">LLM Initialization: Using GPT-4 Turbo with a specific temperature setting so the ReAct-style agent generates accurate responses.<\/li>\n<li class=\"graf graf--li\">Agent 
Initialization: Combining tools, the LLM, and memory to initialize the agent, capable of handling conversational tasks and maintaining context.<\/li>\n<li class=\"graf graf--li\">Execution and Testing: Running queries to confirm the agent\u2019s memory and conversational capabilities, including recalling previous discussion topics and handling new queries effectively.<\/li>\n<\/ol>\n<p class=\"graf graf--p\">These steps provide a comprehensive guide for creating and operating a Conversational Agent capable of handling complex interactions and tasks in a dynamic environment.<\/p>\n<\/div>\n<\/div>\n<\/section>\n","protected":false},"excerpt":{"rendered":"<p>Both ways: off-the-shelf and using&nbsp;LCEL Conversational Agents Conversational agents in LangChain facilitate interactive and dynamic conversations with users. Conversation agents are optimized for conversation. Other agents are often optimized for using tools to figure out the best response, which could be better in a conversational setting where you may want the agent to be able [&hellip;]<\/p>\n","protected":false},"author":68,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"customer_name":"","customer_description":"","customer_industry":"","customer_technologies":"","customer_logo":"","footnotes":""},"categories":[65,7],"tags":[70,71,52,31,34],"coauthors":[166],"class_list":["post-8476","post","type-post","status-publish","format-standard","hentry","category-llmops","category-tutorials","tag-langchain","tag-language-models","tag-llm","tag-llmops","tag-prompt-engineering"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Conversational Agents in LangChain - Comet<\/title>\n<meta name=\"description\" content=\"Conversational Agents can engage in back-and-forth conversations, remember previous interactions and make 
contextually-informed decisions.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.comet.com\/site\/blog\/conversational-agents-in-langchain\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Conversational Agents in LangChain\" \/>\n<meta property=\"og:description\" content=\"Conversational Agents can engage in back-and-forth conversations, remember previous interactions and make contextually-informed decisions.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.comet.com\/site\/blog\/conversational-agents-in-langchain\/\" \/>\n<meta property=\"og:site_name\" content=\"Comet\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cometdotml\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-17T01:17:58+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-04-24T17:03:51+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/0*W9HdTi44yeC9deaD\" \/>\n<meta name=\"author\" content=\"Harpreet Sahota\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@Cometml\" \/>\n<meta name=\"twitter:site\" content=\"@Cometml\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Harpreet Sahota\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"16 minutes\" \/>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Conversational Agents in LangChain - Comet","description":"Conversational Agents can engage in back-and-forth conversations, remember previous interactions and make contextually-informed decisions.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.comet.com\/site\/blog\/conversational-agents-in-langchain\/","og_locale":"en_US","og_type":"article","og_title":"Conversational Agents in LangChain","og_description":"Conversational Agents can engage in back-and-forth conversations, remember previous interactions and make contextually-informed decisions.","og_url":"https:\/\/www.comet.com\/site\/blog\/conversational-agents-in-langchain\/","og_site_name":"Comet","article_publisher":"https:\/\/www.facebook.com\/cometdotml","article_published_time":"2023-12-17T01:17:58+00:00","article_modified_time":"2025-04-24T17:03:51+00:00","og_image":[{"url":"https:\/\/cdn-images-1.medium.com\/max\/1600\/0*W9HdTi44yeC9deaD","type":"","width":"","height":""}],"author":"Harpreet Sahota","twitter_card":"summary_large_image","twitter_creator":"@Cometml","twitter_site":"@Cometml","twitter_misc":{"Written by":"Harpreet Sahota","Est. 
reading time":"16 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.comet.com\/site\/blog\/conversational-agents-in-langchain\/#article","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/blog\/conversational-agents-in-langchain\/"},"author":{"name":"Harpreet Sahota","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/46036ab474aa916e2873daece26a28d6"},"headline":"Conversational Agents in LangChain","datePublished":"2023-12-17T01:17:58+00:00","dateModified":"2025-04-24T17:03:51+00:00","mainEntityOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/conversational-agents-in-langchain\/"},"wordCount":1324,"publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/conversational-agents-in-langchain\/#primaryimage"},"thumbnailUrl":"https:\/\/cdn-images-1.medium.com\/max\/1600\/0*W9HdTi44yeC9deaD","keywords":["LangChain","Language Models","LLM","LLMOps","Prompt Engineering"],"articleSection":["LLMOps","Tutorials"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.comet.com\/site\/blog\/conversational-agents-in-langchain\/","url":"https:\/\/www.comet.com\/site\/blog\/conversational-agents-in-langchain\/","name":"Conversational Agents in LangChain - Comet","isPartOf":{"@id":"https:\/\/www.comet.com\/site\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.comet.com\/site\/blog\/conversational-agents-in-langchain\/#primaryimage"},"image":{"@id":"https:\/\/www.comet.com\/site\/blog\/conversational-agents-in-langchain\/#primaryimage"},"thumbnailUrl":"https:\/\/cdn-images-1.medium.com\/max\/1600\/0*W9HdTi44yeC9deaD","datePublished":"2023-12-17T01:17:58+00:00","dateModified":"2025-04-24T17:03:51+00:00","description":"Conversational Agents can engage in back-and-forth conversations, remember previous interactions and make contextually-informed 
decisions.","breadcrumb":{"@id":"https:\/\/www.comet.com\/site\/blog\/conversational-agents-in-langchain\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.comet.com\/site\/blog\/conversational-agents-in-langchain\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/blog\/conversational-agents-in-langchain\/#primaryimage","url":"https:\/\/cdn-images-1.medium.com\/max\/1600\/0*W9HdTi44yeC9deaD","contentUrl":"https:\/\/cdn-images-1.medium.com\/max\/1600\/0*W9HdTi44yeC9deaD"},{"@type":"BreadcrumbList","@id":"https:\/\/www.comet.com\/site\/blog\/conversational-agents-in-langchain\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.comet.com\/site\/"},{"@type":"ListItem","position":2,"name":"Conversational Agents in LangChain"}]},{"@type":"WebSite","@id":"https:\/\/www.comet.com\/site\/#website","url":"https:\/\/www.comet.com\/site\/","name":"Comet","description":"Build Better Models Faster","publisher":{"@id":"https:\/\/www.comet.com\/site\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.comet.com\/site\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.comet.com\/site\/#organization","name":"Comet ML, Inc.","alternateName":"Comet","url":"https:\/\/www.comet.com\/site\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/","url":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","contentUrl":"https:\/\/www.comet.com\/site\/wp-content\/uploads\/2025\/01\/logo_comet_square.png","width":310,"height":310,"caption":"Comet ML, 
Inc."},"image":{"@id":"https:\/\/www.comet.com\/site\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/cometdotml","https:\/\/x.com\/Cometml","https:\/\/www.youtube.com\/channel\/UCmN63HKvfXSCS-UwVwmK8Hw"]},{"@type":"Person","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/46036ab474aa916e2873daece26a28d6","name":"Harpreet Sahota","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.comet.com\/site\/#\/schema\/person\/image\/2d21512be19ba7e19a71a803309e2a88","url":"https:\/\/secure.gravatar.com\/avatar\/a6ca5a533fc9f143a0a7428037ff652aa0633d66bf27e76ae89b955ae72a0f2d?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a6ca5a533fc9f143a0a7428037ff652aa0633d66bf27e76ae89b955ae72a0f2d?s=96&d=mm&r=g","caption":"Harpreet Sahota"},"url":"https:\/\/www.comet.com\/site\/blog\/author\/theartistsofdatasciencegmail-com\/"}]}},"_links":{"self":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8476","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/users\/68"}],"replies":[{"embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/comments?post=8476"}],"version-history":[{"count":1,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8476\/revisions"}],"predecessor-version":[{"id":15420,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/posts\/8476\/revisions\/15420"}],"wp:attachment":[{"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/media?parent=8476"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/categories?post=8476"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/tags?post=8476"},{"taxonomy":"author","embeddable":true,"href":"
https:\/\/www.comet.com\/site\/wp-json\/wp\/v2\/coauthors?post=8476"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}