
Introduction to LangChain for Including AI from Large Language Models (LLMs) Inside Data Applications and Data Pipelines


Large Language Models (LLMs) entered the spotlight with the release of OpenAI’s GPT-3 in 2020, and interest in LLMs, and in the broader discipline of Generative AI, has exploded since. The field has seen great leaps, from the introduction of Google’s “sentient” LaMDA chatbot, to BLOOM, the first high-performance open-source LLM, to OpenAI’s next-generation GPT-3.5 models. Yet it was the release of ChatGPT that truly thrust LLMs into the mainstream. LangChain appeared around the same time as ChatGPT and was already packed with excellent tools for building applications on top of LLMs. This article provides an overview of the LangChain library: what it can do, the problems it solves, and its use cases.

In a follow-up article, we will look at how to build applications that leverage the various concepts of LangChain.

Introduction to LangChain

LangChain is an open-source framework that provides tools, components, and interfaces that simplify the development of LLM-powered applications. It offers a collection of APIs that developers can embed in their applications, letting them add language processing capabilities without building everything from the ground up. In short, LangChain streamlines the process of crafting LLM-based applications.

Chatbots, Generative Question-Answering (GQA), summarization, virtual assistants, and language translation utilities are all examples of LLM-powered applications. Developers leverage LangChain to create tailored, language-model-based applications that cater to specific needs.

Its core idea is that we can “chain” together different components to create more refined LLM apps. Features include:

  • Models: LLMs, chat models, and text embedding models.
  • Prompts: Inputs to a model.
  • Memory: Storing and retrieving data while conversing.
  • Indexes: An interface for querying large datasets, enabling LLMs to interact with different document types for retrieval purposes.
  • Chains: Combine components for advanced use cases and integration with other tools.
  • Agents: Let the LLM make decisions, execute actions, observe results, and repeat until the task is complete.
  • Callbacks: Hooks for logging and inspecting the intermediate steps of chains and agents, adding a layer of examination and introspection.

A follow-up article explains each of these in more detail with code.

LangChain supports two programming languages:

  • JavaScript: Great for embedding AI into web applications via the Node.js package.
  • Python: Great for including AI in Python-based software or data pipelines.

What Does LangChain Address?

LangChain implements two major workflows for interacting with LLMs: chatting and embedding. Each comes with challenges that LangChain aims to address. Let’s discuss some of them:

Memory Constraints On LLMs

LLMs generate responses based on the conversation so far. However, these models have relatively short memory: GPT-4, for instance, has a context window of about 8,000 tokens. If the conversation exceeds that limit, the model may lose track of the beginning of the conversation, and its responses become inconsistent.

You typically want an application, for instance a chatbot, to remember the entire conversation with a customer. LangChain makes this possible by providing chat memory tools that enable LLMs to reflect on previous interactions.
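For illustration, here is a minimal sketch of conversation memory using the classic langchain Python package (it assumes an OpenAI API key is available in the OPENAI_API_KEY environment variable):

from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

llm = OpenAI(temperature=0)

# ConversationBufferMemory stores the full message history and re-injects
# it into the prompt on every call, so the model can "remember" the chat.
conversation = ConversationChain(llm=llm, memory=ConversationBufferMemory())

conversation.predict(input="Hi, my name is Brian.")
print(conversation.predict(input="What is my name?"))  # can now answer "Brian"

Without the memory object, the second call would have no knowledge of the first.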

Structured Response Formats

When provided with a prompt, you may require a model to consistently generate a response in a specific format (e.g., CSV, JSON, datetime, etc.) rather than free text. To handle these output format requirements, LangChain provides output parser classes.

You provide the parser with instructions on how the model’s output should be formatted. When presented with the model’s response, the parser uses these instructions to parse the response into the specified structure.
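As a sketch of this idea, the snippet below uses LangChain’s CommaSeparatedListOutputParser, which injects format instructions into the prompt and then parses the raw text response into a Python list (again assuming an OpenAI API key is set):

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()

prompt = PromptTemplate(
    template="List five {subject}.\n{format_instructions}",
    input_variables=["subject"],
    # get_format_instructions() returns text telling the model
    # to reply as a comma-separated list.
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

llm = OpenAI(temperature=0)
output = llm(prompt.format(subject="LangChain components"))
print(parser.parse(output))  # e.g. ['Models', 'Prompts', 'Memory', ...]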

Prompting

LLMs are trained on a simple concept: you input a text sequence, and the model outputs a text sequence. The one crucial variable here is the input sequence, the prompt.

With LLMs, prompts are vital. Bad prompts generate poor responses, while good prompts can produce exceptional results, so constructing good prompts is one of the real challenges of working with LLMs. Prompting involves more than defining the task to complete: it also includes determining the AI’s personality and writing style and adding instructions that encourage factual precision.

LangChain recognizes the power of prompts and provides dedicated classes for building them. A good prompt may consist of the following components:

  • Instructions: Instruct the model on what to do and how to structure the response.
  • Context: Additional source of knowledge for the model, which can be manually given, pulled from APIs, retrieved from vector stores, or pulled from other sources.
  • Query: The input submitted to the system by a human user.

For instance, take the following prompt:

prompt = """ Answer the question based on the context below. If the information provided cannot answer the question, respond with "Can't find an answer."
Context: Langchain is an open-source framework that provides tools, components, and interfaces that simplify the development of LLM-powered applications. Its key features include Prompts, Memory, Indexes, Chains, Agents, and callbacks. LangChain is provided in two programming languages: Python and JavaScript.
Question: What are the main features of LangChain, and what programming languages can I implement these components with?
Answer: 
"""

In most cases, though, we won’t hardcode the context and the question; we would feed them in through a template. LangChain makes this possible by providing prompt template classes, which are designed to make prompting with dynamic inputs easier. One such class is PromptTemplate.
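Here is a minimal sketch of how the prompt above could be rebuilt with PromptTemplate so that the context and question become dynamic inputs:

from langchain.prompts import PromptTemplate

template = """Answer the question based on the context below. If the information provided cannot answer the question, respond with "Can't find an answer."
Context: {context}
Question: {question}
Answer: """

prompt = PromptTemplate(input_variables=["context", "question"], template=template)

# Fill the placeholders at runtime instead of hardcoding them.
print(prompt.format(
    context="LangChain is an open-source framework...",
    question="What are the main features of LangChain?",
))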

Switching Between LLM Models

There are many LLM providers available aside from OpenAI. Building software on a single provider or API can lock it into that ecosystem. In the future, you may realize that your product requires a more capable model, and switching providers can be a hassle.

LangChain addresses this with the LLM class, which provides a standard interface for interacting with many LLM providers. This abstraction makes it easier to swap between LLMs or use multiple LLMs in the same piece of software.
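As a sketch, the classic langchain package exposes interchangeable wrappers such as OpenAI and HuggingFaceHub (the latter requires a HUGGINGFACEHUB_API_TOKEN environment variable, and the repo_id below is just an example model):

from langchain.llms import OpenAI, HuggingFaceHub

# Both classes implement the same LLM interface, so downstream code
# (prompts, chains, agents) does not change when the provider does.
llm = OpenAI(temperature=0)
# llm = HuggingFaceHub(repo_id="google/flan-t5-xl")  # swap providers in one line

print(llm("What is LangChain?"))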

Integration of LLMs Into Pipelines

LangChain implements chains and agents that provide pipeline-type workflows. For instance, we may extract data from a source such as a database, pass it into an LLM, and send the processed output to another system.

Chains are objects that wrap multiple individual components together. A chain integrates an LLM with a prompt, forming a module that can execute operations on our text or other data, and chains can be designed to process multiple inputs at once. The most commonly used chain is the LLMChain.
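Here is a minimal LLMChain sketch combining a prompt template with an LLM (the company-naming prompt is only an illustrative example; assumes an OpenAI API key is set):

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(temperature=0)
prompt = PromptTemplate(
    input_variables=["product"],
    template="Suggest a good name for a company that makes {product}.",
)

# The chain wires the prompt and the LLM together into one callable unit.
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("data pipelines"))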

Agents are more sophisticated, applying the LLM’s own reasoning to decide how the components should interact. Some applications require not a predetermined chain of calls to LLMs and other tools, but a potentially unknown sequence that depends on the user’s input. An agent has access to a suite of tools and, depending on the user input, decides which of these tools, if any, to call.
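For example, here is a minimal agent sketch using the classic langchain API, where the LLM can decide to call a calculator tool (llm-math) when the question requires arithmetic:

from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent, AgentType

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)  # a calculator tool backed by the LLM

# The ReAct-style agent reasons about which tool to call, observes the
# result, and repeats until it can answer.
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is 7 raised to the 0.43 power?")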

Passing Data to LLMs

LLMs are text-based, so it is not always clear how to pass data into the model. First, you may need to store the data in a particular format, like an SQL table or a DataFrame, which lets you control the portions of data sent to the LLM. LangChain implements indexes that provide the functionality to retrieve data in various formats so the LLM can best interact with it.
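As a sketch of this index-based approach, the classic langchain package offers VectorstoreIndexCreator, which loads documents, embeds them, and exposes a query interface (the file name below is a placeholder, and the defaults assume an OpenAI API key plus the chromadb package):

from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator

loader = TextLoader("my_data.txt")  # placeholder file

# Builds an embedding-backed vector store index over the loaded documents.
index = VectorstoreIndexCreator().from_loaders([loader])
print(index.query("What does the document say about LangChain?"))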

Secondly, you can pass the data to a prompt through prompt stuffing, the most straightforward method in LangChain’s indexing category. Passing the dataset to the prompt as context is simple and efficient, but it is only applicable when you have a small dataset.

Index-related chains support three more techniques apart from prompt stuffing (see the sketch after this list):

  • Map-reduce: Splits data into chunks, calls the LLM with an initial prompt on each chunk, then runs a different prompt to combine responses from the initial prompt.
  • Refine: Runs an initial prompt on the first chunk of data, generating some output. Then, for each additional chunk, a follow-up prompt asks the LLM to refine the result based on the new data. This can be ideal when the answer should converge as more data is processed.
  • Map-Rerank: Runs a prompt on each chunk of data and asks the LLM for a confidence score for its response, which you can use to rank the outputs. This can be ideal for recommendation-like tasks.
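These chunk-combination strategies appear directly in LangChain’s built-in chains. Here is a minimal sketch using load_summarize_chain, where chain_type selects the strategy ("stuff", "map_reduce", or "refine"; "map_rerank" is available for the question-answering chains):

from langchain.llms import OpenAI
from langchain.docstore.document import Document
from langchain.chains.summarize import load_summarize_chain

llm = OpenAI(temperature=0)

# Placeholder chunks; in practice these come from a text splitter.
docs = [Document(page_content=t) for t in ["first chunk of text...", "second chunk of text..."]]

# "map_reduce" summarizes each chunk, then combines the partial summaries.
chain = load_summarize_chain(llm, chain_type="map_reduce")
print(chain.run(docs))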

Use Cases of LangChain

LangChain offers a wide range of applications. It shines in some of the following use cases:

  • API Integration: LangChain can integrate with various APIs, enabling the development of applications that interact with other platforms and services. For instance, you may use an API to retrieve exchange rate data or to interact with a cloud platform. LangChain’s chains and agents features enable you to create data pipelines for this use case.
  • Chatbot Development: LangChain provides the necessary tools to build chatbots. Chatbots are one of the most popular AI use cases. They need to have a communication style and recall previous interactions/context during a conversation. LangChain prompt templates can give you control over a chatbot’s responses. Additionally, message history tools ensure conversation consistency by providing the chatbot with adequate memory.
  • Intelligent Question Answering: We can use LangChain to develop question-answering applications based on specific documents.
  • Creating Interactive Agents: Developers can create agents that interact with users, make decisions based on their input, and continue their tasks until completion. These agents can be helpful in customer service, data collection, and other interactive applications.
  • Data Summarization: LangChain can create applications that summarize long documents. This feature can help process lengthy texts, articles, or reports.
  • Document Retrieval and Clustering: LangChain can simplify retrieval and clustering using embedding models. That can also be important when grouping similar documents or retrieving specific documents based on certain criteria.

What Are the Limitations of LangChain?

LangChain offers exceptional capabilities but has some limitations:

  • Constrained to text data: It does not support training on non-textual data, which may hinder its application in domains where data extends beyond text.
  • Lacks support for multi-task learning: Training language models on multiple related tasks simultaneously can improve performance. Without this technique, applications that could benefit from multi-task learning may lose efficiency and effectiveness.
  • Extra steps to deploy LangChain-based models: LangChain does not provide straightforward deployment methods, which may cause problems when moving such applications into production environments.

Final Thoughts

This article has introduced LangChain in a relatively basic manner. You have learned what LangChain is and some of its use cases. Despite its limitations, LangChain is a robust framework that developers can use to harness the capabilities of language models. With its vast benefits and ongoing development, LangChain holds great promise for the future of AI-powered solutions.

Brian Mutea, Heartbeat author

