October 12, 2023 • 6 min read

A 3-Step Guide For Beginner LangChain AI Developers To Build Custom LLM Assistants That Automate Tedious Work While You Focus On What Matters

Written by Maria Vexlard

In a world full of quick AI-generated answers, what if you had an assistant that could go the extra mile?

Imagine one that doesn't simply write generic, one-size-fits-all replies to your questions. Instead, it gathers information from various sources, including your own personal knowledge base, and produces truly insightful recommendations to automate parts of your daily life.

In this step-by-step guide, you'll develop an intelligent, context-aware, and personalized Large Language Model (LLM)-based personal assistant using LangChain Agents.

The goal is to provide you with a versatile framework that can be adapted to various needs and contexts. As a relatable example, we'll walk through creating a chatbot that can recommend outfits based on real-time weather and the user's wardrobe.

However, the principles you'll learn here can be extended to numerous other use-cases.

Why Go Beyond Simple Prompts?

You may wonder, "Why not just chat with a language model?"

You could, but here's what you'd be missing:

  • Access to External Resources: Sure, a simple text prompt can translate text or answer trivia, but what about pulling real-time stock market data or your latest health metrics from a wearable device? Those tasks require the ability to interact with external APIs.
  • User-Specific Knowledge Base: While an LLM can provide generic responses, it can't dig into a personal knowledge base to offer advice or recommendations based on your specific documents, notes, or databases, which can be hard to fit into a single prompt.

By the end of this guide, you will learn the following:

  • Level 1: Starting with the basics, you'll learn how to interact with an LLM using simple prompt templates.
  • Level 2: Advancing further, you'll equip your assistant with the ability to run custom code and access external resources.
  • Level 3: Finally, you'll personalize the assistant by giving it access to an individual user knowledge base.

I’m using the OpenAI API as the base LLM in this tutorial, but I’m intentionally not using any OpenAI-specific functionality (i.e., OpenAI function calling) to keep the example as LLM-agnostic as possible, so that you aren’t restricted to OpenAI if you don’t want to be.

If you’re unsure whether an open-source LLM might be a better fit for your project, check out this blog post comparing OpenAI and open source models.

Let’s get started!

The Building Blocks of a Smart Assistant

Setting Up Your Development Environment

Before diving in, let’s set up the development environment.

Run the following command to install the langchain and openai libraries, which provide the essential tools for working with LLMs:

pip install langchain openai

Your First LLM-Powered Assistant - The Simplest Version

We will start with the simplest version of the assistant: essentially a set of instructions in a prompt.

We’ll use LangChain’s OpenAI class to call the GPT-3.5 API in this example, but you can swap in any other LLM you want to use. Meanwhile, the PromptTemplate class lets you define the specific questions or tasks you want the LLM to handle via a parameterized prompt (which is pretty much an f-string). This combination forms the basis of your assistant's capabilities.

Prompt Template

Now that the basic setup is complete, let's put it to the test.

The run method executes the LLM chain, which takes the prompt template and returns the assistant's output. This will give you a firsthand look at how your assistant performs in a controlled scenario.

suggestion_chain.run(weather="sunny", occasion="work")

Here’s the result:

For a sunny workday, I recommend a light-colored blouse or shirt with slacks or trousers. Layer with a cardigan, blazer or light-weight jacket for a professional look. Add a belt or scarf to finish off the ensemble. Have a great day!

Not bad, but you still need to first check the weather online and then feed it to the chatbot manually, which is annoying.

Is there a way to automate this?

Making Your Assistant Context-Aware

Why Context Matters

We’ve created an assistant that's good for basic, general recommendations.

It's a start, but it doesn't have its finger on the pulse of real-world conditions. To enhance its capabilities, we'll infuse it with real-time data. This upgrade hinges on understanding two pivotal concepts in LangChain:

  • Agents: These are the decision-makers, mini-programs within LangChain that act based on their knowledge and the inputs they receive from you.
  • Tools: These are user-specified functions that agents call upon for more deterministic tasks. Whether it's accessing external APIs or running Python code, tools are your go-to for specialized actions.

LangChain provides a variety of built-in tools, but its true power lies in its flexibility—you can create custom tools tailored to your specific needs.

To illustrate, we'll make our assistant aware of real-time weather conditions by integrating a weather tool. This tool, built on an OpenWeatherMap API wrapper, will pull in current weather data.

But before it can do that, it needs to know the date of your event to look up specific weather forecasts and offer genuinely context-aware recommendations.

Understanding Natural Language Dates

As a language model, GPT-3.5 does not know the current date, and asking the user to provide it manually might not result in the best user experience.

However, we can overcome this drawback by adding a date inference tool. This tool parses natural language date expressions and converts them into a standardized format, which allows the assistant to understand and plan around specific dates mentioned by the user. The simplest way to create a tool in LangChain is to apply the @tool decorator to a function that takes a text string as input:

Date Parser Tool

Notice how we include a detailed description of what the tool does in the function docstring, which will be later passed to the agent executor to help the agent decide when to use the tool.

Adding Weather Awareness

Now we can add a tool that would allow the assistant to retrieve weather info using the OpenWeather API.

LangChain already ships with a lot of tools, which can be tweaked and/or used as examples for creating new custom tools. I’ve updated the existing OpenWeatherMapAPIWrapper class to enable it to take a date as input and look up future weather forecasts. We can then pass the new run method to the Tool class’s from_function method, which is another way to define a custom tool:

Weather Tool

(Here’s how to set up OpenWeatherMap API access if you need to add a similar tool to your assistant.)

Multi-input Tools

So far we’ve only created tools that take one string as input.

What if our agent needs to answer a question depending on multiple parameters? In this case, we’ll need to create a StructuredTool:

Structured Tool

The agent will read the function signature to figure out which parameter corresponds to which input, so try to make the argument names clear and intuitive!

The Assistant in Action: Now with Real-Time Data

We’ll use the ReAct agent framework, which is the most general-purpose action agent type in LangChain.

To put it simply, ReAct combines natural language processing and decision-making to solve complex tasks. It uses a language model to understand and generate human-like language, and a decision-making model to generate a plan of action based on the task description.

Here's how it works:

  1. ReAct receives a task description in natural language.
  2. The language model processes the task description and generates a plan of action.
  3. The decision-making model executes the plan of action and interacts with an external environment (such as a Wikipedia API) to retrieve information and update the plan as needed.
  4. As the plan is executed, the language model generates reasoning traces to help the decision-making model make decisions and adjust the plan if needed.
  5. ReAct continues to iterate through this process until the task is completed.

The STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION LangChain agent type is the implementation of the ReAct method that allows for tools with multiple inputs (StructuredTool).

The Agent (V0)

With the integration of dynamic data sources, the assistant is now far more context-aware. It can understand specific dates and use real-time weather data to make more accurate and useful recommendations.

Let’s see if it works by running a free-form query:

Great, our chatbot now applies a chain of reasoning before giving us an informed recommendation!

Tailoring the Assistant to Individual Users

The Importance of Personalization

The assistant is now more context-aware but lacks the ability to access user-specific details (unless manually provided by the user every time).

To make the assistant's recommendations more personalized, we'll provide it with access to a knowledge base that can store and retrieve user-specific information, such as items in a user's wardrobe. This can be done using a vector database that stores documents in the form of embeddings, which allows us to carry out similarity search on these documents.

For illustration purposes, I’ve generated a JSONL list of somewhat diverse clothing items a 30-year-old woman might own, using ChatGPT. I’ve then loaded it with LangChain’s JSONLoader and split in chunks:

JSON Loader

Now these documents are ready to be inserted into a vector store in the form of embeddings. I’m using OpenAI embeddings and a FAISS vector store, but you can use anything you prefer - this does not change the general logic. (Read more about vector databases here.)

We can then create a retriever tool - a function that searches the documents - and add it to the agent’s toolkit.

Retriever Tool

Now let’s give our agent access to the retriever and teach it how to use it.

We will use a new type of agent called Conversational Retrieval Agent, which is best suited for chatbots that need to look up information in a (vector) database.

You can provide a custom system prompt when initializing the agent to give it a better understanding of what it is supposed to do with all the tools at its disposal:

The Final Agent

The Final Product: A Personalized, Context-Aware Assistant

With the knowledge base in place, the assistant is now a fully-equipped, context-aware, and personalized tool. It can make recommendations based on both real-time data and individual user preferences, offering a tailored user experience.

Let’s see how it works:

Final Agent Results

That’s it!

Congratulations, you've successfully built a virtual assistant that is both context-aware and personalized!

The modular design of the code allows for easy adaptability, making it a versatile solution for a multitude of applications. You're now equipped with the knowledge to get started with creating your own custom agents (and the official documentation has always got your back should you need any extra info)!
Happy coding!

P.S. DISCLAIMER: This blog post is a result of my collaboration with a brilliant team consisting of ChatGPT, MidJourney and Copilot, as well as multiple amazing human reviewers at Sicara (thanks a lot guys <3).

P.P.S. Looking for GenAI/LLM experts for your company’s projects? We would be happy to get in touch! 🧡 Contact us
