By: Samuel Chan · October 16, 2024
This chapter looks at conversational memory through both the newer implementation (built on runnables) and the older implementations featuring `LLMChain` and `ConversationChain`. Much of the older material out there is written around `LLMChain` or `ConversationChain`, and the simplicity of these classes made it easy to showcase the memory system. I will first demonstrate how that is done before moving on to the newer, more flexible `RunnableWithMessageHistory` class, as recommended in the latest version of LangChain (0.3.2).
LLMChain and ConversationChain

Historically, conversational memory was demonstrated with the `LLMChain` and `ConversationChain` classes. As of LangChain 0.3.0 (mid-October '24), both will yield a `LangChainDeprecationWarning`:

- `LLMChain` was deprecated in LangChain 0.1.17 and will be removed in 1.0. Use `RunnableSequence`, e.g. `prompt | llm`, instead.
- `ConversationChain` was deprecated in LangChain 0.2.7 and will be removed in 1.0. Use `RunnableWithMessageHistory` (https://python.langchain.com/v0.2/api_reference/core/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html) instead.

We begin with the `PromptTemplate` class, which defines the template for the prompt. How we name the variables in the template is important, as they will be used to match the keys in the memory system.
Next comes the `ConversationBufferMemory` class, a simple memory system that stores the conversation history in a buffer. It requires a `memory_key` that matches the key in the prompt template. Since we use `{history}` in the prompt template, the memory system stores the conversation history under the key `history`, which is used to augment the prompt before it is passed to the model. If desired, you can also manipulate the memory system directly, adding user or AI messages to the conversation history through the `chat_memory` attribute.
With a `ConversationChain` or `LLMChain` set up, you can interact with it as you would with any other chain. The memory system automatically updates the conversation history with each turn, and the model can access this history in subsequent turns.
RunnableWithMessageHistory

The recommended approach is now to use the `RunnableWithMessageHistory` class along with LCEL (LangChain Expression Language) to build your conversational AI agents. ReAct agents and LCEL are covered in Chapter 4: Tool-Use ReAct Agents of this series.
The key changes with LangChain 0.3.2 and above are the use of `RunnableWithMessageHistory` to construct a runnable (consistent with what we've learned in previous chapters of this series) and a more explicit way of handling message history through `InMemoryChatMessageHistory`. `RunnableWithMessageHistory` wraps around a runnable (like the ones we've seen before) with the added capability of working with chat message history, allowing the runnable to read and update the message history in a conversation. Unlike other runnables, `RunnableWithMessageHistory` must always be invoked with a `config` that contains the parameters for the chat message history.
Let's start with the imports and set up a runnable chain, much like you've done in the previous chapters. Here the prompt template uses two variables, `history` and `question`, but your use-case may vary. The big-picture idea isn't much different from the previous examples: we create these variables so the memory system can augment the prompt before passing it to the model. Syntactic differences aside, the key idea is to inject, or "copy-paste", past conversational rounds into the prompt so that the prompt is contextually informative.
For persistence beyond a single process, you could instead reach for an integration such as `RedisChatMessageHistory` or `MongoDBChatMessageHistory`. View the full list of integration packages and providers on LangChain Providers.

With our `chain` set up, let's now wrap it with `RunnableWithMessageHistory` to handle the message history, matching the variables in the prompt template. The `get_session_history_by_id` function retrieves the message history based on a unique session id. If the `session_id` is not found in the store, the user has not interacted with the agent before, so a new `InMemoryChatMessageHistory` object is created and stored in the dictionary.
Let's now invoke the `with_memory` runnable to see how it performs in a conversation. Since the session id `supertype` is not present in `store`, a new `InMemoryChatMessageHistory` object is created in our memory store under the `supertype` key. Subsequent interactions with the agent using this `session_id` will refer to this key (pointing to an object containing the conversation history).

Just as we initialized `store` as an empty dictionary, `print(store)` will show that the dictionary now maps each session id to its message history. Now that `store` has been updated with this new key, let's also print out the content of this new key-value pair.
Different `session_id` for different conversations

Given the same `session_id`, the agent was able to identify which companies were being referred to and inject the right context from our memory store. Now that our conversation has grown a little longer, let's see if it still maintains context in the next question.
Invoked with a new `session_id`, the agent will not be able to retrieve any conversation history from the `store` dictionary, and will promptly create a new `InMemoryChatMessageHistory` object for that `session_id`, as implemented in the `get_session_history_by_id` function.
To key the history on more than a single session id, `RunnableWithMessageHistory` accepts a `history_factory_config` argument that expects a list of `ConfigurableFieldSpec` objects. We then pass, as `get_session_history`, a new function that I have yet to create, so let's go ahead and create it:
I have also updated the `prompt` for this example, even though that is not necessary for the `history_factory_config` to work. When invoking a runnable configured with a `history_factory_config`, your `config` will have to match the specifications constructed with the `ConfigurableFieldSpec` objects. The prompt draws on two helper functions, `_get_stocks_of_user` and `_get_user_settings_preferences`:
We chat first as user Sam (id `001`), and then as user Anonymous (id `002`). If no `conversation_id` is supplied, it defaults to `1`. This is verified by printing the `store` dictionary after the first chat:
User 2 does not pass a `conversation_id` of `1` explicitly, but due to the implementation of `get_session_history_by_uid_and_convoid`, a new `InMemoryChatMessageHistory` object will still be created for that user. Let's verify that asking the AI for the name (user 1 introduces himself as Sam) will not work for user 2. Even though the `conversation_id` is the same, our function is implemented in such a way that the AI agent will treat it as a separate conversation. Finally, let's return to user 1, again with a `conversation_id` of `1`:
SQLChatMessageHistory

To persist conversations beyond the lifetime of the Python process, let's demonstrate `SQLChatMessageHistory` with SQLite. Start by installing the `langchain-community` package, which contains the `SQLChatMessageHistory` class. As always, I recommend doing this in a virtual environment. Then import the `SQLChatMessageHistory` class and modify your `get_session_history_by_uid_and_convoid` function to use it, swapping out `InMemoryChatMessageHistory` for `SQLChatMessageHistory`.
When you call `chat(user_id, input)` for the first time, it will create a new `memory.db` file in the same directory as your script, with a `message_store` table created for us: an `id` primary key, a `session_id` column, and a `message` column holding the serialized message. Running `SELECT * FROM message_store` will show you the conversation history stored in the database.
ReAct agents

We built ReAct agents in the previous chapter. Adding in-memory capabilities to these agents is actually fairly straightforward, so let's see a bare-minimum example of how to do this. The key ingredient is the `MemorySaver` class, which LangChain describes as an in-memory checkpoint saver. Just like the `store = {}` dictionary we used in the previous examples, this class stores its checkpoints in memory, using a `defaultdict`.
I've mentioned that `create_react_agent` really requires only two arguments, the `llm` model and the `tools` list, but it accepts additional keyword arguments. If you want to, you can also pass in a `state_modifier` that acts almost like a prompt (we've also seen this earlier):
You would typically implement something more useful under the `@tool` decorator, but this serves as a sufficient example to demonstrate a tool-using ("function calling") ReAct agent with memory capabilities. Invoking the agent yields a response produced with the help of the `get_company_overview` tool. In fact, if we so desire, we can also break down each intermediary message contained in the `out['messages']` list for inspection:

- A `HumanMessage` object, which is the user's input (e.g. "Give me an overview of ADRO")
- An `AIMessage`, which reads the user's input and decides on the right tools to call
- A `ToolMessage`, which carries the result of the tool call (e.g. from `get_company_overview`)
- A closing `AIMessage`, which is the AI agent's response to the user's input, in plain human language

We wrap all of this in `chat()` and will now proceed to ask a few follow-up questions to see the memory in action.