
LangChain Conversation Memory

Author: Venkata Sudhakar

By default an LLM has no memory - every call is independent. If a customer tells your support chatbot "my order number is 4821" in one message and then asks "when will it arrive?" in the next, the bot has no idea which order they mean. Conversation memory solves this by maintaining the message history and including it in every subsequent call to the LLM. The model can then refer back to anything said earlier in the session, making the conversation feel natural and coherent rather than like talking to someone with amnesia.

LangChain provides several memory classes. ConversationBufferMemory keeps every message in full - simple and reliable for short conversations. ConversationBufferWindowMemory keeps only the last K message pairs - useful when you want to limit token usage on long chats. ConversationSummaryMemory uses the LLM itself to compress old messages into a running summary, keeping costs low while preserving the key facts from earlier in the conversation. For a typical customer support chatbot, window memory with the last 10 pairs (K = 10) covers most sessions comfortably.
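The trimming idea behind ConversationBufferWindowMemory can be sketched in plain Python, without LangChain itself (the class performs this kind of trimming internally; the function name `window_messages` is illustrative):

```python
# Keep only the last k (human, ai) message pairs when building the
# next prompt - the essence of window memory.

def window_messages(messages, k):
    """Keep only the last k pairs; each pair is two messages."""
    return messages[-2 * k:] if k > 0 else []

history = [
    ("human", "Hi, my name is Priya"),
    ("ai", "Hello Priya! Welcome to FreshMart."),
    ("human", "I am looking for organic products"),
    ("ai", "Great choice, Priya!"),
    ("human", "My budget is around Rs 500 per week"),
    ("ai", "Perfect - Rs 500 a week works well."),
]

# With k=2 the oldest pair (the introduction) is dropped from the prompt.
recent = window_messages(history, 2)
print(len(recent))  # 4 messages = 2 pairs
```

Note the trade-off: with K = 2 the bot would already have forgotten the customer's name, which is why the article suggests K = 10 for support chats.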

The example below builds a shopping assistant that remembers the customer's name, product interest, and budget across multiple turns - no repetition needed from the customer.
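The memory plumbing can be sketched in plain Python. In real LangChain code, RunnableWithMessageHistory does this same bookkeeping around an actual chat model; here `respond()` is an explicitly scripted stand-in for the LLM call, and all names (`store`, `get_session_history`, the session id) are illustrative:

```python
# Sketch of the session-memory plumbing behind the shopping assistant.
# respond() is a scripted stand-in for the real model invocation.

store = {}  # session_id -> list of (role, text) messages

def get_session_history(session_id):
    return store.setdefault(session_id, [])

def build_prompt(history, user_input):
    # The whole stored history rides along with every new question.
    lines = [f"{role}: {text}" for role, text in history]
    lines.append(f"Customer: {user_input}")
    return "\n".join(lines)

def chat(session_id, user_input, respond):
    history = get_session_history(session_id)
    reply = respond(build_prompt(history, user_input))  # real code: llm.invoke(...)
    history.append(("Customer", user_input))  # history updated after each turn
    history.append(("Bot", reply))
    return reply

# Stand-in "model": reports which earlier facts are visible in its prompt.
def respond(prompt):
    facts = [f for f in ("Priya", "organic", "Rs 500") if f in prompt]
    return "I can see: " + ", ".join(facts) if facts else "Hello!"

sid = "priya-001"
chat(sid, "Hi, my name is Priya", respond)
chat(sid, "I am looking for organic products", respond)
chat(sid, "My budget is around Rs 500 per week", respond)
print(chat(sid, "What would you recommend for me?", respond))
# -> I can see: Priya, organic, Rs 500
```

The final turn's prompt contains every earlier fact even though the customer never repeated them - exactly the behaviour shown in the transcript below.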


It gives the following output,

Customer: Hi, my name is Priya
Bot: Hello Priya! Welcome to FreshMart. How can I help you today?

Customer: I am looking for organic products
Bot: Great choice, Priya! We have a wide range of certified organic fruits,
vegetables, dairy, and grains. Any particular category you have in mind?

Customer: My budget is around Rs 500 per week
Bot: Perfect - Rs 500 a week works well for organic basics. You could cover
seasonal vegetables, one or two fruits, and some pantry staples comfortably.

Customer: What would you recommend for me?
Bot: Based on your interest in organic products and a Rs 500 weekly budget,
here is a good starting basket for you, Priya:
- Organic seasonal vegetables (spinach, carrots, tomatoes) - Rs 180
- Organic bananas or apples - Rs 80
- Organic whole wheat flour or rice - Rs 120
- Organic dal or pulses - Rs 120
Total: Rs 500

# Notice: the bot remembered "Priya", "organic", and "Rs 500"
# without the customer repeating any of it in the final question
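Dumping the stored session can be sketched as below, assuming the history is a plain list of (role, text) tuples - LangChain's ChatMessageHistory exposes a `.messages` list in the same spirit. The transcript contents and the 60-character truncation width are illustrative:

```python
# Inspect a stored session: count the messages and print each one,
# truncating long bot replies for readability.

session = [
    ("Customer", "Hi, my name is Priya"),
    ("Bot", "Hello Priya! Welcome to FreshMart. How can I help you today?"),
    ("Customer", "I am looking for organic products"),
    ("Bot", "Great choice, Priya! We have a wide range of certified organic fruits, vegetables, dairy, and grains."),
    ("Customer", "My budget is around Rs 500 per week"),
    ("Bot", "Perfect - Rs 500 a week works well for organic basics."),
    ("Customer", "What would you recommend for me?"),
    ("Bot", "Based on your interest in organic products and a Rs 500 weekly budget, here is a good starting basket."),
]

print("Messages stored in session:", len(session))
for role, text in session:
    shown = text if len(text) <= 60 else text[:60] + "..."
    print(f"{role}: {shown}")
```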

Inspecting the stored session history gives the following output,

Messages stored in session: 8
Customer: Hi, my name is Priya
Bot: Hello Priya! Welcome to FreshMart. How can I help you today?
Customer: I am looking for organic products
Bot: Great choice, Priya! We have a wide range of certified organ...
Customer: My budget is around Rs 500 per week
Bot: Perfect - Rs 500 a week works well for organic basics. You c...
Customer: What would you recommend for me?
Bot: Based on your interest in organic products and a Rs 500 weekl...

# Full conversation stored - 4 turns = 8 messages
# Each new message is automatically appended by RunnableWithMessageHistory

Each session_id creates a completely isolated conversation - customer A and customer B never see each other's messages. In production, generate a unique session ID when a customer starts a chat (a UUID tied to their login session), store the ChatMessageHistory in Redis so it survives server restarts, and set a TTL so old sessions are cleaned up automatically. For very long conversations (more than 20 turns), switch to ConversationSummaryMemory to keep costs manageable while preserving key facts like the customer name, order number, and stated preferences.
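The TTL cleanup idea can be sketched in plain Python. In production you would store the serialized history in Redis and let its EXPIRE mechanism handle cleanup; everything below - the class name, the injectable clock, and the 30-minute TTL - is an illustrative stand-in:

```python
import time

# Sketch of a session store with idle-timeout cleanup. Sessions not
# touched within the TTL are dropped on the next access.

SESSION_TTL = 30 * 60  # 30 minutes, an illustrative choice

class SessionStore:
    def __init__(self, ttl=SESSION_TTL, clock=time.monotonic):
        self._ttl = ttl
        self._clock = clock  # injectable so expiry is testable
        self._data = {}      # session_id -> (last_used, messages)

    def get(self, session_id):
        now = self._clock()
        # Drop any session idle longer than the TTL.
        self._data = {sid: (t, msgs) for sid, (t, msgs) in self._data.items()
                      if now - t < self._ttl}
        t, msgs = self._data.get(session_id, (now, []))
        self._data[session_id] = (now, msgs)  # touch on access
        return msgs

# Demo with a fake clock so expiry is visible without waiting.
fake_now = [0.0]
sessions = SessionStore(ttl=10, clock=lambda: fake_now[0])
sessions.get("a").append(("Customer", "hi"))
fake_now[0] = 5.0
print(len(sessions.get("a")))   # 1 - still alive, timer reset by access
fake_now[0] = 20.0
print(len(sessions.get("a")))   # 0 - expired, a fresh session is created
```

Touch-on-access mirrors how a chat session should behave: the timeout measures idle time, not total session length.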
