Conversation Summary Memory

Now let's take a look at a slightly more complex memory type: ConversationSummaryMemory. This type of memory creates a summary of the conversation over time, which is useful for condensing information from the conversation. It summarizes the conversation as it happens and stores the current summary in memory, which can then be used to inject the summary of the conversation so far into a prompt or chain. This memory is most useful for longer conversations, where keeping the full message history in the prompt verbatim would take up too many tokens.

Let's first explore the basic functionality of this type of memory.

from langchain.memory import ConversationSummaryMemory, ChatMessageHistory  
from langchain.llms import OpenAI

memory = ConversationSummaryMemory(llm=OpenAI(temperature=0))
memory.save_context({"input": "hi"}, {"output": "whats up"})

memory.load_memory_variables({})

{'history': '\nThe human greets the AI, to which the AI responds.'}
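
The exact wording of the summary depends on the LLM, so it may vary between runs. If you want more control over how summaries are written, ConversationSummaryMemory also accepts a custom summarization prompt. Below is a minimal sketch, assuming the custom prompt keeps the same input variables as the default summary prompt ("summary" and "new_lines"); the template text itself is just illustrative.

from langchain.llms import OpenAI
from langchain.memory import ConversationSummaryMemory
from langchain.prompts import PromptTemplate

# Custom summarization prompt; it must expose the "summary" and "new_lines"
# variables that the summarizer fills in on each update.
custom_prompt = PromptTemplate(
    input_variables=["summary", "new_lines"],
    template=(
        "Progressively summarize the conversation below, keeping the summary brief.\n\n"
        "Current summary:\n{summary}\n\n"
        "New lines of conversation:\n{new_lines}\n\n"
        "New summary:"
    ),
)

memory = ConversationSummaryMemory(llm=OpenAI(temperature=0), prompt=custom_prompt)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.load_memory_variables({})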

We can also get the history as a list of messages (this is useful if you are using it with a chat model).

memory = ConversationSummaryMemory(llm=OpenAI(temperature=0), return_messages=True)  
memory.save_context({"input": "hi"}, {"output": "whats up"})

memory.load_memory_variables({})

{'history': [SystemMessage(content='\nThe human greets the AI, to which the AI responds.', additional_kwargs={})]}
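
Since return_messages=True hands the summary back as a SystemMessage, it can be dropped straight into a chat model's message list. Here is a minimal sketch, assuming the langchain.chat_models.ChatOpenAI wrapper and continuing with the memory object from the cell above; the follow-up question is just illustrative.

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(temperature=0)

# The summary comes back as a single SystemMessage when return_messages=True.
summary_messages = memory.load_memory_variables({})["history"]

# Prepend the running summary to the new user turn before calling the chat model.
response = chat(summary_messages + [HumanMessage(content="Remind me what we have discussed so far.")])
response.content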

We can also use the predict_new_summary method directly.

messages = memory.chat_memory.messages  
previous_summary = ""
memory.predict_new_summary(messages, previous_summary)

'\nThe human greets the AI, to which the AI responds.'
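
Because predict_new_summary takes both a list of messages and an existing summary, you can also use it to fold new messages into a summary you already have. A minimal sketch follows; the messages below are purely illustrative.

from langchain.schema import AIMessage, HumanMessage

previous_summary = "The human greets the AI, to which the AI responds."
new_messages = [
    HumanMessage(content="Can you recommend a good pizza place nearby?"),
    AIMessage(content="Sure, there are a couple of well-reviewed spots downtown."),
]

# Produce an updated summary that folds the new turns into the prior one.
memory.predict_new_summary(new_messages, previous_summary)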

Initializing with messages / an existing summary

If you have messages outside this class, you can easily initialize it with ChatMessageHistory. The summary will be computed on load.

history = ChatMessageHistory()  
history.add_user_message("hi")
history.add_ai_message("hi there!")

memory = ConversationSummaryMemory.from_messages(
    llm=OpenAI(temperature=0),
    chat_memory=history,
    return_messages=True
)

memory.buffer

'\nThe human greets the AI, to which the AI responds with a friendly greeting.'

Optionally, you can speed up initialization by passing in a previously generated summary and initializing directly, avoiding the cost of regenerating it.

memory = ConversationSummaryMemory(
    llm=OpenAI(temperature=0),
    buffer="The human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.",
    chat_memory=history,
    return_messages=True
)
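
One practical use of this is persisting the summary between sessions: save memory.buffer when a conversation ends and pass it back in as buffer when the memory is recreated, so nothing has to be re-summarized. Below is a minimal sketch using a local JSON file; the file name is just illustrative.

import json

# Save the running summary at the end of a session...
with open("conversation_summary.json", "w") as f:
    json.dump({"summary": memory.buffer}, f)

# ...and restore it later without regenerating the summary.
with open("conversation_summary.json") as f:
    saved = json.load(f)

restored_memory = ConversationSummaryMemory(
    llm=OpenAI(temperature=0),
    buffer=saved["summary"],
    chat_memory=ChatMessageHistory(),
    return_messages=True,
)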

Using in a chain

Let's walk through an example of using this in a chain, again setting verbose=True so we can see the prompt.

from langchain.llms import OpenAI  
from langchain.chains import ConversationChain

llm = OpenAI(temperature=0)
conversation_with_summary = ConversationChain(
    llm=llm,
    memory=ConversationSummaryMemory(llm=OpenAI()),
    verbose=True
)

conversation_with_summary.predict(input="Hi, what's up?")

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hi, what's up?
AI:

> Finished chain.

" Hi there! I'm doing great. I'm currently helping a customer with a technical issue. How about you?"
conversation_with_summary.predict(input="Tell me more about it!")  

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

The human greeted the AI and asked how it was doing. The AI replied that it was doing great and was currently helping a customer with a technical issue.
Human: Tell me more about it!
AI:

> Finished chain.

" Sure! The customer is having trouble with their computer not connecting to the internet. I'm helping them troubleshoot the issue and figure out what the problem is. So far, we've tried resetting the router and checking the network settings, but the issue still persists. We're currently looking into other possible solutions."
conversation_with_summary.predict(input="Very cool -- what is the scope of the project?")  

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

The human greeted the AI and asked how it was doing. The AI replied that it was doing great and was currently helping a customer with a technical issue where their computer was not connecting to the internet. The AI was troubleshooting the issue and had already tried resetting the router and checking the network settings, but the issue still persisted and they were looking into other possible solutions.
Human: Very cool -- what is the scope of the project?
AI:

> Finished chain.

" The scope of the project is to troubleshoot the customer's computer issue and find a solution that will allow them to connect to the internet. We are currently exploring different possibilities and have already tried resetting the router and checking the network settings, but the issue still persists."