# How to add memory to a multi-input chain
Most memory objects assume a single input. In this notebook, we cover how to add memory to a chain that has multiple inputs. As an example of such a chain, we will add memory to a question-answering chain, which takes both the relevant documents and a user question as inputs.
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings.cohere import CohereEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
from langchain.vectorstores import Chroma
from langchain.docstore.document import Document
```
API Reference:
- `OpenAIEmbeddings` from `langchain.embeddings.openai`
- `CohereEmbeddings` from `langchain.embeddings.cohere`
- `CharacterTextSplitter` from `langchain.text_splitter`
- `ElasticVectorSearch` from `langchain.vectorstores.elastic_vector_search`
- `Chroma` from `langchain.vectorstores`
- `Document` from `langchain.docstore.document`
```python
with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)

embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(
    texts, embeddings, metadatas=[{"source": i} for i in range(len(texts))]
)
```
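The `CharacterTextSplitter` above produces chunks of at most roughly 1000 characters with no overlap between consecutive chunks. As a rough intuition for the `chunk_size`/`chunk_overlap` parameters, here is a naive fixed-width sketch (this is not the actual splitter, which prefers to break on separator boundaries):

```python
def naive_split(text, chunk_size):
    # Simplified stand-in for CharacterTextSplitter with chunk_overlap=0:
    # cut the text into consecutive fixed-width windows.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

chunks = naive_split("a" * 2500, 1000)
print([len(c) for c in chunks])  # [1000, 1000, 500]
```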
```
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
```
```python
query = "What did the president say about Justice Breyer"
docs = docsearch.similarity_search(query)
```
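Under the hood, `similarity_search` embeds the query and ranks the stored chunks by vector similarity to it. A toy sketch of that ranking step using cosine similarity (the vectors below are made-up illustrative numbers, not real embeddings):

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy "embeddings" for three chunks and a query (illustrative numbers only).
chunks = {
    "chunk about Justice Breyer": [0.9, 0.1, 0.0],
    "chunk about the economy":    [0.1, 0.8, 0.3],
    "chunk about foreign policy": [0.0, 0.2, 0.9],
}
query_vec = [0.85, 0.15, 0.05]

ranked = sorted(chunks, key=lambda c: cosine(chunks[c], query_vec), reverse=True)
print(ranked[0])  # the Breyer chunk ranks first
```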
```python
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory
```
API Reference:
- `load_qa_chain` from `langchain.chains.question_answering`
- `OpenAI` from `langchain.llms`
- `PromptTemplate` from `langchain.prompts`
- `ConversationBufferMemory` from `langchain.memory`
```python
template = """You are a chatbot having a conversation with a human.
Given the following extracted parts of a long document and a question, create a final answer.
{context}
{chat_history}
Human: {human_input}
Chatbot:"""

prompt = PromptTemplate(
    input_variables=["chat_history", "human_input", "context"], template=template
)
```
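To see how the three input variables are combined, you can fill the same template with plain `str.format` (the stand-in values below are illustrative; in the real chain, `context` is filled with the stuffed documents and `chat_history` by the memory):

```python
template = """You are a chatbot having a conversation with a human.
Given the following extracted parts of a long document and a question, create a final answer.
{context}
{chat_history}
Human: {human_input}
Chatbot:"""

filled = template.format(
    context="<stuffed document chunks>",
    chat_history="Human: Hi\nAI: Hello! How can I help?",
    human_input="What did the president say about Justice Breyer",
)
print(filled)
```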
```python
memory = ConversationBufferMemory(memory_key="chat_history", input_key="human_input")
chain = load_qa_chain(
    OpenAI(temperature=0), chain_type="stuff", memory=memory, prompt=prompt
)
```
```python
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "human_input": query}, return_only_outputs=True)

print(chain.memory.buffer)
```
```
Human: What did the president say about Justice Breyer
AI: Tonight, I'd like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
```
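Note that the buffer records only the human question and the model's answer: because the chain receives several inputs, the `input_key="human_input"` argument tells the memory which one to treat as the user message, while `input_documents` is ignored. A minimal pure-Python sketch of that behavior (a hypothetical class for illustration, not the LangChain implementation):

```python
# Hypothetical minimal re-implementation of a buffer memory with an explicit
# input_key, to show why it matters when a chain has several inputs.
class SimpleBufferMemory:
    def __init__(self, memory_key, input_key):
        self.memory_key = memory_key  # prompt variable the history is injected into
        self.input_key = input_key    # which chain input holds the human message
        self.turns = []

    def load_memory_variables(self, inputs):
        return {self.memory_key: "\n".join(self.turns)}

    def save_context(self, inputs, outputs):
        # Only the input named by input_key is recorded; the other inputs
        # (here, the retrieved documents) are ignored.
        self.turns.append("Human: " + inputs[self.input_key])
        self.turns.append("AI: " + next(iter(outputs.values())))

mem = SimpleBufferMemory(memory_key="chat_history", input_key="human_input")
mem.save_context(
    {"input_documents": ["<retrieved docs>"], "human_input": "What did the president say about Justice Breyer"},
    {"output_text": "He honored Justice Breyer for his service."},
)
print(mem.load_memory_variables({})["chat_history"])
```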