
Custom callback handlers

You can also create a custom handler and set it on the object. In the example below, we'll implement streaming with a custom handler.

from langchain.callbacks.base import BaseCallbackHandler  
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

class MyCustomHandler(BaseCallbackHandler):
    # Called on every new token streamed back from the LLM
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(f"My custom handler, token: {token}")

# To enable streaming, we pass `streaming=True` to the ChatModel constructor.
# We also pass in a list containing our custom handler.
chat = ChatOpenAI(max_tokens=25, streaming=True, callbacks=[MyCustomHandler()])

chat([HumanMessage(content="Tell me a joke")])

API Reference: BaseCallbackHandler | ChatOpenAI | HumanMessage

My custom handler, token:   
My custom handler, token: Why
My custom handler, token: don
My custom handler, token: 't
My custom handler, token: scientists
My custom handler, token: trust
My custom handler, token: atoms
My custom handler, token: ?
My custom handler, token:

My custom handler, token: Because
My custom handler, token: they
My custom handler, token: make
My custom handler, token: up
My custom handler, token: everything
My custom handler, token: .
AIMessage(content="Why don't scientists trust atoms? \n\nBecause they make up everything.", additional_kwargs={}, example=False)
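A handler can implement other callback methods besides `on_llm_new_token` (for example `on_llm_end`, which receives the final result), and callbacks can also be supplied per request rather than in the constructor. The following is a minimal sketch under those assumptions; `MyLoggingHandler` and its printed messages are illustrative, not part of the library.

from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

class MyLoggingHandler(BaseCallbackHandler):
    # Illustrative handler: prints each streamed token and a note when the call finishes
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(f"token: {token}")

    def on_llm_end(self, response, **kwargs) -> None:
        print("LLM call finished")

chat = ChatOpenAI(max_tokens=25, streaming=True)

# Callbacks passed at request time apply only to this call
chat([HumanMessage(content="Tell me a joke")], callbacks=[MyLoggingHandler()])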