
Multiple callback handlers

In the previous examples, we passed in callback handlers upon creation of an object by using callbacks=. In this case, the callbacks will be scoped to that particular object.
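For example, a handler passed to the constructor of an LLM is only invoked for that LLM's own events. Here is a minimal sketch of that pattern; the ConstructorScopedHandler class and the prompt are illustrative, not part of the example further below:

from langchain.callbacks.base import BaseCallbackHandler
from langchain.llms import OpenAI


class ConstructorScopedHandler(BaseCallbackHandler):
    def on_llm_start(self, serialized, prompts, **kwargs):
        print(f"on_llm_start {serialized['name']}")


# The handler is scoped to this particular LLM object: it only receives
# this LLM's events, regardless of which chain or agent the LLM is used in.
llm = OpenAI(temperature=0, callbacks=[ConstructorScopedHandler()])
llm("Tell me a joke")  # prints: on_llm_start OpenAI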

In many cases, however, it is advantageous to pass in handlers when running the object instead of at creation time. When we pass CallbackHandlers using the callbacks keyword argument while executing a run, those callbacks will be issued by all nested objects involved in the execution. For example, when a handler is passed to an Agent, it will be used for all callbacks related to the agent and for all objects involved in the agent's execution, in this case the Tools, LLMChain, and LLM.

This saves us from having to manually attach the handler to each individual nested object.

from typing import Dict, Union, Any, List

from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import AgentAction
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import tracing_enabled
from langchain.llms import OpenAI


# First, define the custom callback handler implementations
class MyCustomHandlerOne(BaseCallbackHandler):
    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        print(f"on_llm_start {serialized['name']}")

    def on_llm_new_token(self, token: str, **kwargs: Any) -> Any:
        print(f"on_new_token {token}")

    def on_llm_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when the LLM errors."""

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
    ) -> Any:
        print(f"on_chain_start {serialized['name']}")

    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
    ) -> Any:
        print(f"on_tool_start {serialized['name']}")

    def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
        print(f"on_agent_action {action}")


class MyCustomHandlerTwo(BaseCallbackHandler):
    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        print(f"on_llm_start (I'm the second handler!!) {serialized['name']}")


# Instantiate the handlers
handler1 = MyCustomHandlerOne()
handler2 = MyCustomHandlerTwo()

# Set up the agent. Only the `llm` will issue callbacks for handler2
llm = OpenAI(temperature=0, streaming=True, callbacks=[handler2])
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)

# Callbacks for handler1 will be issued by every object involved in the
# agent's execution (llm, llmchain, tool, agent executor)
agent.run("What is 2 raised to the 0.235 power?", callbacks=[handler1])


on_chain_start AgentExecutor
on_chain_start LLMChain
on_llm_start OpenAI
on_llm_start (I'm the second handler!!) OpenAI
on_new_token I
on_new_token need
on_new_token to
on_new_token use
on_new_token a
on_new_token calculator
on_new_token to
on_new_token solve
on_new_token this
on_new_token .
on_new_token
Action
on_new_token :
on_new_token Calculator
on_new_token
Action
on_new_token Input
on_new_token :
on_new_token 2
on_new_token ^
on_new_token 0
on_new_token .
on_new_token 235
on_new_token
on_agent_action AgentAction(tool='Calculator', tool_input='2^0.235', log=' I need to use a calculator to solve this.\nAction: Calculator\nAction Input: 2^0.235')
on_tool_start Calculator
on_chain_start LLMMathChain
on_chain_start LLMChain
on_llm_start OpenAI
on_llm_start (I'm the second handler!!) OpenAI
on_new_token
on_new_token ```text
on_new_token

on_new_token 2
on_new_token **
on_new_token 0
on_new_token .
on_new_token 235
on_new_token

on_new_token ```

on_new_token ...
on_new_token num
on_new_token expr
on_new_token .
on_new_token evaluate
on_new_token ("
on_new_token 2
on_new_token **
on_new_token 0
on_new_token .
on_new_token 235
on_new_token ")
on_new_token ...
on_new_token

on_new_token
on_chain_start LLMChain
on_llm_start OpenAI
on_llm_start (I'm the second handler!!) OpenAI
on_new_token I
on_new_token now
on_new_token know
on_new_token the
on_new_token final
on_new_token answer
on_new_token .
on_new_token
Final
on_new_token Answer
on_new_token :
on_new_token 1
on_new_token .
on_new_token 17
on_new_token 690
on_new_token 67
on_new_token 372
on_new_token 187
on_new_token 674
on_new_token

'1.1769067372187674'