Human-in-the-loop Tool Validation
This walkthrough demonstrates how to add human validation to any tool. We'll do this using the HumanApprovalCallbackHandler.
Suppose we need to make use of the ShellTool. Adding this tool to an automated flow poses obvious risks. Let's see how we could enforce manual human approval of inputs going into this tool.
Note: We generally recommend against using the ShellTool. There are plenty of ways to misuse it, and most use cases don't need it. We employ it here only for demonstration purposes.
from langchain.callbacks import HumanApprovalCallbackHandler
from langchain.tools import ShellTool
API Reference:
- HumanApprovalCallbackHandler from langchain.callbacks
- ShellTool from langchain.tools
tool = ShellTool()
print(tool.run("echo Hello World!"))
Hello World!
Adding Human Approval
Adding the default HumanApprovalCallbackHandler to the tool will make it so that a user has to manually approve every input to the tool before the command is actually executed.
tool = ShellTool(callbacks=[HumanApprovalCallbackHandler()])
Approved case
print(tool.run("ls /usr"))
Do you approve of the following input? Anything except 'Y'/'Yes' (case-insensitive) will be treated as a no.
ls /usr
yes
X11
X11R6
bin
lib
libexec
local
sbin
share
standalone
Rejected case:
print(tool.run("ls /private"))
Do you approve of the following input? Anything except 'Y'/'Yes' (case-insensitive) will be treated as a no.
ls /private
no
---------------------------------------------------------------------------
HumanRejectedException Traceback (most recent call last)
Cell In[17], line 1
----> 1 print(tool.run("ls /private"))
File ~/langchain/langchain/tools/base.py:257, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, **kwargs)
255 # TODO: maybe also pass through run_manager is _run supports kwargs
256 new_arg_supported = signature(self._run).parameters.get("run_manager")
--> 257 run_manager = callback_manager.on_tool_start(
258 {"name": self.name, "description": self.description},
259 tool_input if isinstance(tool_input, str) else str(tool_input),
260 color=start_color,
261 **kwargs,
262 )
263 try:
264 tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
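As the traceback above shows, a rejected input raises a HumanRejectedException before the command ever runs. If your application should recover rather than crash, you can catch the exception. A minimal sketch, assuming the exception is importable from langchain.callbacks.human (the exact import path may differ between versions):

from langchain.callbacks.human import HumanRejectedException  # import path is an assumption

try:
    print(tool.run("ls /private"))
except HumanRejectedException:
    # The approver answered no, so the shell command was never executed.
    print("Command was rejected and not executed.")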
Configuring Human Approval
Let's suppose we have an agent that takes in multiple tools, and we want it to trigger human approval requests only on certain tools and certain inputs. We can configure our callback handler to do just this.
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
API Reference:
- load_tools from langchain.agents
- initialize_agent from langchain.agents
- AgentType from langchain.agents
- OpenAI from langchain.llms
def _should_check(serialized_obj: dict) -> bool:
    # Only require approval on ShellTool.
    return serialized_obj.get("name") == "terminal"


def _approve(_input: str) -> bool:
    # Auto-approve this specific, known-safe command without prompting.
    if _input == "echo 'Hello World'":
        return True
    msg = (
        "Do you approve of the following input? "
        "Anything except 'Y'/'Yes' (case-insensitive) will be treated as a no."
    )
    msg += "\n\n" + _input + "\n"
    resp = input(msg)
    return resp.lower() in ("yes", "y")


callbacks = [HumanApprovalCallbackHandler(should_check=_should_check, approve=_approve)]
llm = OpenAI(temperature=0)
tools = load_tools(["wikipedia", "llm-math", "terminal"], llm=llm)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
agent.run(
    "It's 2023 now. How many years ago did Konrad Adenauer become Chancellor of Germany.",
    callbacks=callbacks,
)
'Konrad Adenauer became Chancellor of Germany in 1949, 74 years ago.'
agent.run("print 'Hello World' in the terminal", callbacks=callbacks)
'Hello World'
agent.run("list all directories in /private", callbacks=callbacks)
Do you approve of the following input? Anything except 'Y'/'Yes' (case-insensitive) will be treated as a no.
ls /private
no
---------------------------------------------------------------------------
HumanRejectedException Traceback (most recent call last)
Cell In[39], line 1
----> 1 agent.run("list all directories in /private", callbacks=callbacks)
File ~/langchain/langchain/chains/base.py:236, in Chain.run(self, callbacks, *args, **kwargs)
234 if len(args) != 1:
235 raise ValueError("`run` supports only one positional argument.")
--> 236 return self(args[0], callbacks=callbacks)[self.output_keys[0]]
238 if kwargs and not args:
239 return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
HumanRejectedException: Inputs ls /private to tool {'name': 'terminal', 'description': 'Run shell commands on this MacOS machine.'} were rejected.
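The same HumanRejectedException propagates out of agent.run, so application code can handle a rejected tool call at the agent level as well. A short sketch under the same import assumption as above:

try:
    result = agent.run("list all directories in /private", callbacks=callbacks)
except HumanRejectedException:
    # A rejected tool input aborts the whole agent run.
    result = "The requested shell command was not approved."
print(result)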