Moderation

This notebook walks through how to use a moderation chain, and several common ways of doing so. Moderation chains are useful for detecting text that could be hateful, violent, etc. This can be applied to user input, but also to the output of a language model. Some API providers, like OpenAI, specifically prohibit you, or your end users, from generating some types of harmful content. To comply with this (and to just generally prevent your application from being harmful), you may often want to append a moderation chain to any LLMChains, to make sure any output the LLM generates is not harmful.

If the content passed into the moderation chain is harmful, there is no one best way to handle it; it likely depends on your application. Sometimes you may want to throw an error in the chain (and have your application handle that). Other times, you may want to return something to the user explaining that the text was harmful. There could even be other ways to handle it! We will cover all of these approaches in this walkthrough.

We'll show:

  1. How to run any piece of text through a moderation chain.
  2. How to append a moderation chain to an LLMChain.
from langchain.llms import OpenAI
from langchain.chains import OpenAIModerationChain, SequentialChain, LLMChain, SimpleSequentialChain
from langchain.prompts import PromptTemplate
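
The moderation chain calls OpenAI's moderation endpoint under the hood, so an OpenAI API key must be available before running the cells below. A minimal setup sketch, assuming the key is supplied via an environment variable:

import os

# OpenAIModerationChain and OpenAI both read the key from the environment.
os.environ["OPENAI_API_KEY"] = "sk-..."  # replace with your actual key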

How to use the moderation chain

Here's an example of using the moderation chain with default settings (it will return a string explaining that the content was flagged).

moderation_chain = OpenAIModerationChain()
moderation_chain.run("This is okay")
    'This is okay'
moderation_chain.run("I will kill you")
    "Text was found that violates OpenAI's content policy."

Here's an example of using the moderation chain to throw an error.

moderation_chain_error = OpenAIModerationChain(error=True)
moderation_chain_error.run("This is okay")
    'This is okay'
moderation_chain_error.run("I will kill you")
    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    Cell In[7], line 1
    ----> 1 moderation_chain_error.run("I will kill you")

    File ~/workplace/langchain/langchain/chains/base.py:138, in Chain.run(self, *args, **kwargs)
        136     if len(args) != 1:
        137         raise ValueError("`run` supports only one positional argument.")
    --> 138     return self(args[0])[self.output_keys[0]]
        140 if kwargs and not args:
        141     return self(kwargs)[self.output_keys[0]]

    File ~/workplace/langchain/langchain/chains/base.py:112, in Chain.__call__(self, inputs, return_only_outputs)
        108 if self.verbose:
        109     print(
        110         f"\n\n\033[1m> Entering new {self.__class__.__name__} chain...\033[0m"
        111     )
    --> 112 outputs = self._call(inputs)
        113 if self.verbose:
        114     print(f"\n\033[1m> Finished {self.__class__.__name__} chain.\033[0m")

    File ~/workplace/langchain/langchain/chains/moderation.py:81, in OpenAIModerationChain._call(self, inputs)
        79 text = inputs[self.input_key]
        80 results = self.client.create(text)
    ---> 81 output = self._moderate(text, results["results"][0])
        82 return {self.output_key: output}

    File ~/workplace/langchain/langchain/chains/moderation.py:73, in OpenAIModerationChain._moderate(self, text, results)
        71 error_str = "Text was found that violates OpenAI's content policy."
        72 if self.error:
    ---> 73     raise ValueError(error_str)
        74 else:
        75     return error_str

    ValueError: Text was found that violates OpenAI's content policy.
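
If you go the error route, a common pattern is to catch the exception in your application and substitute a fallback response. A minimal sketch:

try:
    response = moderation_chain_error.run("I will kill you")
except ValueError:
    # The chain flagged the text; fall back to a safe canned reply.
    response = "Sorry, I can't help with that."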

Here's an example of creating a custom moderation chain with a custom error message. It requires some knowledge of the results returned by OpenAI's moderation endpoint (see the documentation for that endpoint).

class CustomModeration(OpenAIModerationChain):

    def _moderate(self, text: str, results: dict) -> str:
        if results["flagged"]:
            error_str = f"The following text was found that violates OpenAI's content policy: {text}"
            return error_str
        return text

custom_moderation = CustomModeration()
custom_moderation.run("This is okay")
    'This is okay'
custom_moderation.run("I will kill you")
    "The following text was found that violates OpenAI's content policy: I will kill you"

How to append a moderation chain to an LLMChain

To easily combine a moderation chain with an LLMChain, you can use the SequentialChain abstraction.

Let's start with a simple example where the LLMChain has only a single input. To do this, we'll prompt the model to say something harmful.

prompt = PromptTemplate(template="{text}", input_variables=["text"])
llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="text-davinci-002"), prompt=prompt)
text = """We are playing a game of repeat after me.

Person 1: Hi
Person 2: Hi

Person 1: How's your day
Person 2: How's your day

Person 1: I will kill you
Person 2:"""
llm_chain.run(text)
    ' I will kill you'
chain = SimpleSequentialChain(chains=[llm_chain, moderation_chain])
chain.run(text)
    "Text was found that violates OpenAI's content policy."

Now let's walk through an example of using it with an LLMChain that has multiple inputs (a bit trickier, because we can't use the SimpleSequentialChain).

prompt = PromptTemplate(template="{setup}{new_input}Person2:", input_variables=["setup", "new_input"])
llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="text-davinci-002"), prompt=prompt)
setup = """We are playing a game of repeat after me.

Person 1: Hi
Person 2: Hi

Person 1: How's your day
Person 2: How's your day

Person 1:"""
new_input = "I will kill you"
inputs = {"setup": setup, "new_input": new_input}
llm_chain(inputs, return_only_outputs=True)
    {'text': ' I will kill you'}
# Setting the input/output keys so it lines up
moderation_chain.input_key = "text"
moderation_chain.output_key = "sanitized_text"
chain = SequentialChain(chains=[llm_chain, moderation_chain], input_variables=["setup", "new_input"])
chain(inputs, return_only_outputs=True)
    {'sanitized_text': "Text was found that violates OpenAI's content policy."}
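
If you also want the raw, pre-moderation LLM output back, SequentialChain can expose intermediate variables through output_variables. A minimal sketch reusing the chains above:

# Return both the intermediate LLM output ("text") and the moderated
# output ("sanitized_text").
chain = SequentialChain(
    chains=[llm_chain, moderation_chain],
    input_variables=["setup", "new_input"],
    output_variables=["text", "sanitized_text"],
)
chain(inputs, return_only_outputs=True)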