
MLflow

MLflow is a versatile, expandable, open-source platform for managing workflows and artifacts across the machine learning lifecycle. It has built-in integrations with many popular ML libraries, but can be used with any library, algorithm, or deployment tool. It is designed to be extensible, so you can write plugins to support new workflows, libraries, and tools.

This notebook goes over how to track your LangChain experiments into your MLflow Server.

External examples

MLflow provides several examples for the LangChain integration.

Example

pip install azureml-mlflow
pip install pandas
pip install textstat
pip install spacy
pip install openai
pip install google-search-results
python -m spacy download en_core_web_sm
import os

os.environ["MLFLOW_TRACKING_URI"] = ""
os.environ["OPENAI_API_KEY"] = ""
os.environ["SERPAPI_API_KEY"] = ""
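The empty strings above are placeholders for your own credentials. A minimal sketch of a pre-flight check that fails fast when a key is still unset (the helper `missing_keys` is illustrative, not part of MLflow or LangChain):

```python
import os

# Illustrative helper: list required variables that are unset or empty,
# so a run can fail fast instead of erroring mid-experiment.
REQUIRED = ["MLFLOW_TRACKING_URI", "OPENAI_API_KEY", "SERPAPI_API_KEY"]


def missing_keys(env=os.environ):
    """Return the names of required variables that are missing or blank."""
    return [k for k in REQUIRED if not env.get(k)]
```

Calling `missing_keys()` before constructing the handler lets you raise a clear error up front rather than hitting an authentication failure in the middle of a run.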
from langchain.callbacks import MlflowCallbackHandler
from langchain.llms import OpenAI
"""Main function.

This function is used to try the callback handler.
Scenarios:
1. OpenAI LLM
2. Multiple sub-chains on multiple generations
3. Agent with tools
"""
mlflow_callback = MlflowCallbackHandler()
llm = OpenAI(
    model_name="gpt-3.5-turbo", temperature=0, callbacks=[mlflow_callback], verbose=True
)
# SCENARIO 1 - LLM
llm_result = llm.generate(["Tell me a joke"])

mlflow_callback.flush_tracker(llm)
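`flush_tracker` logs the records the handler has buffered during the run to MLflow and resets the handler for the next scenario. The buffering pattern can be sketched in plain Python (the class and method names here are illustrative stand-ins, not the actual `MlflowCallbackHandler` API):

```python
# Toy sketch of the flush-tracker pattern: callbacks accumulate records,
# and flush() hands them off and clears the buffer for the next scenario.
class BufferingHandler:
    def __init__(self):
        self.records = []
        self.finished = False

    def on_llm_end(self, text):
        # In the real handler this would capture generations, token usage, etc.
        self.records.append(text)

    def flush(self, finish=False):
        """Return the buffered records and reset; finish=True ends the run."""
        logged, self.records = self.records, []
        self.finished = finish
        return logged
```

The real handler takes `finish=True` on its final flush (as in scenario 3 below) to close out the MLflow run.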
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
# SCENARIO 2 - Chain
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=[mlflow_callback])

test_prompts = [
    {
        "title": "documentary about good video games that push the boundary of game design"
    },
]
synopsis_chain.apply(test_prompts)
mlflow_callback.flush_tracker(synopsis_chain)
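For each dict in `test_prompts`, the chain fills the `{title}` slot of the template before calling the LLM. The substitution itself is ordinary Python string formatting, sketched here without langchain:

```python
# Plain str.format stand-in for PromptTemplate substitution.
template = (
    "You are a playwright. Given the title of play, it is your job to write "
    "a synopsis for that title.\n"
    "Title: {title}\n"
    "Playwright: This is a synopsis for the above play:"
)
filled = template.format(title="Hamlet")
```

The filled string is what the LLM actually receives, and what the callback handler records for each prompt.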
from langchain.agents import initialize_agent, load_tools
from langchain.agents import AgentType
# SCENARIO 3 - Agent with Tools
tools = load_tools(["serpapi", "llm-math"], llm=llm, callbacks=[mlflow_callback])
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    callbacks=[mlflow_callback],
    verbose=True,
)
agent.run(
    "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
)
mlflow_callback.flush_tracker(agent, finish=True)