
Beam

This example calls the Beam API wrapper to deploy a gpt2 LLM instance in a cloud deployment and make subsequent calls to it. It requires installing the Beam library and registering a Beam client ID and client secret. Calling the wrapper creates and runs a model instance, returning text related to the prompt. Additional calls can then be made against the Beam API directly.

Create an account if you don't have one already, then grab your API keys from the dashboard.

Install the Beam CLI:

curl https://raw.githubusercontent.com/slai-labs/get-beam/main/get-beam.sh -sSfL | sh

Register your API keys and set the Beam client ID and secret environment variables:

import os
import subprocess

beam_client_id = "<your Beam client ID>"
beam_client_secret = "<your Beam client secret>"

# Set the environment variables
os.environ["BEAM_CLIENT_ID"] = beam_client_id
os.environ["BEAM_CLIENT_SECRET"] = beam_client_secret

# Run the beam configure command to register the credentials with the CLI
subprocess.run(
    f"beam configure --clientId={beam_client_id} --clientSecret={beam_client_secret}",
    shell=True,
    check=True,
)
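Before running `beam configure`, it can help to fail fast if either credential was left unset or still contains a placeholder. A minimal stdlib-only check (the environment variable names mirror the snippet above; the helper name is an illustration, not part of Beam):

```python
import os


def require_beam_credentials() -> tuple[str, str]:
    """Return (client_id, client_secret) from the environment, or raise.

    Treats empty values and untouched placeholders like "<your ...>" as missing.
    """
    client_id = os.environ.get("BEAM_CLIENT_ID", "")
    client_secret = os.environ.get("BEAM_CLIENT_SECRET", "")
    missing = [
        name
        for name, value in [
            ("BEAM_CLIENT_ID", client_id),
            ("BEAM_CLIENT_SECRET", client_secret),
        ]
        if not value or value.startswith("<")
    ]
    if missing:
        raise RuntimeError(f"Missing Beam credentials: {', '.join(missing)}")
    return client_id, client_secret
```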

Install the Beam SDK:

pip install beam-sdk

Deploy and call Beam directly from langchain!

Note that a cold start may take a few minutes to return a response, but subsequent calls will be faster!

from langchain.llms.beam import Beam

llm = Beam(
    model_name="gpt2",
    name="langchain-gpt2-test",
    cpu=8,
    memory="32Gi",
    gpu="A10G",
    python_version="python3.8",
    python_packages=[
        "diffusers[torch]>=0.10",
        "transformers",
        "torch",
        "pillow",
        "accelerate",
        "safetensors",
        "xformers",
    ],
    max_length="50",
    verbose=False,
)

llm._deploy()

response = llm._call("Running machine learning on a remote GPU")

print(response)
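As noted above, once the app is deployed you can also call the Beam API directly, without going through the LangChain wrapper. The sketch below uses only the standard library and assumes the deployed app accepts a JSON body with a `prompt` field and HTTP Basic auth built from the client ID and secret; the endpoint URL is a hypothetical placeholder (Beam prints the real one after deployment), and the function names are illustrative:

```python
import base64
import json
import urllib.request


def beam_auth_header(client_id: str, client_secret: str) -> str:
    """Build an HTTP Basic auth header value from Beam credentials."""
    token = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return f"Basic {token}"


def call_beam_app(url: str, prompt: str, client_id: str, client_secret: str) -> str:
    """POST a prompt to a deployed Beam app and return the raw response body."""
    request = urllib.request.Request(
        url,
        data=json.dumps({"prompt": prompt}).encode(),
        headers={
            "Authorization": beam_auth_header(client_id, client_secret),
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return response.read().decode()


# Example (hypothetical URL — use the one printed by llm._deploy()):
# print(call_beam_app(
#     "https://apps.beam.cloud/<app-id>",
#     "Running machine learning on a remote GPU",
#     beam_client_id,
#     beam_client_secret,
# ))
```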