
Helicone

This page covers how to use the Helicone ecosystem within LangChain.

What is Helicone?

Helicone is an open-source observability platform that proxies your OpenAI traffic and provides key insights into your spend, latency, and usage.


Quick start

With your LangChain environment set up, you only need to add the following environment variable:

export OPENAI_API_BASE="https://oai.hconeai.com/v1"

Now head over to helicone.ai to create your account, and add your OpenAI API key within the Helicone dashboard to view your logs.
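If you would rather configure the base URL from Python than from the shell, a minimal sketch (set the variable before the OpenAI client reads its configuration):

```python
import os

# Route OpenAI requests through the Helicone proxy; this has the same
# effect as the shell `export` above, but only for this process.
os.environ["OPENAI_API_BASE"] = "https://oai.hconeai.com/v1"

print(os.environ["OPENAI_API_BASE"])  # → https://oai.hconeai.com/v1
```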


How to enable Helicone caching

from langchain.llms import OpenAI
import openai

# Route OpenAI requests through the Helicone proxy
openai.api_base = "https://oai.hconeai.com/v1"

# The Helicone-Cache-Enabled header turns on response caching for this LLM
llm = OpenAI(temperature=0.9, headers={"Helicone-Cache-Enabled": "true"})
text = "What is a helicone?"
print(llm(text))

Helicone caching docs

How to use Helicone custom properties

from langchain.llms import OpenAI
import openai

# Route OpenAI requests through the Helicone proxy
openai.api_base = "https://oai.hconeai.com/v1"

# Helicone-Property-* headers attach custom metadata to each request,
# which you can then filter and group by in the Helicone dashboard
llm = OpenAI(
    temperature=0.9,
    headers={
        "Helicone-Property-Session": "24",
        "Helicone-Property-Conversation": "support_issue_2",
        "Helicone-Property-App": "mobile",
    },
)
text = "What is a helicone?"
print(llm(text))

Helicone property docs