Deep Lake by Activeloop

Deep Lake by Activeloop is a multimodal vector store that holds embeddings and their metadata, including text, json, images, audio, video, and more. It saves data locally, in your cloud, or on Activeloop storage, and it performs hybrid search over embeddings and their attributes.

This notebook showcases basic functionality related to Deep Lake. While Deep Lake can store embeddings, it is capable of storing any type of data. It is a serverless data lake with version control, a query engine, and streaming dataloaders for deep learning frameworks.

For more information, please see the Deep Lake documentation and API reference.

pip install openai 'deeplake[enterprise]' tiktoken
import getpass
import os

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import DeepLake

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
activeloop_token = getpass.getpass("activeloop token:")
embeddings = OpenAIEmbeddings()
from langchain.document_loaders import TextLoader

loader = TextLoader("../../../state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()

Create a dataset locally at ./my_deeplake/, then run a similarity search. The Deep Lake + LangChain integration uses Deep Lake datasets under the hood, so dataset and vector store are used interchangeably. To create a dataset in your own cloud, or in Deep Lake storage, adjust the path accordingly.

db = DeepLake(
    dataset_path="./my_deeplake/", embedding_function=embeddings, overwrite=True
)
db.add_documents(docs)
# or shorter
# db = DeepLake.from_documents(docs, dataset_path="./my_deeplake/", embedding=embeddings, overwrite=True)
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)

Later, you can reload the dataset without recomputing embeddings.

db = DeepLake(
    dataset_path="./my_deeplake/", embedding_function=embeddings, read_only=True
)
docs = db.similarity_search(query)

Deep Lake is currently single writer, multiple reader. Setting read_only=True helps to avoid acquiring the writer lock.

Retrieval Question/Answering

from langchain.chains import RetrievalQA
from langchain.llms import OpenAIChat

qa = RetrievalQA.from_chain_type(
    llm=OpenAIChat(model="gpt-3.5-turbo"),
    chain_type="stuff",
    retriever=db.as_retriever(),
)
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
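
If you also want to see which chunks the answer was grounded in, the chain can return its source documents. A minimal sketch, assuming the db and imports from above (return_source_documents and search_kwargs are standard LangChain options):

# Return the retrieved chunks alongside the answer.
qa_with_sources = RetrievalQA.from_chain_type(
    llm=OpenAIChat(model="gpt-3.5-turbo"),
    chain_type="stuff",
    retriever=db.as_retriever(search_kwargs={"k": 4}),  # retrieve 4 chunks per query
    return_source_documents=True,
)
result = qa_with_sources({"query": query})
print(result["result"])
print([d.metadata for d in result["source_documents"]])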

Attribute based filtering in metadata

Let's create another vector store containing metadata with the year the documents were created.

import random

for d in docs:
    d.metadata["year"] = random.randint(2012, 2014)

db = DeepLake.from_documents(
    docs, embeddings, dataset_path="./my_deeplake/", overwrite=True
)
db.similarity_search(
    "What did the president say about Ketanji Brown Jackson",
    filter={"metadata": {"year": 2013}},
)

Choosing distance function

Distance functions: L2 for Euclidean, L1 for nuclear norm, max for L-infinity distance, cos for cosine similarity, dot for dot product.

db.similarity_search(
    "What did the president say about Ketanji Brown Jackson?", distance_metric="cos"
)
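
If you also want the scores under the chosen metric, the LangChain wrapper exposes similarity_search_with_score. A minimal sketch, assuming the db from above; passing distance_metric through this method is an assumption to verify on your version:

# Return (document, score) pairs instead of documents only.
docs_and_scores = db.similarity_search_with_score(
    "What did the president say about Ketanji Brown Jackson?",
    distance_metric="cos",
)
docs_and_scores[0]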

Maximal marginal relevance

Using maximal marginal relevance:

db.max_marginal_relevance_search(
    "What did the president say about Ketanji Brown Jackson?"
)
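
Maximal marginal relevance first fetches a larger candidate pool by similarity, then re-ranks it for diversity. The standard k and fetch_k parameters control the two stages; a sketch, assuming the db from above:

# Fetch 20 candidates by similarity, then re-rank down to the 4 most diverse results.
db.max_marginal_relevance_search(
    "What did the president say about Ketanji Brown Jackson?",
    k=4,
    fetch_k=20,
)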

Delete dataset

db.delete_dataset()

If deletion fails, you can also force delete:

DeepLake.force_delete_by_path("./my_deeplake")

Deep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or in memory

By default, Deep Lake datasets are stored locally. To store them in memory, in the Deep Lake Managed DB, or in any object storage, you can provide the corresponding path and credentials when creating the vector store. Some paths require registering with Activeloop and creating an API token, which can be retrieved here.

os.environ["ACTIVELOOP_TOKEN"] = activeloop_token
# Embed and store the texts
username = "<username>"  # your username on app.activeloop.ai
dataset_path = f"hub://{username}/langchain_testing_python"  # could also be ./local/path (faster locally), s3://bucket/path/to/dataset, gcs://path/to/dataset, etc.

docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
db = DeepLake(dataset_path=dataset_path, embedding_function=embeddings, overwrite=True)
db.add_documents(docs)
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)

tensor_db execution option

In order to utilize Deep Lake's Managed Tensor Database, it is necessary to specify the runtime parameter as {'tensor_db': True} during the creation of the vector store. This configuration enables the execution of queries on the Managed Tensor Database, rather than on the client side. It should be noted that this functionality is not applicable to datasets stored locally or in memory. If a vector store has already been created outside of the Managed Tensor Database, it can be transferred into it by following the prescribed steps.

# Embed and store the texts
username = "adilkhan"  # your username on app.activeloop.ai
dataset_path = f"hub://{username}/langchain_testing"

docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
db = DeepLake(
    dataset_path=dataset_path,
    embedding_function=embeddings,
    overwrite=True,
    runtime={"tensor_db": True},
)
db.add_documents(docs)

TQL Search

Furthermore, queries can also be executed within the similarity_search method, where the query is specified using Deep Lake's Tensor Query Language (TQL).

search_id = db.vectorstore.dataset.id[0].numpy()
docs = db.similarity_search(
    query=None,
    tql_query=f"SELECT * WHERE id == '{search_id[0]}'",
)
docs
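
TQL can express more than id lookups, for instance filtering on a json metadata field. A hedged sketch, assuming a dataset whose metadata contains a year field as in the filtering section above; the metadata['year'] accessor follows Deep Lake's TQL syntax but should be verified against your Deep Lake version:

# Select only documents whose metadata records year 2013.
docs = db.similarity_search(
    query=None,
    tql_query="SELECT * WHERE metadata['year'] == 2013",
)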

Creating vector stores on AWS S3

dataset_path = "s3://BUCKET/langchain_test"  # could also be ./local/path (faster locally), hub://bucket/path/to/dataset, gcs://path/to/dataset, etc.

embeddings = OpenAIEmbeddings()
db = DeepLake.from_documents(
    docs,
    dataset_path=dataset_path,
    embedding=embeddings,
    overwrite=True,
    creds={
        "aws_access_key_id": os.environ["AWS_ACCESS_KEY_ID"],
        "aws_secret_access_key": os.environ["AWS_SECRET_ACCESS_KEY"],
        "aws_session_token": os.environ["AWS_SESSION_TOKEN"],  # optional
    },
)
    s3://hub-2.0-datasets-n/langchain_test loaded successfully.

    Evaluating ingest: 100%|██████████| 1/1 [00:10<00:00

    Dataset(path='s3://hub-2.0-datasets-n/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text'])

      tensor      htype      shape      dtype  compression
     -------     -------    -------    -------   -------
    embedding    generic   (4, 1536)   float32    None
       ids        text      (4, 1)       str      None
    metadata      json      (4, 1)       str      None
      text        text      (4, 1)       str      None


Deep Lake API

You can access the Deep Lake dataset at db.vectorstore.

# get the structure of the dataset
db.vectorstore.summary()

    Dataset(path='hub://adilkhan/langchain_testing', tensors=['embedding', 'id', 'metadata', 'text'])

      tensor      htype       shape      dtype  compression
     -------     -------     -------    -------   -------
    embedding   embedding  (42, 1536)   float32    None
        id         text     (42, 1)       str      None
    metadata       json     (42, 1)       str      None
      text         text     (42, 1)       str      None

# get the embeddings numpy array
embeds = db.vectorstore.dataset.embedding.numpy()
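
With the raw numpy array in hand, you can score the stored vectors yourself, e.g. cosine similarity against a fresh query embedding. This computation is illustrative, not part of the Deep Lake API; it assumes the embeddings object from above:

import numpy as np

# Embed the query with the same embedding function used for the dataset.
query_vec = np.array(embeddings.embed_query("Ketanji Brown Jackson"))

# Cosine similarity between the query and every stored embedding.
scores = embeds @ query_vec / (
    np.linalg.norm(embeds, axis=1) * np.linalg.norm(query_vec)
)
print(scores.argsort()[::-1][:4])  # indices of the 4 closest chunks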

Transferring a local dataset to the cloud

Copy an already created dataset to the cloud. You can also transfer from the cloud to local.

import deeplake

username = "davitbun"  # your username on app.activeloop.ai
source = f"hub://{username}/langchain_test"  # could be local, s3, gcs, etc.
destination = f"hub://{username}/langchain_test_copy"  # could be local, s3, gcs, etc.

deeplake.deepcopy(src=source, dest=destination, overwrite=True)
    Copying dataset: 100%|██████████| 56/56 [00:38<00:00

    This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test_copy
    Your Deep Lake dataset has been successfully created!
    The dataset is private so make sure you are logged in!

    Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])
db = DeepLake(dataset_path=destination, embedding_function=embeddings)
db.add_documents(docs)

    This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test_copy

    hub://davitbun/langchain_test_copy loaded successfully.

    Deep Lake Dataset in hub://davitbun/langchain_test_copy already exists, loading from the storage

    Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])

      tensor      htype      shape      dtype  compression
     -------     -------    -------    -------   -------
    embedding    generic   (4, 1536)   float32    None
       ids        text      (4, 1)       str      None
    metadata      json      (4, 1)       str      None
      text        text      (4, 1)       str      None

    Evaluating ingest: 100%|██████████| 1/1 [00:31<00:00

    Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])

      tensor      htype      shape      dtype  compression
     -------     -------    -------    -------   -------
    embedding    generic   (8, 1536)   float32    None
       ids        text      (8, 1)       str      None
    metadata      json      (8, 1)       str      None
      text        text      (8, 1)       str      None

    ['ad42f3fe-e188-11ed-b66d-41c5f7b85421',
     'ad42f3ff-e188-11ed-b66d-41c5f7b85421',
     'ad42f400-e188-11ed-b66d-41c5f7b85421',
     'ad42f401-e188-11ed-b66d-41c5f7b85421']