url (string) | markdown (string) | screenshotUrl (null) | crawl (dict) | metadata (dict) | text (string) |
---|---|---|---|---|---|
https://python.langchain.com/docs/integrations/providers/tencent/ | ## Tencent
> [Tencent Holdings Ltd. (Wikipedia)](https://en.wikipedia.org/wiki/Tencent) (Chinese: 腾讯; pinyin: Téngxùn) is a Chinese multinational technology conglomerate and holding company headquartered in Shenzhen. `Tencent` is one of the highest grossing multimedia companies in the world based on revenue. It is also the world's largest company in the video game industry based on its equity investments.
## Chat model[](#chat-model "Direct link to Chat model")
> [Tencent's hybrid model API](https://cloud.tencent.com/document/product/1729) (`Hunyuan API`) implements dialogue communication, content generation, analysis and understanding, and can be widely used in various scenarios such as intelligent customer service, intelligent marketing, role playing, advertising, copyrighting, product description, script creation, resume generation, article writing, code generation, data analysis, and content analysis.
For more information, see [this notebook](https://python.langchain.com/docs/integrations/chat/tencent_hunyuan/)
```
from langchain_community.chat_models import ChatHunyuan
```
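A minimal usage sketch (the credential values are placeholders obtained from the Tencent Cloud console):
```
from langchain_community.chat_models import ChatHunyuan
from langchain_core.messages import HumanMessage

# Placeholder credentials; replace with your own Tencent Cloud Hunyuan credentials.
chat = ChatHunyuan(
    hunyuan_app_id=111111111,
    hunyuan_secret_id="YOUR_SECRET_ID",
    hunyuan_secret_key="YOUR_SECRET_KEY",
)
chat.invoke([HumanMessage(content="Translate this sentence from English to French: I love programming.")])
```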
## Document Loaders[](#document-loaders "Direct link to Document Loaders")
### Tencent COS[](#tencent-cos "Direct link to Tencent COS")
> [Tencent Cloud Object Storage (COS)](https://www.tencentcloud.com/products/cos) is a distributed storage service that enables you to store any amount of data from anywhere via HTTP/HTTPS protocols. `COS` has no restrictions on data structure or format. It also has no bucket size limit and partition management, making it suitable for virtually any use case, such as data delivery, data processing, and data lakes. COS provides a web-based console, multi-language SDKs and APIs, command line tool, and graphical tools. It works well with Amazon S3 APIs, allowing you to quickly access community tools and plugins.
Install the Python SDK:
```
pip install cos-python-sdk-v5
```
#### Tencent COS Directory[](#tencent-cos-directory "Direct link to Tencent COS Directory")
For more information, see [this notebook](https://python.langchain.com/docs/integrations/document_loaders/tencent_cos_directory/)
```
from langchain_community.document_loaders import TencentCOSDirectoryLoader
from qcloud_cos import CosConfig
```
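A minimal usage sketch (region, credentials, bucket, and prefix are placeholders):
```
from qcloud_cos import CosConfig
from langchain_community.document_loaders import TencentCOSDirectoryLoader

# Placeholder COS connection settings.
conf = CosConfig(
    Region="ap-guangzhou",
    SecretId="YOUR_SECRET_ID",
    SecretKey="YOUR_SECRET_KEY",
)
loader = TencentCOSDirectoryLoader(conf=conf, bucket="your-bucket", prefix="your-prefix")
docs = loader.load()
```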
#### Tencent COS File[](#tencent-cos-file "Direct link to Tencent COS File")
For more information, see [this notebook](https://python.langchain.com/docs/integrations/document_loaders/tencent_cos_file/)
```
from langchain_community.document_loaders import TencentCOSFileLoader
from qcloud_cos import CosConfig
```
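The file loader follows the same pattern; a sketch with placeholder values:
```
from qcloud_cos import CosConfig
from langchain_community.document_loaders import TencentCOSFileLoader

# Placeholder COS connection settings and object key.
conf = CosConfig(Region="ap-guangzhou", SecretId="YOUR_SECRET_ID", SecretKey="YOUR_SECRET_KEY")
loader = TencentCOSFileLoader(conf=conf, bucket="your-bucket", key="path/to/document.txt")
docs = loader.load()
```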
## Vector Store[](#vector-store "Direct link to Vector Store")
### Tencent VectorDB[](#tencent-vectordb "Direct link to Tencent VectorDB")
> [Tencent Cloud VectorDB](https://www.tencentcloud.com/products/vdb) is a fully managed, self-developed enterprise-level distributed database service dedicated to storing, retrieving, and analyzing multidimensional vector data. The database supports a variety of index types and similarity calculation methods, and a single index supports 1 billion vectors, millions of QPS, and millisecond query latency. `Tencent Cloud Vector Database` can not only provide an external knowledge base for large models and improve the accuracy of large models' answers, but also be widely used in AI fields such as recommendation systems, NLP services, computer vision, and intelligent customer service.
Install the Python SDK:
For more information, see [this notebook](https://python.langchain.com/docs/integrations/vectorstores/tencentvectordb/)
```
from langchain_community.vectorstores import TencentVectorDB
```
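A minimal usage sketch (assumes the `tcvectordb` SDK is installed; the connection details are placeholders, and `docs` and `embeddings` are an existing document list and embeddings instance):
```
from langchain_community.vectorstores import TencentVectorDB
from langchain_community.vectorstores.tencentvectordb import ConnectionParams

# Placeholder connection details for a Tencent Cloud VectorDB instance.
conn_params = ConnectionParams(
    url="http://your-vectordb-endpoint",
    key="YOUR_API_KEY",
    username="root",
    timeout=20,
)
vector_db = TencentVectorDB.from_documents(docs, embeddings, connection_params=conn_params)
results = vector_db.similarity_search("query text", k=4)
```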
## Chat loader[](#chat-loader "Direct link to Chat loader")
### WeChat[](#wechat "Direct link to WeChat")
> [WeChat](https://www.wechat.com/) or `Weixin` in Chinese is a Chinese instant messaging, social media, and mobile payment app developed by `Tencent`.
See a [usage example](https://python.langchain.com/docs/integrations/chat_loaders/wechat/). | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:41:58.478Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/tencent/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/tencent/",
"description": "Tencent Holdings Ltd. (Wikipedia) (Chinese Téngxùn)",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3572",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"tencent\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:41:57 GMT",
"etag": "W/\"10ca03ff287cd14fc5f7b0df77c83f9d\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::v2ch6-1713753717648-f6f2007adecf"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/tencent/",
"property": "og:url"
},
{
"content": "Tencent | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Tencent Holdings Ltd. (Wikipedia) (Chinese Téngxùn)",
"property": "og:description"
}
],
"title": "Tencent | 🦜️🔗 LangChain"
} | Tencent
Tencent Holdings Ltd. (Wikipedia) (Chinese: 腾讯; pinyin: Téngxùn) is a Chinese multinational technology conglomerate and holding company headquartered in Shenzhen. Tencent is one of the highest grossing multimedia companies in the world based on revenue. It is also the world's largest company in the video game industry based on its equity investments.
Chat model
Tencent's hybrid model API (Hunyuan API) implements dialogue communication, content generation, analysis and understanding, and can be widely used in various scenarios such as intelligent customer service, intelligent marketing, role playing, advertising, copyrighting, product description, script creation, resume generation, article writing, code generation, data analysis, and content analysis.
For more information, see this notebook
from langchain_community.chat_models import ChatHunyuan
Document Loaders
Tencent COS
Tencent Cloud Object Storage (COS) is a distributed storage service that enables you to store any amount of data from anywhere via HTTP/HTTPS protocols. COS has no restrictions on data structure or format. It also has no bucket size limit and partition management, making it suitable for virtually any use case, such as data delivery, data processing, and data lakes. COS provides a web-based console, multi-language SDKs and APIs, command line tool, and graphical tools. It works well with Amazon S3 APIs, allowing you to quickly access community tools and plugins.
Install the Python SDK:
pip install cos-python-sdk-v5
Tencent COS Directory
For more information, see this notebook
from langchain_community.document_loaders import TencentCOSDirectoryLoader
from qcloud_cos import CosConfig
Tencent COS File
For more information, see this notebook
from langchain_community.document_loaders import TencentCOSFileLoader
from qcloud_cos import CosConfig
Vector Store
Tencent VectorDB
Tencent Cloud VectorDB is a fully managed, self-developed enterprise-level distributed database service dedicated to storing, retrieving, and analyzing multidimensional vector data. The database supports a variety of index types and similarity calculation methods, and a single index supports 1 billion vectors, millions of QPS, and millisecond query latency. Tencent Cloud Vector Database can not only provide an external knowledge base for large models and improve the accuracy of large models' answers, but also be widely used in AI fields such as recommendation systems, NLP services, computer vision, and intelligent customer service.
Install the Python SDK:
For more information, see this notebook
from langchain_community.vectorstores import TencentVectorDB
Chat loader
WeChat
WeChat or Weixin in Chinese is a Chinese instant messaging, social media, and mobile payment app developed by Tencent.
See a usage example. |
https://python.langchain.com/docs/integrations/providers/supabase/ | We need to install the `supabase` python package.
```
from langchain_community.vectorstores import SupabaseVectorStore
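# A minimal usage sketch (assumes a Supabase project with a `documents` table and a
# `match_documents` function set up as described in the Supabase vector store guide;
# the URL, key, and `embeddings` object are placeholders).
from supabase.client import create_client

supabase_client = create_client("https://<project>.supabase.co", "<service-role-key>")
vector_store = SupabaseVectorStore(
    client=supabase_client,
    embedding=embeddings,
    table_name="documents",
    query_name="match_documents",
)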
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:41:59.007Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/supabase/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/supabase/",
"description": "Supabase is an open-source Firebase alternative.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3574",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"supabase\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:41:58 GMT",
"etag": "W/\"c69e1f31827999a4d29d86e54bcf2cce\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::c8dx6-1713753718776-09e49aa52adb"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/supabase/",
"property": "og:url"
},
{
"content": "Supabase (Postgres) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Supabase is an open-source Firebase alternative.",
"property": "og:description"
}
],
"title": "Supabase (Postgres) | 🦜️🔗 LangChain"
} | We need to install the supabase python package.
from langchain_community.vectorstores import SupabaseVectorStore |
https://python.langchain.com/docs/integrations/providers/symblai_nebula/ | This page covers how to use the [Nebula](https://symbl.ai/nebula) ([Symbl.ai](https://symbl.ai/)'s LLM) ecosystem within LangChain. It is broken into two parts: installation and setup, and then references to specific Nebula wrappers.
```
from langchain_community.llms import Nebula
llm = Nebula()
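# A minimal usage sketch; the API key is assumed to be available in the environment
# (it can also be passed explicitly, e.g. Nebula(nebula_api_key="<key>")).
print(llm.invoke("Identify the main follow-up actions discussed in this call transcript: ..."))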
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:41:59.207Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/symblai_nebula/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/symblai_nebula/",
"description": "This page covers how to use Nebula, Symbl.ai's LLM, ecosystem within LangChain.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4622",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"symblai_nebula\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:41:58 GMT",
"etag": "W/\"8b6a62763f9227d1218d76fc81213545\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::xhgjf-1713753718949-894365243d5f"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/symblai_nebula/",
"property": "og:url"
},
{
"content": "Nebula | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This page covers how to use Nebula, Symbl.ai's LLM, ecosystem within LangChain.",
"property": "og:description"
}
],
"title": "Nebula | 🦜️🔗 LangChain"
} | This page covers how to use the Nebula (Symbl.ai's LLM) ecosystem within LangChain. It is broken into two parts: installation and setup, and then references to specific Nebula wrappers.
from langchain_community.llms import Nebula
llm = Nebula() |
https://python.langchain.com/docs/integrations/providers/tigergraph/ | To utilize the `TigerGraph InquiryAI` functionality, you can import `TigerGraph` from `langchain_community.graphs`.
```
import pyTigerGraph as tg

conn = tg.TigerGraphConnection(host="DATABASE_HOST_HERE", graphname="GRAPH_NAME_HERE", username="USERNAME_HERE", password="PASSWORD_HERE")

### ==== CONFIGURE INQUIRYAI HOST ====
conn.ai.configureInquiryAIHost("INQUIRYAI_HOST_HERE")

from langchain_community.graphs import TigerGraph

graph = TigerGraph(conn)
result = graph.query("How many servers are there?")
print(result)
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:41:59.280Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/tigergraph/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/tigergraph/",
"description": "What is TigerGraph?",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3573",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"tigergraph\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:41:59 GMT",
"etag": "W/\"42eae32e709be2204a3a111701541a03\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::dgnz9-1713753719215-f5f0064ee3af"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/tigergraph/",
"property": "og:url"
},
{
"content": "TigerGraph | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "What is TigerGraph?",
"property": "og:description"
}
],
"title": "TigerGraph | 🦜️🔗 LangChain"
} | To utilize the TigerGraph InquiryAI functionality, you can import TigerGraph from langchain_community.graphs.
import pyTigerGraph as tg
conn = tg.TigerGraphConnection(host="DATABASE_HOST_HERE", graphname="GRAPH_NAME_HERE", username="USERNAME_HERE", password="PASSWORD_HERE")
### ==== CONFIGURE INQUIRYAI HOST ====
conn.ai.configureInquiryAIHost("INQUIRYAI_HOST_HERE")
from langchain_community.graphs import TigerGraph
graph = TigerGraph(conn)
result = graph.query("How many servers are there?")
print(result) |
https://python.langchain.com/docs/integrations/providers/tensorflow_datasets/ | You need to install the `tensorflow` and `tensorflow-datasets` python packages.
```
pip install tensorflow tensorflow-datasets
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:41:59.614Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/tensorflow_datasets/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/tensorflow_datasets/",
"description": "TensorFlow Datasets is a collection of datasets ready to use,",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4622",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"tensorflow_datasets\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:41:59 GMT",
"etag": "W/\"7a71837dbea63d79f46cc4cdd4a9d9e5\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::sgxwt-1713753719306-b9abc5ffd4ae"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/tensorflow_datasets/",
"property": "og:url"
},
{
"content": "TensorFlow Datasets | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "TensorFlow Datasets is a collection of datasets ready to use,",
"property": "og:description"
}
],
"title": "TensorFlow Datasets | 🦜️🔗 LangChain"
} | You need to install the tensorflow and tensorflow-datasets python packages.
pip install tensorflow tensorflow-datasets |
https://python.langchain.com/docs/integrations/providers/tidb/ | [TiDB Cloud](https://tidbcloud.com/) is a comprehensive Database-as-a-Service (DBaaS) solution that provides dedicated and serverless options. `TiDB Serverless` is now integrating a built-in vector search into the MySQL landscape. With this enhancement, you can seamlessly develop AI applications using `TiDB Serverless` without the need for a new database or additional technical stacks. Be among the first to experience it by joining the [waitlist for the private beta](https://tidb.cloud/ai).
You have to get the connection details for the TiDB database. Visit [TiDB Cloud](https://tidbcloud.com/) to get the connection details.
## Document loader

```python
from langchain_community.document_loaders import TiDBLoader
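# A minimal usage sketch (the connection string, query, and table are placeholders;
# the loader is assumed to take a SQLAlchemy-style connection string and a SQL query):
loader = TiDBLoader(
    connection_string="mysql+pymysql://user:password@host:4000/test",
    query="SELECT * FROM my_table;",
)
docs = loader.load()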
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:41:59.566Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/tidb/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/tidb/",
"description": "TiDB Cloud, is a comprehensive Database-as-a-Service (DBaaS) solution,",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3574",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"tidb\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:41:59 GMT",
"etag": "W/\"890cd8ee82615f8dd5d1adfa205e1fe5\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::5wljk-1713753719293-5aec2ccb81f6"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/tidb/",
"property": "og:url"
},
{
"content": "TiDB | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "TiDB Cloud, is a comprehensive Database-as-a-Service (DBaaS) solution,",
"property": "og:description"
}
],
"title": "TiDB | 🦜️🔗 LangChain"
} | TiDB Cloud is a comprehensive Database-as-a-Service (DBaaS) solution that provides dedicated and serverless options. TiDB Serverless is now integrating a built-in vector search into the MySQL landscape. With this enhancement, you can seamlessly develop AI applications using TiDB Serverless without the need for a new database or additional technical stacks. Be among the first to experience it by joining the waitlist for the private beta.
You have to get the connection details for the TiDB database. Visit TiDB Cloud to get the connection details.
## Document loader
```python
from langchain_community.document_loaders import TiDBLoader |
https://python.langchain.com/docs/integrations/providers/tigris/ | [Tigris](https://tigrisdata.com/) is an open-source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications. `Tigris` eliminates the infrastructure complexity of managing, operating, and synchronizing multiple tools, allowing you to focus on building great applications instead.
```
pip install tigrisdb openapi-schema-pydantic
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:41:59.690Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/tigris/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/tigris/",
"description": "Tigris is an open-source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"tigris\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:41:59 GMT",
"etag": "W/\"caa61c2d84eb73d571331ae3b39ffb34\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::9vmcv-1713753719457-2fea7e8fe6a3"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/tigris/",
"property": "og:url"
},
{
"content": "Tigris | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Tigris is an open-source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications.",
"property": "og:description"
}
],
"title": "Tigris | 🦜️🔗 LangChain"
} | Tigris is an open-source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications. Tigris eliminates the infrastructure complexity of managing, operating, and synchronizing multiple tools, allowing you to focus on building great applications instead.
pip install tigrisdb openapi-schema-pydantic |
https://python.langchain.com/docs/integrations/providers/tomarkdown/ | ## 2Markdown
> [2markdown](https://2markdown.com/) service transforms website content into structured markdown files.
## Installation and Setup[](#installation-and-setup "Direct link to Installation and Setup")
We need the `API key`. See the [instructions on how to get it](https://2markdown.com/login).
## Document Loader[](#document-loader "Direct link to Document Loader")
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/tomarkdown/).
```
from langchain_community.document_loaders import ToMarkdownLoader
```
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:00.063Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/tomarkdown/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/tomarkdown/",
"description": "2markdown service transforms website content into structured markdown files.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4622",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"tomarkdown\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:41:59 GMT",
"etag": "W/\"9b29e3a160892c9fbf6d86d71185f299\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::5fbxs-1713753719794-1c0055c680ca"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/tomarkdown/",
"property": "og:url"
},
{
"content": "2Markdown | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "2markdown service transforms website content into structured markdown files.",
"property": "og:description"
}
],
"title": "2Markdown | 🦜️🔗 LangChain"
} | 2Markdown
2markdown service transforms website content into structured markdown files.
Installation and Setup
We need the API key. See the instructions on how to get it.
Document Loader
See a usage example.
from langchain_community.document_loaders import ToMarkdownLoader |
https://python.langchain.com/docs/integrations/providers/trello/ | ## Trello
> [Trello](https://www.atlassian.com/software/trello) is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a "board" where users can create lists and cards to represent their tasks and activities. The TrelloLoader allows us to load cards from a `Trello` board.
## Installation and Setup[](#installation-and-setup "Direct link to Installation and Setup")
```
pip install py-trello beautifulsoup4
```
See [setup instructions](https://python.langchain.com/docs/integrations/document_loaders/trello/).
## Document Loader[](#document-loader "Direct link to Document Loader")
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/trello/).
```
from langchain_community.document_loaders import TrelloLoader
```
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:00.170Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/trello/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/trello/",
"description": "Trello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a \"board\" where users can create lists and cards to represent their tasks and activities.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3574",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"trello\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:41:59 GMT",
"etag": "W/\"8c995d7696935377ffce6655f9cae7ff\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::h6m2t-1713753719970-8b8e87182e8c"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/trello/",
"property": "og:url"
},
{
"content": "Trello | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Trello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a \"board\" where users can create lists and cards to represent their tasks and activities.",
"property": "og:description"
}
],
"title": "Trello | 🦜️🔗 LangChain"
} | Trello
Trello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a "board" where users can create lists and cards to represent their tasks and activities. The TrelloLoader allows us to load cards from a Trello board.
Installation and Setup
pip install py-trello beautifulsoup4
See setup instructions.
Document Loader
See a usage example.
from langchain_community.document_loaders import TrelloLoader |
https://python.langchain.com/docs/integrations/providers/together/ | [Together AI](https://together.ai/) is a cloud platform for building and running generative AI.
It makes it easy to fine-tune or run leading open-source models with a couple lines of code. We have integrated the world’s leading open-source models, including `Llama-2`, `RedPajama`, `Falcon`, `Alpaca`, `Stable Diffusion XL`, and more. Read mo
API key can be passed in as init param `together_api_key` or set as environment variable `TOGETHER_API_KEY`.
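A minimal chat-model sketch, assuming the API key is set as described above (the model name is a placeholder for any model hosted on Together AI):
```
from langchain_together import ChatTogether

# Assumes TOGETHER_API_KEY is set in the environment (or pass the API key as noted above).
llm = ChatTogether(model="meta-llama/Llama-3-8b-chat-hf")
print(llm.invoke("What are some fun things to do in New York?").content)
```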
```
%pip install --upgrade --quiet langchain-together
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:00.542Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/together/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/together/",
"description": "Together AI is a cloud platform for building",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4717",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"together\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:41:59 GMT",
"etag": "W/\"531f19569dfc5e3b5b7df671d7994fdc\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::c9jwb-1713753719986-330359f2b23e"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/together/",
"property": "og:url"
},
{
"content": "Together AI | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Together AI is a cloud platform for building",
"property": "og:description"
}
],
"title": "Together AI | 🦜️🔗 LangChain"
} | Together AI is a cloud platform for building and running generative AI.
It makes it easy to fine-tune or run leading open-source models with a couple lines of code. We have integrated the world’s leading open-source models, including Llama-2, RedPajama, Falcon, Alpaca, Stable Diffusion XL, and more. Read mo
API key can be passed in as init param together_api_key or set as environment variable TOGETHER_API_KEY.
%pip install --upgrade --quiet langchain-together |
https://python.langchain.com/docs/integrations/providers/trulens/ | ## TruLens
> [TruLens](https://trulens.org/) is an [open-source](https://github.com/truera/trulens) package that provides instrumentation and evaluation tools for large language model (LLM) based applications.
This page covers how to use [TruLens](https://trulens.org/) to evaluate and track LLM apps built on langchain.
## Installation and Setup[](#installation-and-setup "Direct link to Installation and Setup")
Install the `trulens-eval` python package.
## Quickstart[](#quickstart "Direct link to Quickstart")
See the integration details in the [TruLens documentation](https://www.trulens.org/trulens_eval/getting_started/quickstarts/langchain_quickstart/).
### Tracking[](#tracking "Direct link to Tracking")
Once you've created your LLM chain, you can use TruLens for evaluation and tracking. TruLens has a number of [out-of-the-box Feedback Functions](https://www.trulens.org/trulens_eval/evaluation/feedback_functions/), and is also an extensible framework for LLM evaluation.
Create the feedback functions:
```
from trulens_eval.feedback import Feedback, Huggingface, OpenAI

# Initialize HuggingFace-based feedback function collection class:
hugs = Huggingface()
openai = OpenAI()

# Define a language match feedback function using HuggingFace.
lang_match = Feedback(hugs.language_match).on_input_output()
# By default this will check language match on the main app input and main app
# output.

# Question/answer relevance between overall question and answer.
qa_relevance = Feedback(openai.relevance).on_input_output()
# By default this will evaluate feedback on main app input and main app output.

# Toxicity of input
toxicity = Feedback(openai.toxicity).on_input()
```
### Chains[](#chains "Direct link to Chains")
After you've set up Feedback Function(s) for evaluating your LLM, you can wrap your application with TruChain to get detailed tracing, logging and evaluation of your LLM app.
Note: the code for the `chain` creation is in the [TruLens documentation](https://www.trulens.org/trulens_eval/getting_started/quickstarts/langchain_quickstart/).
```
from trulens_eval import TruChain

# wrap your chain with TruChain
truchain = TruChain(
    chain,
    app_id='Chain1_ChatApplication',
    feedbacks=[lang_match, qa_relevance, toxicity]
)
# Note: any `feedbacks` specified here will be evaluated and logged whenever the chain is used.
truchain("que hora es?")
```
### Evaluation[](#evaluation "Direct link to Evaluation")
Now you can explore your LLM-based application!
Doing so will help you understand how your LLM application is performing at a glance. As you iterate new versions of your LLM application, you can compare their performance across all of the different quality metrics you've set up. You'll also be able to view evaluations at a record level, and explore the chain metadata for each record.
```
from trulens_eval import Tru

tru = Tru()
tru.run_dashboard()  # open a Streamlit app to explore
```
For more information on TruLens, visit [trulens.org](https://www.trulens.org/)
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:00.716Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/trulens/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/trulens/",
"description": "TruLens is an open-source package that provides instrumentation and evaluation tools for large language model (LLM) based applications.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3574",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"trulens\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:00 GMT",
"etag": "W/\"10c3726447eab1e19053ab0386b22e6c\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::5czlr-1713753720286-d0201f27db15"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/trulens/",
"property": "og:url"
},
{
"content": "TruLens | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "TruLens is an open-source package that provides instrumentation and evaluation tools for large language model (LLM) based applications.",
"property": "og:description"
}
],
"title": "TruLens | 🦜️🔗 LangChain"
} | TruLens
TruLens is an open-source package that provides instrumentation and evaluation tools for large language model (LLM) based applications.
This page covers how to use TruLens to evaluate and track LLM apps built on langchain.
Installation and Setup
Install the trulens-eval python package.
Quickstart
See the integration details in the TruLens documentation.
Tracking
Once you've created your LLM chain, you can use TruLens for evaluation and tracking. TruLens has a number of out-of-the-box Feedback Functions, and is also an extensible framework for LLM evaluation.
Create the feedback functions:
from trulens_eval.feedback import Feedback, Huggingface, OpenAI
# Initialize HuggingFace-based feedback function collection class:
hugs = Huggingface()
openai = OpenAI()
# Define a language match feedback function using HuggingFace.
lang_match = Feedback(hugs.language_match).on_input_output()
# By default this will check language match on the main app input and main app
# output.
# Question/answer relevance between overall question and answer.
qa_relevance = Feedback(openai.relevance).on_input_output()
# By default this will evaluate feedback on main app input and main app output.
# Toxicity of input
toxicity = Feedback(openai.toxicity).on_input()
Chains
After you've set up Feedback Function(s) for evaluating your LLM, you can wrap your application with TruChain to get detailed tracing, logging and evaluation of your LLM app.
Note: the code for the chain creation is in the TruLens documentation.
from trulens_eval import TruChain
# wrap your chain with TruChain
truchain = TruChain(
chain,
app_id='Chain1_ChatApplication',
feedbacks=[lang_match, qa_relevance, toxicity]
)
# Note: any `feedbacks` specified here will be evaluated and logged whenever the chain is used.
truchain("que hora es?")
Evaluation
Now you can explore your LLM-based application!
Doing so will help you understand how your LLM application is performing at a glance. As you iterate new versions of your LLM application, you can compare their performance across all of the different quality metrics you've set up. You'll also be able to view evaluations at a record level, and explore the chain metadata for each record.
from trulens_eval import Tru
tru = Tru()
tru.run_dashboard() # open a Streamlit app to explore
For more information on TruLens, visit trulens.org |
https://python.langchain.com/docs/integrations/providers/trubrics/ | ## Trubrics
> [Trubrics](https://trubrics.com/) is an LLM user analytics platform that lets you collect, analyse and manage user prompts & feedback on AI models.
>
> Check out [Trubrics repo](https://github.com/trubrics/trubrics-sdk) for more information on `Trubrics`.
## Installation and Setup[](#installation-and-setup "Direct link to Installation and Setup")
We need to install the `trubrics` Python package:
## Callbacks[](#callbacks "Direct link to Callbacks")
See a [usage example](https://python.langchain.com/docs/integrations/callbacks/trubrics/).
```
from langchain.callbacks import TrubricsCallbackHandler
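# A minimal usage sketch (assumes TRUBRICS_EMAIL and TRUBRICS_PASSWORD are set for the
# Trubrics account; the OpenAI LLM is just an example model to attach the callback to).
from langchain_openai import OpenAI

llm = OpenAI(callbacks=[TrubricsCallbackHandler()])
llm.invoke("Tell me a joke")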
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:00.615Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/trubrics/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/trubrics/",
"description": "Trubrics is an LLM user analytics platform that lets you collect, analyse and manage user",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3574",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"trubrics\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:00 GMT",
"etag": "W/\"91829bc9c90d69337552e1e085cacd8c\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::t9fbx-1713753720300-626f57d5f771"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/trubrics/",
"property": "og:url"
},
{
"content": "Trubrics | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Trubrics is an LLM user analytics platform that lets you collect, analyse and manage user",
"property": "og:description"
}
],
"title": "Trubrics | 🦜️🔗 LangChain"
} | Trubrics
Trubrics is an LLM user analytics platform that lets you collect, analyse and manage user prompts & feedback on AI models.
Check out Trubrics repo for more information on Trubrics.
Installation and Setup
We need to install the trubrics Python package:
Callbacks
See a usage example.
from langchain.callbacks import TrubricsCallbackHandler |
https://python.langchain.com/docs/integrations/providers/typesense/ | ## Typesense
> [Typesense](https://typesense.org/) is an open-source, in-memory search engine that you can either [self-host](https://typesense.org/docs/guide/install-typesense.html#option-2-local-machine-self-hosting) or run on [Typesense Cloud](https://cloud.typesense.org/). `Typesense` focuses on performance by storing the entire index in RAM (with a backup on disk) and also focuses on providing an out-of-the-box developer experience by simplifying available options and setting good defaults.
## Installation and Setup[](#installation-and-setup "Direct link to Installation and Setup")
```
pip install typesense openapi-schema-pydantic
```
## Vector Store[](#vector-store "Direct link to Vector Store")
See a [usage example](https://python.langchain.com/docs/integrations/vectorstores/typesense/).
```
from langchain_community.vectorstores import Typesense
```
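A minimal usage sketch (assumes a running Typesense instance; the connection values are placeholders, `docs` is a list of LangChain documents, and OpenAI embeddings are just one possible embedding choice):
```
from langchain_community.vectorstores import Typesense
from langchain_openai import OpenAIEmbeddings

# Placeholder Typesense connection settings.
docsearch = Typesense.from_documents(
    docs,
    OpenAIEmbeddings(),
    typesense_client_params={
        "host": "localhost",
        "port": "8108",
        "protocol": "http",
        "typesense_api_key": "xyz",
        "typesense_collection_name": "lang-chain",
    },
)
results = docsearch.similarity_search("What did the president say?", k=2)
```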
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:01.114Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/typesense/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/typesense/",
"description": "Typesense is an open-source, in-memory search engine, that you can either",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"typesense\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:00 GMT",
"etag": "W/\"89917761e8c0ba6af5fd4d976df68db5\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::njn2b-1713753720640-565d6c3a7ced"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/typesense/",
"property": "og:url"
},
{
"content": "Typesense | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Typesense is an open-source, in-memory search engine, that you can either",
"property": "og:description"
}
],
"title": "Typesense | 🦜️🔗 LangChain"
} | Typesense
Typesense is an open-source, in-memory search engine that you can either self-host or run on Typesense Cloud. Typesense focuses on performance by storing the entire index in RAM (with a backup on disk) and also focuses on providing an out-of-the-box developer experience by simplifying available options and setting good defaults.
Installation and Setup
pip install typesense openapi-schema-pydantic
Vector Store
See a usage example.
from langchain_community.vectorstores import Typesense |
https://python.langchain.com/docs/integrations/providers/twitter/ | We must initialize the loader with the `Twitter API` token, and we need to set up the Twitter `username`.
```
from langchain_community.document_loaders import TwitterTweetLoader
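# A minimal usage sketch (the bearer token and usernames are placeholders):
loader = TwitterTweetLoader.from_bearer_token(
    oauth2_bearer_token="YOUR_BEARER_TOKEN",
    twitter_users=["elonmusk"],
    number_tweets=50,
)
documents = loader.load()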
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:01.290Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/twitter/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/twitter/",
"description": "Twitter is an online social media and social networking service.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4623",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"twitter\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:01 GMT",
"etag": "W/\"83cb503991d6bb4a32e370616a0f97ce\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::cvhgj-1713753721141-8c15239209f1"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/twitter/",
"property": "og:url"
},
{
"content": "Twitter | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Twitter is an online social media and social networking service.",
"property": "og:description"
}
],
"title": "Twitter | 🦜️🔗 LangChain"
} | We must initialize the loader with the Twitter API token, and we need to set up the Twitter username.
from langchain_community.document_loaders import TwitterTweetLoader |
https://python.langchain.com/docs/integrations/providers/unstructured/ | ## Unstructured
> The `unstructured` package from [Unstructured.IO](https://www.unstructured.io/) extracts clean text from raw source documents like PDFs and Word documents. This page covers how to use the [`unstructured`](https://github.com/Unstructured-IO/unstructured) ecosystem within LangChain.
## Installation and Setup[](#installation-and-setup "Direct link to Installation and Setup")
If you are using a loader that runs locally, use the following steps to get `unstructured` and its dependencies running locally.
* Install the Python SDK with `pip install unstructured`.
* You can install document specific dependencies with extras, i.e. `pip install "unstructured[docx]"`.
* To install the dependencies for all document types, use `pip install "unstructured[all-docs]"`.
* Install the following system dependencies if they are not already available on your system. Depending on what document types you're parsing, you may not need all of these.
* `libmagic-dev` (filetype detection)
* `poppler-utils` (images and PDFs)
* `tesseract-ocr` (images and PDFs)
* `libreoffice` (MS Office docs)
* `pandoc` (EPUBs)
If you want to get up and running with less setup, you can simply run `pip install unstructured` and use `UnstructuredAPIFileLoader` or `UnstructuredAPIFileIOLoader`. That will process your document using the hosted Unstructured API.
The `Unstructured API` requires API keys to make requests. You can request an API key [here](https://unstructured.io/api-key-hosted) and start using it today! Check out the README [here](https://github.com/Unstructured-IO/unstructured-api) to get started making API calls. We'd love to hear your feedback; let us know how it goes in our [community slack](https://join.slack.com/t/unstructuredw-kbe4326/shared_invite/zt-1x7cgo0pg-PTptXWylzPQF9xZolzCnwQ). And stay tuned for improvements to both quality and performance! Check out the instructions [here](https://github.com/Unstructured-IO/unstructured-api#dizzy-instructions-for-using-the-docker-image) if you'd like to self-host the Unstructured API or run it locally.
## Data Loaders[](#data-loaders "Direct link to Data Loaders")
The primary usage of `Unstructured` is in data loaders.
### UnstructuredAPIFileIOLoader[](#unstructuredapifileioloader "Direct link to UnstructuredAPIFileIOLoader")
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/unstructured_file/#unstructured-api).
```
from langchain_community.document_loaders import UnstructuredAPIFileIOLoader
```
### UnstructuredAPIFileLoader[](#unstructuredapifileloader "Direct link to UnstructuredAPIFileLoader")
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/unstructured_file/#unstructured-api).
```
from langchain_community.document_loaders import UnstructuredAPIFileLoader
```
### UnstructuredCHMLoader[](#unstructuredchmloader "Direct link to UnstructuredCHMLoader")
`CHM` means `Microsoft Compiled HTML Help`.
See a usage example in the API documentation.
```
from langchain_community.document_loaders import UnstructuredCHMLoader
```
### UnstructuredCSVLoader[](#unstructuredcsvloader "Direct link to UnstructuredCSVLoader")
A `comma-separated values` (`CSV`) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas.
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/csv/#unstructuredcsvloader).
```
from langchain_community.document_loaders import UnstructuredCSVLoader
```
### UnstructuredEmailLoader[](#unstructuredemailloader "Direct link to UnstructuredEmailLoader")
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/email/).
```
from langchain_community.document_loaders import UnstructuredEmailLoader
```
### UnstructuredEPubLoader[](#unstructuredepubloader "Direct link to UnstructuredEPubLoader")
[EPUB](https://en.wikipedia.org/wiki/EPUB) is an `e-book file format` that uses the “.epub” file extension. The term is short for electronic publication and is sometimes styled `ePub`. `EPUB` is supported by many e-readers, and compatible software is available for most smartphones, tablets, and computers.
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/epub/).
```
from langchain_community.document_loaders import UnstructuredEPubLoader
```
### UnstructuredExcelLoader[](#unstructuredexcelloader "Direct link to UnstructuredExcelLoader")
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/microsoft_excel/).
```
from langchain_community.document_loaders import UnstructuredExcelLoader
```
### UnstructuredFileIOLoader[](#unstructuredfileioloader "Direct link to UnstructuredFileIOLoader")
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/google_drive/#passing-in-optional-file-loaders).
```
from langchain_community.document_loaders import UnstructuredFileIOLoader
```
### UnstructuredFileLoader[](#unstructuredfileloader "Direct link to UnstructuredFileLoader")
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/unstructured_file/).
```
from langchain_community.document_loaders import UnstructuredFileLoader
```
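A minimal usage sketch (the file path is a placeholder; `mode="elements"` is optional and keeps individual document elements separate rather than combining them):
```
from langchain_community.document_loaders import UnstructuredFileLoader

# Load a local file with the unstructured partitioning pipeline.
loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt", mode="elements")
docs = loader.load()
print(docs[0].page_content[:100])
```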
### UnstructuredHTMLLoader[](#unstructuredhtmlloader "Direct link to UnstructuredHTMLLoader")
See a [usage example](https://python.langchain.com/docs/modules/data_connection/document_loaders/html/).
```
from langchain_community.document_loaders import UnstructuredHTMLLoader
```
### UnstructuredImageLoader[](#unstructuredimageloader "Direct link to UnstructuredImageLoader")
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/image/).
```
from langchain_community.document_loaders import UnstructuredImageLoader
```
### UnstructuredMarkdownLoader[](#unstructuredmarkdownloader "Direct link to UnstructuredMarkdownLoader")
See a [usage example](https://python.langchain.com/docs/integrations/vectorstores/starrocks/).
```
from langchain_community.document_loaders import UnstructuredMarkdownLoader
```
### UnstructuredODTLoader[](#unstructuredodtloader "Direct link to UnstructuredODTLoader")
The `Open Document Format for Office Applications (ODF)`, also known as `OpenDocument`, is an open file format for word processing documents, spreadsheets, presentations, and graphics that uses ZIP-compressed XML files. It was developed with the aim of providing an open, XML-based file format specification for office applications.
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/odt/).
```
from langchain_community.document_loaders import UnstructuredODTLoader
```
### UnstructuredOrgModeLoader[](#unstructuredorgmodeloader "Direct link to UnstructuredOrgModeLoader")
An [Org Mode](https://en.wikipedia.org/wiki/Org-mode) document is a document editing, formatting, and organizing mode, designed for notes, planning, and authoring within the free software text editor Emacs.
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/org_mode/).
```
from langchain_community.document_loaders import UnstructuredOrgModeLoader
```
### UnstructuredPDFLoader[](#unstructuredpdfloader "Direct link to UnstructuredPDFLoader")
See a [usage example](https://python.langchain.com/docs/modules/data_connection/document_loaders/pdf/#using-unstructured).
```
from langchain_community.document_loaders import UnstructuredPDFLoader
```
### UnstructuredPowerPointLoader[](#unstructuredpowerpointloader "Direct link to UnstructuredPowerPointLoader")
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/microsoft_powerpoint/).
```
from langchain_community.document_loaders import UnstructuredPowerPointLoader
```
### UnstructuredRSTLoader[](#unstructuredrstloader "Direct link to UnstructuredRSTLoader")
A `reStructured Text` (`RST`) file is a file format for textual data used primarily in the Python programming language community for technical documentation.
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/rst/).
```
from langchain_community.document_loaders import UnstructuredRSTLoader
```
### UnstructuredRTFLoader[](#unstructuredrtfloader "Direct link to UnstructuredRTFLoader")
See a usage example in the API documentation.
```
from langchain_community.document_loaders import UnstructuredRTFLoader
```
### UnstructuredTSVLoader[](#unstructuredtsvloader "Direct link to UnstructuredTSVLoader")
A `tab-separated values` (`TSV`) file is a simple, text-based file format for storing tabular data. Records are separated by newlines, and values within a record are separated by tab characters.
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/tsv/).
```
from langchain_community.document_loaders import UnstructuredTSVLoader
```
### UnstructuredURLLoader[](#unstructuredurlloader "Direct link to UnstructuredURLLoader")
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/url/).
```
from langchain_community.document_loaders import UnstructuredURLLoader
```
### UnstructuredWordDocumentLoader[](#unstructuredworddocumentloader "Direct link to UnstructuredWordDocumentLoader")
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/microsoft_word/#using-unstructured).
```
from langchain_community.document_loaders import UnstructuredWordDocumentLoader
```
### UnstructuredXMLLoader[](#unstructuredxmlloader "Direct link to UnstructuredXMLLoader")
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/xml/).
```
from langchain_community.document_loaders import UnstructuredXMLLoader
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:01.694Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/unstructured/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/unstructured/",
"description": "The unstructured package from",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "8803",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"unstructured\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:01 GMT",
"etag": "W/\"355eccf990e63cc67187109a8ffea02c\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::rsd2t-1713753721400-4cf7977f93f5"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/unstructured/",
"property": "og:url"
},
{
"content": "Unstructured | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "The unstructured package from",
"property": "og:description"
}
],
"title": "Unstructured | 🦜️🔗 LangChain"
} | Unstructured
The unstructured package from Unstructured.IO extracts clean text from raw source documents like PDFs and Word documents. This page covers how to use the unstructured ecosystem within LangChain.
Installation and Setup
If you are using a loader that runs locally, use the following steps to get unstructured and its dependencies running locally.
Install the Python SDK with pip install unstructured.
You can install document specific dependencies with extras, i.e. pip install "unstructured[docx]".
To install the dependencies for all document types, use pip install "unstructured[all-docs]".
Install the following system dependencies if they are not already available on your system. Depending on what document types you're parsing, you may not need all of these.
libmagic-dev (filetype detection)
poppler-utils (images and PDFs)
tesseract-ocr (images and PDFs)
libreoffice (MS Office docs)
pandoc (EPUBs)
If you want to get up and running with less set up, you can simply run pip install unstructured and use UnstructuredAPIFileLoader or UnstructuredAPIFileIOLoader. That will process your document using the hosted Unstructured API.
The Unstructured API requires API keys to make requests. You can request an API key here and start using it today! Checkout the README here here to get started making API calls. We'd love to hear your feedback, let us know how it goes in our community slack. And stay tuned for improvements to both quality and performance! Check out the instructions here if you'd like to self-host the Unstructured API or run it locally.
Data Loaders
The primary usage of the Unstructured is in data loaders.
UnstructuredAPIFileIOLoader
See a usage example.
from langchain_community.document_loaders import UnstructuredAPIFileIOLoader
UnstructuredAPIFileLoader
See a usage example.
from langchain_community.document_loaders import UnstructuredAPIFileLoader
UnstructuredCHMLoader
CHM means Microsoft Compiled HTML Help.
See a usage example in the API documentation.
from langchain_community.document_loaders import UnstructuredCHMLoader
UnstructuredCSVLoader
A comma-separated values (CSV) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas.
See a usage example.
from langchain_community.document_loaders import UnstructuredCSVLoader
UnstructuredEmailLoader
See a usage example.
from langchain_community.document_loaders import UnstructuredEmailLoader
UnstructuredEPubLoader
EPUB is an e-book file format that uses the “.epub” file extension. The term is short for electronic publication and is sometimes styled ePub. EPUB is supported by many e-readers, and compatible software is available for most smartphones, tablets, and computers.
See a usage example.
from langchain_community.document_loaders import UnstructuredEPubLoader
UnstructuredExcelLoader
See a usage example.
from langchain_community.document_loaders import UnstructuredExcelLoader
UnstructuredFileIOLoader
See a usage example.
from langchain_community.document_loaders import UnstructuredFileIOLoader
UnstructuredFileLoader
See a usage example.
from langchain_community.document_loaders import UnstructuredFileLoader
UnstructuredHTMLLoader
See a usage example.
from langchain_community.document_loaders import UnstructuredHTMLLoader
UnstructuredImageLoader
See a usage example.
from langchain_community.document_loaders import UnstructuredImageLoader
UnstructuredMarkdownLoader
See a usage example.
from langchain_community.document_loaders import UnstructuredMarkdownLoader
UnstructuredODTLoader
The Open Document Format for Office Applications (ODF), also known as OpenDocument, is an open file format for word processing documents, spreadsheets, presentations and graphics and using ZIP-compressed XML files. It was developed with the aim of providing an open, XML-based file format specification for office applications.
See a usage example.
from langchain_community.document_loaders import UnstructuredODTLoader
UnstructuredOrgModeLoader
An Org Mode document is a document editing, formatting, and organizing mode, designed for notes, planning, and authoring within the free software text editor Emacs.
See a usage example.
from langchain_community.document_loaders import UnstructuredOrgModeLoader
UnstructuredPDFLoader
See a usage example.
from langchain_community.document_loaders import UnstructuredPDFLoader
UnstructuredPowerPointLoader
See a usage example.
from langchain_community.document_loaders import UnstructuredPowerPointLoader
UnstructuredRSTLoader
A reStructured Text (RST) file is a file format for textual data used primarily in the Python programming language community for technical documentation.
See a usage example.
from langchain_community.document_loaders import UnstructuredRSTLoader
UnstructuredRTFLoader
See a usage example in the API documentation.
from langchain_community.document_loaders import UnstructuredRTFLoader
UnstructuredTSVLoader
A tab-separated values (TSV) file is a simple, text-based file format for storing tabular data. Records are separated by newlines, and values within a record are separated by tab characters.
See a usage example.
from langchain_community.document_loaders import UnstructuredTSVLoader
UnstructuredURLLoader
See a usage example.
from langchain_community.document_loaders import UnstructuredURLLoader
UnstructuredWordDocumentLoader
See a usage example.
from langchain_community.document_loaders import UnstructuredWordDocumentLoader
UnstructuredXMLLoader
See a usage example.
from langchain_community.document_loaders import UnstructuredXMLLoader |
## Upstage
[Upstage](https://upstage.ai/) is a leading artificial intelligence (AI) company specializing in delivering above-human-grade performance LLM components.
## Solar LLM[](#solar-llm "Direct link to Solar LLM")
**Solar Mini Chat** is a fast yet powerful advanced large language model focusing on English and Korean. It has been specifically fine-tuned for multi-turn chat purposes, showing enhanced performance across a wide range of natural language processing tasks, like multi-turn conversation or tasks that require an understanding of long contexts, such as RAG (Retrieval-Augmented Generation), compared to other models of a similar size. This fine-tuning equips it with the ability to handle longer conversations more effectively, making it particularly adept for interactive applications.
## Installation and Setup[](#installation-and-setup "Direct link to Installation and Setup")
Install the `langchain-upstage` package:
```
pip install -qU langchain-core langchain-upstage
```
Get an [access token](https://console.upstage.ai/) and set it as an environment variable (`UPSTAGE_API_KEY`)
## Upstage LangChain integrations[](#upstage-langchain-integrations "Direct link to Upstage LangChain integrations")
| API | Description | Import | Example usage |
| --- | --- | --- | --- |
| Chat | Build assistants using Solar Mini Chat | `from langchain_upstage import ChatUpstage` | [Go](https://python.langchain.com/docs/integrations/chat/upstage/) |
| Text Embedding | Embed strings to vectors | `from langchain_upstage import UpstageEmbeddings` | [Go](https://python.langchain.com/docs/integrations/text_embedding/upstage/) |
See the [documentation](https://developers.upstage.ai/) for more details about the features.
## Quick Examples[](#quick-examples "Direct link to Quick Examples")
### Environment Setup[](#environment-setup "Direct link to Environment Setup")
```
import os

os.environ["UPSTAGE_API_KEY"] = "YOUR_API_KEY"
```
### Chat[](#chat "Direct link to Chat")
```
from langchain_upstage import ChatUpstage

chat = ChatUpstage()
response = chat.invoke("Hello, how are you?")
print(response)
```
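`ChatUpstage` can also be used as a component in a LangChain Expression Language (LCEL) chain. A minimal sketch, assuming `UPSTAGE_API_KEY` is set as above (the prompt text is only an illustration):

```
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_upstage import ChatUpstage

# prompt -> Solar Mini Chat -> plain string
prompt = ChatPromptTemplate.from_template("Translate the following to Korean: {text}")
chain = prompt | ChatUpstage() | StrOutputParser()

print(chain.invoke({"text": "Hello, how are you?"}))
```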
### Text embedding[](#text-embedding "Direct link to Text embedding")
```
from langchain_upstage import UpstageEmbeddings

embeddings = UpstageEmbeddings()

doc_result = embeddings.embed_documents(
    ["Sam is a teacher.", "This is another document"]
)
print(doc_result)

query_result = embeddings.embed_query("What does Sam do?")
print(query_result)
```
## Upstash Redis

Upstash offers developers serverless databases and messaging platforms to build powerful applications without having to worry about the operational complexity of running databases at scale.
All Upstash-LangChain integrations are based on the `upstash-redis` Python SDK, which is used as a wrapper for LangChain. The SDK connects to an Upstash Redis database using the `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` parameters from the console. A significant advantage is that the SDK uses a REST API, so it can run on serverless platforms, at the edge, or on any other platform that does not support TCP connections.
```
import langchain
from upstash_redis import Redis
from langchain_community.cache import UpstashRedisCache  # Upstash-backed LLM cache

URL = "<UPSTASH_REDIS_REST_URL>"
TOKEN = "<UPSTASH_REDIS_REST_TOKEN>"

langchain.llm_cache = UpstashRedisCache(redis_=Redis(url=URL, token=TOKEN))
```
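Once the cache is set, LLM and chat model calls are routed through it. A minimal sketch of the effect, assuming `ChatOpenAI` and an `OPENAI_API_KEY` are available (any LangChain LLM works the same way):

```
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo")

# First call: answered by the model and written to Upstash Redis over REST.
print(llm.invoke("Tell me a one-line joke about Redis."))

# Second identical call: served from the Upstash Redis cache.
print(llm.invoke("Tell me a one-line joke about Redis."))
```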
## Vearch

[Vearch](https://github.com/vearch/vearch) is a scalable distributed system for efficient similarity search of deep learning vectors.
The Vearch Python SDK enables Vearch to be used locally. The SDK can be installed with `pip install vearch`.
```
from langchain_community.vectorstores import Vearch
```
## Vectara
> [Vectara](https://vectara.com/) is the trusted GenAI platform that provides an easy-to-use API for document indexing and querying.
Vectara provides an end-to-end managed service for Retrieval Augmented Generation or [RAG](https://vectara.com/grounded-generation/), which includes:
1. A way to extract text from document files and chunk them into sentences.
2. The state-of-the-art [Boomerang](https://vectara.com/how-boomerang-takes-retrieval-augmented-generation-to-the-next-level-via-grounded-generation/) embeddings model. Each text chunk is encoded into a vector embedding using Boomerang, and stored in the Vectara internal knowledge (vector+text) store
3. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments (including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) and [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/))
4. An option to create [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents, including citations.
See the [Vectara API documentation](https://docs.vectara.com/docs/) for more information on how to use the API.
This notebook shows how to use functionality related to `Vectara`'s integration with LangChain. Specifically, we will demonstrate how to use chaining with [LangChain's Expression Language](https://python.langchain.com/docs/expression_language/) and Vectara's integrated summarization capability.
## Setup
You will need a Vectara account to use Vectara with LangChain. To get started, use the following steps:
1. [Sign up](https://www.vectara.com/integrations/langchain) for a Vectara account if you don’t already have one. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.
2. Within your account you can create one or more corpora. Each corpus represents an area that stores text data upon ingest from input documents. To create a corpus, use the **“Create Corpus”** button. You then provide a name to your corpus as well as a description. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right on the top.
3. Next you’ll need to create API keys to access the corpus. Click on the **“Authorization”** tab in the corpus view and then the **“Create API Key”** button. Give your key a name, and choose whether you want query only or query+index for your key. Click “Create” and you now have an active API key. Keep this key confidential.
To use LangChain with Vectara, you’ll need to have these three values: customer ID, corpus ID and api\_key. You can provide those to LangChain in two ways:
1. Include in your environment these three variables: `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID` and `VECTARA_API_KEY`.
> For example, you can set these variables using os.environ and getpass as follows:
```
import os
import getpass

os.environ["VECTARA_CUSTOMER_ID"] = getpass.getpass("Vectara Customer ID:")
os.environ["VECTARA_CORPUS_ID"] = getpass.getpass("Vectara Corpus ID:")
os.environ["VECTARA_API_KEY"] = getpass.getpass("Vectara API Key:")
```
2. Add them to the Vectara vectorstore constructor:
```
vectorstore = Vectara(
    vectara_customer_id=vectara_customer_id,
    vectara_corpus_id=vectara_corpus_id,
    vectara_api_key=vectara_api_key,
)
```
```
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import Vectara
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
```
First we load the state-of-the-union text into Vectara. Note that we use the `from_files` interface which does not require any local processing or chunking - Vectara receives the file content and performs all the necessary pre-processing, chunking and embedding of the file into its knowledge store.
```
vectara = Vectara.from_files(["state_of_the_union.txt"])
```
We now create a Vectara retriever and specify that:

* It should return only the top 3 `Document` matches
* For the summary, it should use the top 5 results and respond in English
```
summary_config = {"is_enabled": True, "max_results": 5, "response_lang": "eng"}retriever = vectara.as_retriever( search_kwargs={"k": 3, "summary_config": summary_config})
```
When using summarization with Vectara, the retriever responds with a list of `Document` objects:

1. The first `k` documents are the ones that match the query (as we are used to with a standard vector store).
2. With summary enabled, an additional `Document` object is appended, which includes the summary text. This `Document` has the metadata field `summary` set to True.
Let’s define two utility functions to split those out:
```
def get_sources(documents):
    return documents[:-1]


def get_summary(documents):
    return documents[-1].page_content


query_str = "what did Biden say?"
```
Now we can try a summary response for the query:
```
(retriever | get_summary).invoke(query_str)
```
```
'The returned results did not contain sufficient information to be summarized into a useful answer for your query. Please try a different search or restate your query differently.'
```
And if we would like to see the sources retrieved from Vectara that were used in this summary (the citations):
```
(retriever | get_sources).invoke(query_str)
```
```
[Document(page_content='When they came home, many of the world’s fittest and best trained warriors were never the same. Dizziness. \n\nA cancer that would put them in a flag-draped coffin. I know. \n\nOne of those soldiers was my son Major Beau Biden. We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. But I’m committed to finding out everything we can.', metadata={'lang': 'eng', 'section': '1', 'offset': '34652', 'len': '60', 'X-TIKA:Parsed-By': 'org.apache.tika.parser.csv.TextAndCSVParser', 'Content-Encoding': 'UTF-8', 'Content-Type': 'text/plain; charset=UTF-8', 'source': 'vectara'}), Document(page_content='The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains. And tonight I am announcing that we will join our allies in closing off American air space to all Russian flights – further isolating Russia – and adding an additional squeeze –on their economy. The Ruble has lost 30% of its value.', metadata={'lang': 'eng', 'section': '1', 'offset': '3807', 'len': '42', 'X-TIKA:Parsed-By': 'org.apache.tika.parser.csv.TextAndCSVParser', 'Content-Encoding': 'UTF-8', 'Content-Type': 'text/plain; charset=UTF-8', 'source': 'vectara'}), Document(page_content='He rejected repeated efforts at diplomacy. He thought the West and NATO wouldn’t respond. And he thought he could divide us at home. We were ready. Here is what we did. We prepared extensively and carefully.', metadata={'lang': 'eng', 'section': '1', 'offset': '2100', 'len': '42', 'X-TIKA:Parsed-By': 'org.apache.tika.parser.csv.TextAndCSVParser', 'Content-Encoding': 'UTF-8', 'Content-Type': 'text/plain; charset=UTF-8', 'source': 'vectara'})]
```
Vectara’s “RAG as a service” does a lot of the heavy lifting in creating question answering or chatbot chains. The integration with LangChain provides the option to use additional capabilities such as query pre-processing like `SelfQueryRetriever` or `MultiQueryRetriever`. Let’s look at an example of using the [MultiQueryRetriever](https://python.langchain.com/docs/modules/data_connection/retrievers/MultiQueryRetriever/).
Since MQR uses an LLM we have to set that up - here we choose `ChatOpenAI`:
```
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)
mqr = MultiQueryRetriever.from_llm(retriever=retriever, llm=llm)

(mqr | get_summary).invoke(query_str)
```
```
"President Biden has made several notable quotes and comments. He expressed a commitment to investigate the potential impact of burn pits on soldiers' health, referencing his son's brain cancer [1]. He emphasized the importance of unity among Americans, urging us to see each other as fellow citizens rather than enemies [2]. Biden also highlighted the need for schools to use funds from the American Rescue Plan to hire teachers and address learning loss, while encouraging community involvement in supporting education [3]."
```
```
(mqr | get_sources).invoke(query_str)
```
```
[Document(page_content='When they came home, many of the world’s fittest and best trained warriors were never the same. Dizziness. \n\nA cancer that would put them in a flag-draped coffin. I know. \n\nOne of those soldiers was my son Major Beau Biden. We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. But I’m committed to finding out everything we can.', metadata={'lang': 'eng', 'section': '1', 'offset': '34652', 'len': '60', 'X-TIKA:Parsed-By': 'org.apache.tika.parser.csv.TextAndCSVParser', 'Content-Encoding': 'UTF-8', 'Content-Type': 'text/plain; charset=UTF-8', 'source': 'vectara'}), Document(page_content='The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains. And tonight I am announcing that we will join our allies in closing off American air space to all Russian flights – further isolating Russia – and adding an additional squeeze –on their economy. The Ruble has lost 30% of its value.', metadata={'lang': 'eng', 'section': '1', 'offset': '3807', 'len': '42', 'X-TIKA:Parsed-By': 'org.apache.tika.parser.csv.TextAndCSVParser', 'Content-Encoding': 'UTF-8', 'Content-Type': 'text/plain; charset=UTF-8', 'source': 'vectara'}), Document(page_content='And, if Congress provides the funds we need, we’ll have new stockpiles of tests, masks, and pills ready if needed. I cannot promise a new variant won’t come. But I can promise you we’ll do everything within our power to be ready if it does. Third – we can end the shutdown of schools and businesses. We have the tools we need.', metadata={'lang': 'eng', 'section': '1', 'offset': '24753', 'len': '82', 'X-TIKA:Parsed-By': 'org.apache.tika.parser.csv.TextAndCSVParser', 'Content-Encoding': 'UTF-8', 'Content-Type': 'text/plain; charset=UTF-8', 'source': 'vectara'}), Document(page_content='The returned results did not contain sufficient information to be summarized into a useful answer for your query. Please try a different search or restate your query differently.', metadata={'summary': True}), Document(page_content='Danielle says Heath was a fighter to the very end. He didn’t know how to stop fighting, and neither did she. Through her pain she found purpose to demand we do better. Tonight, Danielle—we are. The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits.', metadata={'lang': 'eng', 'section': '1', 'offset': '35502', 'len': '58', 'X-TIKA:Parsed-By': 'org.apache.tika.parser.csv.TextAndCSVParser', 'Content-Encoding': 'UTF-8', 'Content-Type': 'text/plain; charset=UTF-8', 'source': 'vectara'}), Document(page_content='Let’s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. 
They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun.', metadata={'lang': 'eng', 'section': '1', 'offset': '26312', 'len': '89', 'X-TIKA:Parsed-By': 'org.apache.tika.parser.csv.TextAndCSVParser', 'Content-Encoding': 'UTF-8', 'Content-Type': 'text/plain; charset=UTF-8', 'source': 'vectara'}), Document(page_content='The American Rescue Plan gave schools money to hire teachers and help students make up for lost learning. I urge every parent to make sure your school does just that. And we can all play a part—sign up to be a tutor or a mentor. Children were also struggling before the pandemic. Bullying, violence, trauma, and the harms of social media.', metadata={'lang': 'eng', 'section': '1', 'offset': '33227', 'len': '61', 'X-TIKA:Parsed-By': 'org.apache.tika.parser.csv.TextAndCSVParser', 'Content-Encoding': 'UTF-8', 'Content-Type': 'text/plain; charset=UTF-8', 'source': 'vectara'})]
```
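The same retriever can also be composed into a conventional LCEL RAG chain, where an external LLM generates the answer from the retrieved sources instead of Vectara's built-in summarizer. A minimal sketch, reusing `retriever`, `get_sources`, and `query_str` from above together with `ChatOpenAI` (assumed to be configured with an OpenAI API key):

```
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context:\n\n{context}\n\nQuestion: {question}"
)

def format_docs(documents):
    # Drop the trailing summary Document and join the source texts.
    return "\n\n".join(doc.page_content for doc in get_sources(documents))

rag_chain = (
    {"context": retriever | RunnableLambda(format_docs), "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(temperature=0)
    | StrOutputParser()
)

print(rag_chain.invoke(query_str))
```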
## VDMS

The vector store is a simple wrapper around VDMS. It provides a simple interface to store and retrieve data.
```
from langchain_community.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter

loader = TextLoader("./state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

from langchain_community.vectorstores import VDMS
from langchain_community.vectorstores.vdms import VDMS_Client
from langchain_community.embeddings.huggingface import HuggingFaceEmbeddings

client = VDMS_Client("localhost", 55555)
vectorstore = VDMS.from_documents(
    docs,
    client=client,
    collection_name="langchain-demo",
    embedding_function=HuggingFaceEmbeddings(),
    engine="FaissFlat",
    distance_strategy="L2",
)

query = "What did the president say about Ketanji Brown Jackson"
results = vectorstore.similarity_search(query)
```
## UpTrain

[UpTrain](https://uptrain.ai/) is an open-source unified platform to evaluate and improve Generative AI applications. It provides grades for 20+ preconfigured evaluations (covering language, code, and embedding use cases), performs root cause analysis on failure cases, and gives insights on how to resolve them.
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:04.256Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/uptrain/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/uptrain/",
"description": "UpTrain is an open-source unified platform to evaluate and",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4625",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"uptrain\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:03 GMT",
"etag": "W/\"0a000588165810216b6ed3de66d34787\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::cfhg6-1713753723791-0112652013d4"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/uptrain/",
"property": "og:url"
},
{
"content": "UpTrain | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "UpTrain is an open-source unified platform to evaluate and",
"property": "og:description"
}
],
"title": "UpTrain | 🦜️🔗 LangChain"
} | UpTrain is an open-source unified platform to evaluate and improve Generative AI applications. It provides grades for 20+ preconfigured evaluations (covering language, code, embedding use cases), performs root cause analysis on failure cases and gives insights on how to resolve them. |
## Vectara
> [Vectara](https://vectara.com/) is the trusted GenAI platform for developers. It provides a simple API to build GenAI applications for semantic search or RAG (Retrieval-Augmented Generation).
**Vectara Overview:**
* `Vectara` is a developer-first API platform for building trusted GenAI applications.
* To use Vectara - first [sign up](https://vectara.com/integrations/langchain) and create an account. Then create a corpus and an API key for indexing and searching.
* You can use Vectara's [indexing API](https://docs.vectara.com/docs/indexing-apis/indexing) to add documents into Vectara's index
* You can use Vectara's [Search API](https://docs.vectara.com/docs/search-apis/search) to query Vectara's index (which also supports Hybrid search implicitly).
## Installation and Setup[](#installation-and-setup "Direct link to Installation and Setup")
To use `Vectara` with LangChain no special installation steps are required. To get started, [sign up](https://vectara.com/integrations/langchain) and follow our [quickstart](https://docs.vectara.com/docs/quickstart) guide to create a corpus and an API key. Once you have these, you can provide them as arguments to the Vectara vectorstore, or you can set them as environment variables.
* export `VECTARA_CUSTOMER_ID`\="your\_customer\_id"
* export `VECTARA_CORPUS_ID`\="your\_corpus\_id"
* export `VECTARA_API_KEY`\="your-vectara-api-key"
## Vectara as a Vector Store[](#vectara-as-a-vector-store "Direct link to Vectara as a Vector Store")
There exists a wrapper around the Vectara platform, allowing you to use it as a vectorstore, whether for semantic search or example selection.
To import this vectorstore:
```
from langchain_community.vectorstores import Vectara
```
To create an instance of the Vectara vectorstore:
```
vectara = Vectara(
    vectara_customer_id=customer_id,
    vectara_corpus_id=corpus_id,
    vectara_api_key=api_key,
)
```
The customer\_id, corpus\_id and api\_key are optional, and if they are not supplied will be read from the environment variables `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID` and `VECTARA_API_KEY`, respectively.
After you have the vectorstore, you can `add_texts` or `add_documents` as per the standard `VectorStore` interface, for example:
```
vectara.add_texts(["to be or not to be", "that is the question"])
```
Since Vectara supports file-upload, we also added the ability to upload files (PDF, TXT, HTML, PPT, DOC, etc.) directly. When using this method, the file is uploaded directly to the Vectara backend, processed and chunked optimally there, so you don't have to use the LangChain document loader or chunking mechanism.
As an example:
```
vectara.add_files(["path/to/file1.pdf", "path/to/file2.pdf",...])
```
To query the vectorstore, you can use the `similarity_search` method (or `similarity_search_with_score`), which takes a query string and returns a list of results:
```
results = vectara.similarity_search("what is LangChain?")
```
The results are returned as a list of relevant documents, with a relevance score for each document.
In this case, we used the default retrieval parameters, but you can also specify the following additional arguments in `similarity_search` or `similarity_search_with_score` (a usage sketch follows the list):
* `k`: number of results to return (defaults to 5)
* `lambda_val`: the [lexical matching](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) factor for hybrid search (defaults to 0.025)
* `filter`: a [filter](https://docs.vectara.com/docs/common-use-cases/filtering-by-metadata/filter-overview) to apply to the results (default None)
* `n_sentence_context`: number of sentences to include before/after the actual matching segment when returning results. This defaults to 2.
* `mmr_config`: can be used to specify MMR mode in the query.
* `is_enabled`: True or False
* `mmr_k`: number of results to use for MMR reranking
* `diversity_bias`: 0 = no diversity, 1 = full diversity. This is the lambda parameter in the MMR formula and is in the range 0...1
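A minimal sketch of a query that overrides a few of these parameters (the values shown are illustrative, not recommendations):

```
results = vectara.similarity_search_with_score(
    "what is LangChain?",
    k=10,                  # return up to 10 matching documents
    lambda_val=0.025,      # lexical matching factor for hybrid search
    n_sentence_context=3,  # sentences of context around each match
)

for doc, score in results:
    print(f"{score:.3f}  {doc.page_content[:80]}")
```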
## Vectara for Retrieval Augmented Generation (RAG)[](#vectara-for-retrieval-augmented-generation-rag "Direct link to Vectara for Retrieval Augmented Generation (RAG)")
Vectara provides a full RAG pipeline, including generative summarization. To use this pipeline, you can specify the `summary_config` argument in `similarity_search` or `similarity_search_with_score`; its fields are listed below, with a usage sketch after the list:
* `summary_config`: can be used to request an LLM summary in RAG
* `is_enabled`: True or False
* `max_results`: number of results to use for summary generation
* `response_lang`: language of the response summary, in ISO 639-2 format (e.g. 'en', 'fr', 'de', etc)
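A minimal sketch, reusing the `summary_config` shape used in the summarization example above; with summarization enabled, the generated summary is appended as the last returned document (metadata `summary: True`):

```
summary_config = {"is_enabled": True, "max_results": 5, "response_lang": "eng"}

docs = vectara.similarity_search(
    "what is LangChain?",
    summary_config=summary_config,
)

# The final Document holds the generated summary with citations.
print(docs[-1].page_content)
```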
## Example Notebooks[](#example-notebooks "Direct link to Example Notebooks")
For more detailed examples of using Vectara, see the following notebooks:
* [this notebook](https://python.langchain.com/docs/integrations/vectorstores/vectara/) shows how to use Vectara as a vectorstore for semantic search
* [this notebook](https://python.langchain.com/docs/integrations/providers/vectara/vectara_chat/) shows how to build a chatbot with Langchain and Vectara
* [this notebook](https://python.langchain.com/docs/integrations/providers/vectara/vectara_summary/) shows how to use the full Vectara RAG pipeline, including generative summarization
* [this notebook](https://python.langchain.com/docs/integrations/retrievers/self_query/vectara_self_query/) shows the self-query capability with Vectara.
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:04.625Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/vectara/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/vectara/",
"description": "Vectara is the trusted GenAI platform for developers. It provides a simple API to build GenAI applications",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3576",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"vectara\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:03 GMT",
"etag": "W/\"1fa1f314d511875593e06b070f04ea91\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::qf8zq-1713753723798-6dae1b89c63d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/vectara/",
"property": "og:url"
},
{
"content": "Vectara | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Vectara is the trusted GenAI platform for developers. It provides a simple API to build GenAI applications",
"property": "og:description"
}
],
"title": "Vectara | 🦜️🔗 LangChain"
} | Vectara
Vectara is the trusted GenAI platform for developers. It provides a simple API to build GenAI applications for semantic search or RAG (Retreieval augmented generation).
Vectara Overview:
Vectara is developer-first API platform for building trusted GenAI applications.
To use Vectara - first sign up and create an account. Then create a corpus and an API key for indexing and searching.
You can use Vectara's indexing API to add documents into Vectara's index
You can use Vectara's Search API to query Vectara's index (which also supports Hybrid search implicitly).
Installation and Setup
To use Vectara with LangChain no special installation steps are required. To get started, sign up and follow our quickstart guide to create a corpus and an API key. Once you have these, you can provide them as arguments to the Vectara vectorstore, or you can set them as environment variables.
export VECTARA_CUSTOMER_ID="your_customer_id"
export VECTARA_CORPUS_ID="your_corpus_id"
export VECTARA_API_KEY="your-vectara-api-key"
## Vectara as a Vector Store

There exists a wrapper around the Vectara platform, allowing you to use it as a vectorstore, whether for semantic search or example selection.

To import this vectorstore:

```
from langchain_community.vectorstores import Vectara
```

To create an instance of the Vectara vectorstore:

```
vectara = Vectara(
    vectara_customer_id=customer_id,
    vectara_corpus_id=corpus_id,
    vectara_api_key=api_key
)
```

The `customer_id`, `corpus_id` and `api_key` are optional; if they are not supplied, they will be read from the environment variables `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID` and `VECTARA_API_KEY`, respectively.

After you have the vectorstore, you can `add_texts` or `add_documents` as per the standard `VectorStore` interface, for example:

```
vectara.add_texts(["to be or not to be", "that is the question"])
```

Since Vectara supports file upload, we also added the ability to upload files (PDF, TXT, HTML, PPT, DOC, etc.) directly. When using this method, each file is uploaded directly to the Vectara backend, where it is processed and chunked optimally, so you don't have to use the LangChain document loader or chunking mechanism.

As an example:

```
vectara.add_files(["path/to/file1.pdf", "path/to/file2.pdf", ...])
```

To query the vectorstore, you can use the `similarity_search` method (or `similarity_search_with_score`), which takes a query string and returns a list of results:

```
results = vectara.similarity_search("what is LangChain?")
```

The results are returned as a list of relevant documents, and a relevance score for each document.
In this case, we used the default retrieval parameters, but you can also specify the following additional arguments in `similarity_search` or `similarity_search_with_score`:

- `k`: number of results to return (defaults to 5)
- `lambda_val`: the lexical matching factor for hybrid search (defaults to 0.025)
- `filter`: a filter to apply to the results (default None)
- `n_sentence_context`: number of sentences to include before/after the actual matching segment when returning results. This defaults to 2.
- `mmr_config`: can be used to specify MMR mode in the query.
  - `is_enabled`: True or False
  - `mmr_k`: number of results to use for MMR reranking
  - `diversity_bias`: 0 = no diversity, 1 = full diversity. This is the lambda parameter in the MMR formula and is in the range 0...1
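For example, a call that makes a few of these arguments explicit (the values shown are just the documented defaults, so it behaves like the plain call above):

```
results = vectara.similarity_search_with_score(
    "what is LangChain?",
    k=5,
    lambda_val=0.025,
    n_sentence_context=2,
)
for doc, score in results:
    print(score, doc.page_content[:80])
```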
## Vectara for Retrieval Augmented Generation (RAG)

Vectara provides a full RAG pipeline, including generative summarization. To use this pipeline, you can specify the `summary_config` argument in `similarity_search` or `similarity_search_with_score` as follows:

- `summary_config`: can be used to request an LLM summary in RAG
  - `is_enabled`: True or False
  - `max_results`: number of results to use for summary generation
  - `response_lang`: language of the response summary, in ISO 639-2 format (e.g. 'en', 'fr', 'de', etc.)
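As a sketch of what requesting a summary looks like in code, assuming the `SummaryConfig` helper exposed by `langchain_community.vectorstores.vectara` (the exact import path and field names may vary between versions):

```
from langchain_community.vectorstores.vectara import SummaryConfig

# Assumption: SummaryConfig mirrors the summary_config fields described above.
summary_config = SummaryConfig(is_enabled=True, max_results=5, response_lang="en")

results = vectara.similarity_search_with_score(
    "what is LangChain?",
    summary_config=summary_config,
)
```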
## Example Notebooks

For more detailed examples of using Vectara, see the following notebooks:

- a notebook showing how to use Vectara as a vectorstore for semantic search
- a notebook showing how to build a chatbot with LangChain and Vectara
- a notebook showing how to use the full Vectara RAG pipeline, including generative summarization
- a notebook showing the self-query capability with Vectara

## Vespa
> [Vespa](https://vespa.ai/) is a fully featured search engine and vector database. It supports vector search (ANN), lexical search, and search in structured data, all in the same query.
## Installation and Setup[](#installation-and-setup "Direct link to Installation and Setup")
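The integration talks to Vespa through the `pyvespa` Python client. If it is not already installed, it can typically be added with:

```
pip install pyvespa
```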
## Retriever[](#retriever "Direct link to Retriever")
See a [usage example](https://python.langchain.com/docs/integrations/retrievers/vespa/).
```
from langchain.retrievers import VespaRetriever
```
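As a rough sketch of how the retriever is wired up (the application URL, the YQL query body, and the `content` field below are placeholders, and the exact constructor arguments may differ between versions, so treat the linked usage example as authoritative):

```
from vespa.application import Vespa
from langchain.retrievers import VespaRetriever

# Connect to a running Vespa application (placeholder URL).
vespa_app = Vespa(url="https://your-vespa-app.example.com")

# A minimal YQL query body; adjust the schema and fields to your own application.
vespa_query_body = {
    "yql": "select content from paragraph where userQuery()",
    "hits": 5,
}

retriever = VespaRetriever(app=vespa_app, body=vespa_query_body, content_field="content")
docs = retriever.get_relevant_documents("what is vespa?")
```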
## Chat Over Documents with Vectara

## Setup
You will need a Vectara account to use Vectara with LangChain. To get started, use the following steps:

1. [Sign up](https://www.vectara.com/integrations/langchain) for a Vectara account if you don’t already have one. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.
2. Within your account you can create one or more corpora. Each corpus represents an area that stores text data upon ingest from input documents. To create a corpus, use the **“Create Corpus”** button. You then provide a name to your corpus as well as a description. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right on the top.
3. Next you’ll need to create API keys to access the corpus. Click on the **“Authorization”** tab in the corpus view and then the **“Create API Key”** button. Give your key a name, and choose whether you want query only or query+index for your key. Click “Create” and you now have an active API key. Keep this key confidential.

To use LangChain with Vectara, you’ll need to have these three values: customer ID, corpus ID and API key. You can provide those to LangChain in two ways:

1. Include in your environment these three variables: `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID` and `VECTARA_API_KEY`.

   > For example, you can set these variables using `os.environ` and `getpass` as follows:

```
import os
import getpass

os.environ["VECTARA_CUSTOMER_ID"] = getpass.getpass("Vectara Customer ID:")
os.environ["VECTARA_CORPUS_ID"] = getpass.getpass("Vectara Corpus ID:")
os.environ["VECTARA_API_KEY"] = getpass.getpass("Vectara API Key:")
```

2. Add them to the Vectara vectorstore constructor:

```
vectorstore = Vectara(
    vectara_customer_id=vectara_customer_id,
    vectara_corpus_id=vectara_corpus_id,
    vectara_api_key=vectara_api_key
)
```

```
import os

from langchain.chains import ConversationalRetrievalChain
from langchain_community.vectorstores import Vectara
from langchain_openai import OpenAI
```
Load in documents. You can replace this with a loader for whatever type of data you want.

```
from langchain_community.document_loaders import TextLoader

loader = TextLoader("state_of_the_union.txt")
documents = loader.load()
```

Since we’re using Vectara, there’s no need to chunk the documents, as that is done automatically in the Vectara platform backend. We just use `from_documents()` to upload the text loaded from the file, and directly ingest it into Vectara:

```
vectara = Vectara.from_documents(documents, embedding=None)
```

We can now create a memory object, which is necessary to track the inputs/outputs and hold a conversation.

```
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
```

We now initialize the `ConversationalRetrievalChain`:

```
openai_api_key = os.environ["OPENAI_API_KEY"]
llm = OpenAI(openai_api_key=openai_api_key, temperature=0)
retriever = vectara.as_retriever()
d = retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson", k=2
)
print(d)
```
```
[Document(page_content='Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. A former top litigator in private practice.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '29486', 'len': '97'}), Document(page_content='Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. Let each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '1083', 'len': '117'}), Document(page_content='All told, we created 369,000 new manufacturing jobs in America just last year. Powered by people I’ve met like JoJo Burgess, from generations of union steelworkers from Pittsburgh, who’s here with us tonight. As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.” It’s time. \n\nBut with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. Inflation is robbing them of the gains they might otherwise feel.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '14257', 'len': '77'}), Document(page_content='This is personal to me and Jill, to Kamala, and to so many of you. Cancer is the #2 cause of death in America–second only to heart disease. Last month, I announced our plan to supercharge \nthe Cancer Moonshot that President Obama asked me to lead six years ago. Our goal is to cut the cancer death rate by at least 50% over the next 25 years, turn more cancers from death sentences into treatable diseases. More support for patients and families.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '36196', 'len': '122'}), Document(page_content='Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. He met the Ukrainian people.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '664', 'len': '68'}), Document(page_content='I understand. \n\nI remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. That’s why one of the first things I did as President was fight to pass the American Rescue Plan. Because people were hurting. We needed to act, and we did.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '8042', 'len': '97'}), Document(page_content='He rejected repeated efforts at diplomacy. He thought the West and NATO wouldn’t respond. And he thought he could divide us at home. We were ready. Here is what we did. We prepared extensively and carefully.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '2100', 'len': '42'}), Document(page_content='He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. 
He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '788', 'len': '28'}), Document(page_content='Putin’s latest attack on Ukraine was premeditated and unprovoked. He rejected repeated efforts at diplomacy. He thought the West and NATO wouldn’t respond. And he thought he could divide us at home. We were ready. Here is what we did.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '2053', 'len': '46'}), Document(page_content='A unity agenda for the nation. We can do this. \n\nMy fellow Americans—tonight , we have gathered in a sacred space—the citadel of our democracy. In this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. We have fought for freedom, expanded liberty, defeated totalitarianism and terror. And built the strongest, freest, and most prosperous nation the world has ever known.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '36968', 'len': '131'})]
```
```
bot = ConversationalRetrievalChain.from_llm(
    llm, retriever, memory=memory, verbose=False
)
```

And we can have a multi-turn conversation with our new bot:

```
query = "What did the president say about Ketanji Brown Jackson"
result = bot.invoke({"question": query})
```
```
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice, and that she will continue Justice Breyer's legacy of excellence."
```
```
query = "Did he mention who she succeeded"
result = bot.invoke({"question": query})
```
```
' Ketanji Brown Jackson succeeded Justice Breyer on the United States Supreme Court.'
```
## Pass in chat history[](#pass-in-chat-history "Direct link to Pass in chat history")
In the above example, we used a Memory object to track chat history. We can also just pass it in explicitly. In order to do this, we need to initialize a chain without any memory object.
```
bot = ConversationalRetrievalChain.from_llm(
    OpenAI(temperature=0), vectara.as_retriever()
)
```

Here’s an example of asking a question with no chat history

```
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = bot.invoke({"question": query, "chat_history": chat_history})
```
```
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice, and that she will continue Justice Breyer's legacy of excellence."
```
Here’s an example of asking a question with some chat history
```
chat_history = [(query, result["answer"])]
query = "Did he mention who she succeeded"
result = bot.invoke({"question": query, "chat_history": chat_history})
```
```
' Ketanji Brown Jackson succeeded Justice Breyer on the United States Supreme Court.'
```
## Return Source Documents[](#return-source-documents "Direct link to Return Source Documents")
You can also easily return source documents from the ConversationalRetrievalChain. This is useful for when you want to inspect what documents were returned.
```
bot = ConversationalRetrievalChain.from_llm(
    llm, vectara.as_retriever(), return_source_documents=True
)
```

```
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = bot.invoke({"question": query, "chat_history": chat_history})
```
```
result["source_documents"][0]
```
```
Document(page_content='Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. A former top litigator in private practice.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '29486', 'len': '97'})
```
## ConversationalRetrievalChain with `map_reduce`[](#conversationalretrievalchain-with-map_reduce "Direct link to conversationalretrievalchain-with-map_reduce")
LangChain supports different types of ways to combine document chains with the ConversationalRetrievalChain chain.
```
from langchain.chains import LLMChain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT
from langchain.chains.question_answering import load_qa_chain
```

```
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(llm, chain_type="map_reduce")
chain = ConversationalRetrievalChain(
    retriever=vectara.as_retriever(),
    question_generator=question_generator,
    combine_docs_chain=doc_chain,
)
```

```
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
```
```
" The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, who is one of the nation's top legal minds and a former top litigator in private practice."
```
## ConversationalRetrievalChain with Question Answering with sources[](#conversationalretrievalchain-with-question-answering-with-sources "Direct link to ConversationalRetrievalChain with Question Answering with sources")
You can also use this chain with the question answering with sources chain.
```
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
```
```
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")
chain = ConversationalRetrievalChain(
    retriever=vectara.as_retriever(),
    question_generator=question_generator,
    combine_docs_chain=doc_chain,
)
```

```
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
```
```
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice.\nSOURCES: langchain"
```
## ConversationalRetrievalChain with streaming to `stdout`[](#conversationalretrievalchain-with-streaming-to-stdout "Direct link to conversationalretrievalchain-with-streaming-to-stdout")
Output from the chain will be streamed to `stdout` token by token in this example.
```
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains.conversational_retrieval.prompts import (
    CONDENSE_QUESTION_PROMPT,
    QA_PROMPT,
)
from langchain.chains.llm import LLMChain
from langchain.chains.question_answering import load_qa_chain

# Construct a ConversationalRetrievalChain with a streaming llm for combine docs
# and a separate, non-streaming llm for question generation
llm = OpenAI(temperature=0, openai_api_key=openai_api_key)
streaming_llm = OpenAI(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    temperature=0,
    openai_api_key=openai_api_key,
)

question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(streaming_llm, chain_type="stuff", prompt=QA_PROMPT)

bot = ConversationalRetrievalChain(
    retriever=vectara.as_retriever(),
    combine_docs_chain=doc_chain,
    question_generator=question_generator,
)
```

```
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = bot.invoke({"question": query, "chat_history": chat_history})
```
```
The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice, and that she will continue Justice Breyer's legacy of excellence.
```
```
chat_history = [(query, result["answer"])]
query = "Did he mention who she succeeded"
result = bot.invoke({"question": query, "chat_history": chat_history})
```
```
Ketanji Brown Jackson succeeded Justice Breyer on the United States Supreme Court.
```
## get\_chat\_history Function[](#get_chat_history-function "Direct link to get_chat_history Function")
You can also specify a `get_chat_history` function, which can be used to format the `chat_history` string.

```
def get_chat_history(inputs) -> str:
    res = []
    for human, ai in inputs:
        res.append(f"Human:{human}\nAI:{ai}")
    return "\n".join(res)


bot = ConversationalRetrievalChain.from_llm(
    llm, vectara.as_retriever(), get_chat_history=get_chat_history
)
```

```
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = bot.invoke({"question": query, "chat_history": chat_history})
```
```
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice, and that she will continue Justice Breyer's legacy of excellence."
```
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:04.901Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/vectara/vectara_chat/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/vectara/vectara_chat/",
"description": "setup}",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3576",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"vectara_chat\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:03 GMT",
"etag": "W/\"87d5b8588fbd80bbbed0775665b1b9b8\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::kflrz-1713753723802-3cbc48c65013"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/vectara/vectara_chat/",
"property": "og:url"
},
{
"content": "Chat Over Documents with Vectara | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "setup}",
"property": "og:description"
}
],
"title": "Chat Over Documents with Vectara | 🦜️🔗 LangChain"
} | Setup
You will need a Vectara account to use Vectara with LangChain. To get started, use the following steps: 1. Sign up for a Vectara account if you don’t already have one. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window. 2. Within your account you can create one or more corpora. Each corpus represents an area that stores text data upon ingest from input documents. To create a corpus, use the “Create Corpus” button. You then provide a name to your corpus as well as a description. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right on the top. 3. Next you’ll need to create API keys to access the corpus. Click on the “Authorization” tab in the corpus view and then the “Create API Key” button. Give your key a name, and choose whether you want query only or query+index for your key. Click “Create” and you now have an active API key. Keep this key confidential.
To use LangChain with Vectara, you’ll need to have these three values: customer ID, corpus ID and api_key. You can provide those to LangChain in two ways:
Include in your environment these three variables: VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY.
For example, you can set these variables using os.environ and getpass as follows:
import os
import getpass
os.environ["VECTARA_CUSTOMER_ID"] = getpass.getpass("Vectara Customer ID:")
os.environ["VECTARA_CORPUS_ID"] = getpass.getpass("Vectara Corpus ID:")
os.environ["VECTARA_API_KEY"] = getpass.getpass("Vectara API Key:")
Add them to the Vectara vectorstore constructor:
vectorstore = Vectara(
vectara_customer_id=vectara_customer_id,
vectara_corpus_id=vectara_corpus_id,
vectara_api_key=vectara_api_key
)
import os
from langchain.chains import ConversationalRetrievalChain
from langchain_community.vectorstores import Vectara
from langchain_openai import OpenAI
Load in documents. You can replace this with a loader for whatever type of data you want
from langchain_community.document_loaders import TextLoader
loader = TextLoader("state_of_the_union.txt")
documents = loader.load()
Since we’re using Vectara, there’s no need to chunk the documents, as that is done automatically in the Vectara platform backend. We just use from_document() to upload the text loaded from the file, and directly ingest it into Vectara:
vectara = Vectara.from_documents(documents, embedding=None)
We can now create a memory object, which is neccessary to track the inputs/outputs and hold a conversation.
from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
We now initialize the ConversationalRetrievalChain:
openai_api_key = os.environ["OPENAI_API_KEY"]
llm = OpenAI(openai_api_key=openai_api_key, temperature=0)
retriever = vectara.as_retriever()
d = retriever.get_relevant_documents(
"What did the president say about Ketanji Brown Jackson", k=2
)
print(d)
[Document(page_content='Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. A former top litigator in private practice.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '29486', 'len': '97'}), Document(page_content='Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. Let each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '1083', 'len': '117'}), Document(page_content='All told, we created 369,000 new manufacturing jobs in America just last year. Powered by people I’ve met like JoJo Burgess, from generations of union steelworkers from Pittsburgh, who’s here with us tonight. As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.” It’s time. \n\nBut with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. Inflation is robbing them of the gains they might otherwise feel.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '14257', 'len': '77'}), Document(page_content='This is personal to me and Jill, to Kamala, and to so many of you. Cancer is the #2 cause of death in America–second only to heart disease. Last month, I announced our plan to supercharge \nthe Cancer Moonshot that President Obama asked me to lead six years ago. Our goal is to cut the cancer death rate by at least 50% over the next 25 years, turn more cancers from death sentences into treatable diseases. More support for patients and families.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '36196', 'len': '122'}), Document(page_content='Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. He met the Ukrainian people.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '664', 'len': '68'}), Document(page_content='I understand. \n\nI remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. That’s why one of the first things I did as President was fight to pass the American Rescue Plan. Because people were hurting. We needed to act, and we did.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '8042', 'len': '97'}), Document(page_content='He rejected repeated efforts at diplomacy. He thought the West and NATO wouldn’t respond. And he thought he could divide us at home. We were ready. Here is what we did. We prepared extensively and carefully.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '2100', 'len': '42'}), Document(page_content='He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. 
He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '788', 'len': '28'}), Document(page_content='Putin’s latest attack on Ukraine was premeditated and unprovoked. He rejected repeated efforts at diplomacy. He thought the West and NATO wouldn’t respond. And he thought he could divide us at home. We were ready. Here is what we did.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '2053', 'len': '46'}), Document(page_content='A unity agenda for the nation. We can do this. \n\nMy fellow Americans—tonight , we have gathered in a sacred space—the citadel of our democracy. In this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. We have fought for freedom, expanded liberty, defeated totalitarianism and terror. And built the strongest, freest, and most prosperous nation the world has ever known.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '36968', 'len': '131'})]
bot = ConversationalRetrievalChain.from_llm(
llm, retriever, memory=memory, verbose=False
)
And can have a multi-turn conversation with out new bot:
query = "What did the president say about Ketanji Brown Jackson"
result = bot.invoke({"question": query})
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice, and that she will continue Justice Breyer's legacy of excellence."
query = "Did he mention who she suceeded"
result = bot.invoke({"question": query})
' Ketanji Brown Jackson succeeded Justice Breyer on the United States Supreme Court.'
Pass in chat history
In the above example, we used a Memory object to track chat history. We can also just pass it in explicitly. In order to do this, we need to initialize a chain without any memory object.
bot = ConversationalRetrievalChain.from_llm(
OpenAI(temperature=0), vectara.as_retriever()
)
Here’s an example of asking a question with no chat history
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = bot.invoke({"question": query, "chat_history": chat_history})
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice, and that she will continue Justice Breyer's legacy of excellence."
Here’s an example of asking a question with some chat history
chat_history = [(query, result["answer"])]
query = "Did he mention who she suceeded"
result = bot.invoke({"question": query, "chat_history": chat_history})
' Ketanji Brown Jackson succeeded Justice Breyer on the United States Supreme Court.'
Return Source Documents
You can also easily return source documents from the ConversationalRetrievalChain. This is useful for when you want to inspect what documents were returned.
bot = ConversationalRetrievalChain.from_llm(
llm, vectara.as_retriever(), return_source_documents=True
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = bot.invoke({"question": query, "chat_history": chat_history})
result["source_documents"][0]
Document(page_content='Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. A former top litigator in private practice.', metadata={'source': 'langchain', 'lang': 'eng', 'offset': '29486', 'len': '97'})
ConversationalRetrievalChain with map_reduce
LangChain supports different types of ways to combine document chains with the ConversationalRetrievalChain chain.
from langchain.chains import LLMChain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT
from langchain.chains.question_answering import load_qa_chain
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(llm, chain_type="map_reduce")
chain = ConversationalRetrievalChain(
retriever=vectara.as_retriever(),
question_generator=question_generator,
combine_docs_chain=doc_chain,
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
" The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, who is one of the nation's top legal minds and a former top litigator in private practice."
ConversationalRetrievalChain with Question Answering with sources
You can also use this chain with the question answering with sources chain.
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")
chain = ConversationalRetrievalChain(
retriever=vectara.as_retriever(),
question_generator=question_generator,
combine_docs_chain=doc_chain,
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice.\nSOURCES: langchain"
ConversationalRetrievalChain with streaming to stdout
Output from the chain will be streamed to stdout token by token in this example.
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains.conversational_retrieval.prompts import (
CONDENSE_QUESTION_PROMPT,
QA_PROMPT,
)
from langchain.chains.llm import LLMChain
from langchain.chains.question_answering import load_qa_chain
# Construct a ConversationalRetrievalChain with a streaming llm for combine docs
# and a separate, non-streaming llm for question generation
llm = OpenAI(temperature=0, openai_api_key=openai_api_key)
streaming_llm = OpenAI(
streaming=True,
callbacks=[StreamingStdOutCallbackHandler()],
temperature=0,
openai_api_key=openai_api_key,
)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(streaming_llm, chain_type="stuff", prompt=QA_PROMPT)
bot = ConversationalRetrievalChain(
retriever=vectara.as_retriever(),
combine_docs_chain=doc_chain,
question_generator=question_generator,
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = bot.invoke({"question": query, "chat_history": chat_history})
The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice, and that she will continue Justice Breyer's legacy of excellence.
chat_history = [(query, result["answer"])]
query = "Did he mention who she suceeded"
result = bot.invoke({"question": query, "chat_history": chat_history})
Ketanji Brown Jackson succeeded Justice Breyer on the United States Supreme Court.
get_chat_history Function
You can also specify a get_chat_history function, which can be used to format the chat_history string.
def get_chat_history(inputs) -> str:
res = []
for human, ai in inputs:
res.append(f"Human:{human}\nAI:{ai}")
return "\n".join(res)
bot = ConversationalRetrievalChain.from_llm(
llm, vectara.as_retriever(), get_chat_history=get_chat_history
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = bot.invoke({"question": query, "chat_history": chat_history})
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice, and that she will continue Justice Breyer's legacy of excellence." |
## USearch

> `USearch` is a Smaller & Faster Single-File Vector Search Engine.

`USearch's` base functionality is identical to `FAISS`, and the interface should look familiar if you have ever investigated Approximate Nearest Neighbors search. `USearch` and `FAISS` both employ the `HNSW` algorithm, but they differ significantly in their design principles. `USearch` is compact and broadly compatible with FAISS without sacrificing performance, with a primary focus on user-defined metrics and fewer dependencies.
## Installation and Setup[](#installation-and-setup "Direct link to Installation and Setup")
We need to install the `usearch` python package.
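For example, with pip:

```
pip install usearch
```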
```
from langchain_community.vectorstores import USearch
```
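Once installed, the vectorstore follows the standard `VectorStore` interface. A minimal sketch, assuming an embedding model such as `OpenAIEmbeddings` is available (any LangChain embeddings class should work):

```
from langchain_community.vectorstores import USearch
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# Build an index from a few texts and run a similarity search against it.
db = USearch.from_texts(
    ["harrison worked at kensho", "bears like to eat honey"],
    embeddings,
)
docs = db.similarity_search("Where did harrison work?")
print(docs[0].page_content)
```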
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:05.538Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/usearch/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/usearch/",
"description": "USearch is a Smaller & Faster Single-File Vector Search Engine.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4626",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"usearch\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:04 GMT",
"etag": "W/\"8af35951758a3259d81fb3064937db77\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::xvkrm-1713753724927-357c53254d35"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/usearch/",
"property": "og:url"
},
{
"content": "USearch | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "USearch is a Smaller & Faster Single-File Vector Search Engine.",
"property": "og:description"
}
],
"title": "USearch | 🦜️🔗 LangChain"
} | USearch's base functionality is identical to FAISS, and the interface should look familiar if you have ever investigated Approximate Nearest Neighbors search. USearch and FAISS both employ HNSW algorithm, but they differ significantly in their design principles. USearch is compact and broadly compatible with FAISS without sacrificing performance, with a primary focus on user-defined metrics and fewer dependencies.
Installation and Setup
We need to install usearch python package.
from langchain_community.vectorstores import USearch |
## BREEBS (Open Knowledge)

[BREEBS](https://www.breebs.com/) is an open collaborative knowledge platform. Anybody can create a Breeb, a knowledge capsule based on PDFs stored on a Google Drive folder. A Breeb can be used by any LLM/chatbot to improve its expertise, reduce hallucinations and give access to sources. Behind the scenes, Breebs implements several Retrieval Augmented Generation (RAG) models to seamlessly provide useful context at each iteration.
To get the full list of Breebs, including their key (`breeb_key`) and description, see [https://breebs.promptbreeders.com/web/listbreebs](https://breebs.promptbreeders.com/web/listbreebs).
Dozens of Breebs have already been created by the community and are freely available for use. They cover a wide range of expertise, from organic chemistry to mythology, as well as tips on seduction and decentralized finance.
To generate a new Breeb, simply compile PDF files in a publicly shared Google Drive folder and initiate the creation process on the [BREEBS website](https://www.breebs.com/) by clicking the “Create Breeb” button. You can currently include up to 120 files, with a total character limit of 15 million.
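The retriever call that produced the sample output below is not shown in this page snapshot. A minimal sketch, assuming the `Parivoyage` Breeb seen in the output metadata (the query string itself is purely illustrative):

```
from langchain_community.retrievers import BreebsRetriever

breeb_key = "Parivoyage"  # assumption: the Breeb referenced in the sample output
retriever = BreebsRetriever(breeb_key)

# Illustrative query; any travel-related question about Paris would do.
documents = retriever.get_relevant_documents(
    "What are some lesser-known places to visit in Paris?"
)
print(documents)
```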
```
[Document(page_content="de poupées• Ladurée - Madeleine• Ladurée - rue Bonaparte• Flamant• Bonnichon Saint Germain• Dinh Van• Léonor Greyl• Berthillon• Christian Louboutin• Patrick Cox• Baby Dior• FNAC Musique - Bastille• FNAC - Saint Lazare• La guinguette pirate• Park Hyatt• Restaurant de Sers• Hilton Arc de Triomphe• Café Barge• Le Celadon• Le Drouant• La Perouse• Cigale Recamier• Ledoyen• Tanjia• Les Muses• Bistrot du Dôme• Avenue Foch• Fontaine Saint-Michel• Funiculaire de Montmartre• Promotrain - Place Blanche• Grand Palais• Hotel de Rohan• Hotel de Sully• Hotel des Ventes Drouot• Institut de France• Place des Invalides• Jardin d'acclimatation• Jardin des plantes Zoo• Jouffroy (passage)• Quartier de La Défense• La Villette (quartier)• Lac Inferieur du Bois de Boulogne• Les Catacombes de Paris• Place du Louvre• Rue Mazarine• Rue Monsieur le Prince11/12/2023 07:51Guide en pdf Paris à imprimer gratuitement.", metadata={'source': 'https://breebs.promptbreeders.com/breeb?breeb_key=Parivoyage&doc=44d78553-a&page=11', 'score': 1}), Document(page_content="cafés et des restaurants situésdans les rues adjacentes. Il y a également une cafétéria dans le musée, qui propose des collations, desboissons et des repas légers.À voir et visiter autour :Le Muséum d'histoire naturelle de Paris est situé àproximité de plusieurs autres attractions populaires, notamment le Jardin des Plantes, la Grande Mosquéede Paris, la Sorbonne et la Bibliothèque nationale de France.Comment y aller en bus, métro, train :LeMuséum d'histoire naturelle de Paris est facilement accessible en transports en commun. Les stations demétro les plus proches sont la station Censier-Daubenton sur la ligne 7 et la station Jussieu sur les lignes 7et 10. Le musée est également accessible en bus, avec plusieurs lignes desservant la zone, telles que leslignes 24, 57, 61, 63, 67, 89 et 91. En train, la gare la plus proche est la Gare d'Austerlitz, qui est desserviepar plusieurs lignes, notamment les lignes RER C et les trains intercités. Il est également possible de serendre au musée en utilisant les services de taxis ou de VTC.Plus d'informations :+33140795601,6 euros,Ouverture : 10h - 17h, Week end: 10h - 18h ; Fermeture: Mardi(haut de", metadata={'source': 'https://breebs.promptbreeders.com/breeb?breeb_key=Parivoyage&doc=44d78553-a&page=403', 'score': 1}), Document(page_content="Le célèbre Drugstore des Champs Elysées abrite de nombreuses boutiques dans un décor design. V ouspourrez y découvrir un espace beauté, des expositions éphémères, une pharmacie et des espaces réservésaux plaisirs des sens. A noter la façade d'architecture extérieure en verrePlus d'informations :+33144437900, https://www.publicisdrugstore.com/, Visite libre,(haut de page)• Place du Marché Sainte-CatherinePlace du Marché Sainte-Catherine, Paris, 75008, FR11/12/2023 07:51Guide en pdf Paris à imprimer gratuitement.\nPage 200 sur 545https://www.cityzeum.com/imprimer-pdf/parisUne place hors de l'agitation de la capitale, où vous découvrirez des petits restaurants au charme certaindans un cadre fort agréable. Terrasses au rendez-vous l'été! Un bar à magie pour couronner le toutPlus d'informations :15-30 euros,(haut de page)• Rue de Lappe, ParisRue de Lappe, Paris, FR", metadata={'source': 'https://breebs.promptbreeders.com/breeb?breeb_key=Parivoyage&doc=44d78553-a&page=198', 'score': 1}), Document(page_content="des visiteurs pour la nature etles attractions du parc. 
Les visiteurs peuvent prévoir de passer entre 1 à 2 heures pour visiter le parcL'accès au parc Montsouris est gratuit pour tous les visiteurs. Aucune réservation n'est nécessaire pourvisiter le parc. Cependant, pour les visites guidées, il est conseillé de réserver à l'avance pour garantir uneplace. Les tarifs pour les visites guidées peuvent varier en fonction de l'organisme proposant la visite.Ensomme, le parc Montsouris est un endroit magnifique pour se détendre et profiter de la nature en pleincœur de Paris. Avec ses attractions pittoresques, son paysage verdoyant et ses visites guidées, c'est unendroit idéal pour une sortie en famille ou entre amis.Plus d'informations :https://www.parisinfo.com/musee-monument-paris/71218/Parc-Montsouris,Gratuit,Ouverture : 8h/9h - 17h30/21h30(haut de page)• Parc des Buttes Chaumont", metadata={'source': 'https://breebs.promptbreeders.com/breeb?breeb_key=Parivoyage&doc=44d78553-a&page=291', 'score': 1})]
```
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:05.702Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/breebs/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/breebs/",
"description": "BREEBS is an open collaborative knowledge",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3575",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"breebs\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:05 GMT",
"etag": "W/\"126feef48e41e5e5f2ca00294cb117ea\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::bqkmk-1713753725598-b82f9ab3a3e6"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/breebs/",
"property": "og:url"
},
{
"content": "BREEBS (Open Knowledge) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "BREEBS is an open collaborative knowledge",
"property": "og:description"
}
],
"title": "BREEBS (Open Knowledge) | 🦜️🔗 LangChain"
} | BREEBS is an open collaborative knowledge platform. Anybody can create a Breeb, a knowledge capsule, based on PDFs stored on a Google Drive folder. A breeb can be used by any LLM/chatbot to improve its expertise, reduce hallucinations and give access to sources. Behind the scenes, Breebs implements several Retrieval Augmented Generation (RAG) models to seamlessly provide useful context at each iteration.
To get the full list of Breebs, including their key (breeb_key) and description : https://breebs.promptbreeders.com/web/listbreebs.
Dozens of Breebs have already been created by the community and are freely available for use. They cover a wide range of expertise, from organic chemistry to mythology, as well as tips on seduction and decentralized finance.
To generate a new Breeb, simply compile PDF files in a publicly shared Google Drive folder and initiate the creation process on the BREEBS website by clicking the “Create Breeb” button. You can currently include up to 120 files, with a total character limit of 15 million.
[Document(page_content="de poupées• Ladurée - Madeleine• Ladurée - rue Bonaparte• Flamant• Bonnichon Saint Germain• Dinh Van• Léonor Greyl• Berthillon• Christian Louboutin• Patrick Cox• Baby Dior• FNAC Musique - Bastille• FNAC - Saint Lazare• La guinguette pirate• Park Hyatt• Restaurant de Sers• Hilton Arc de Triomphe• Café Barge• Le Celadon• Le Drouant• La Perouse• Cigale Recamier• Ledoyen• Tanjia• Les Muses• Bistrot du Dôme• Avenue Foch• Fontaine Saint-Michel• Funiculaire de Montmartre• Promotrain - Place Blanche• Grand Palais• Hotel de Rohan• Hotel de Sully• Hotel des Ventes Drouot• Institut de France• Place des Invalides• Jardin d'acclimatation• Jardin des plantes Zoo• Jouffroy (passage)• Quartier de La Défense• La Villette (quartier)• Lac Inferieur du Bois de Boulogne• Les Catacombes de Paris• Place du Louvre• Rue Mazarine• Rue Monsieur le Prince11/12/2023 07:51Guide en pdf Paris à imprimer gratuitement.", metadata={'source': 'https://breebs.promptbreeders.com/breeb?breeb_key=Parivoyage&doc=44d78553-a&page=11', 'score': 1}), Document(page_content="cafés et des restaurants situésdans les rues adjacentes. Il y a également une cafétéria dans le musée, qui propose des collations, desboissons et des repas légers.À voir et visiter autour :Le Muséum d'histoire naturelle de Paris est situé àproximité de plusieurs autres attractions populaires, notamment le Jardin des Plantes, la Grande Mosquéede Paris, la Sorbonne et la Bibliothèque nationale de France.Comment y aller en bus, métro, train :LeMuséum d'histoire naturelle de Paris est facilement accessible en transports en commun. Les stations demétro les plus proches sont la station Censier-Daubenton sur la ligne 7 et la station Jussieu sur les lignes 7et 10. Le musée est également accessible en bus, avec plusieurs lignes desservant la zone, telles que leslignes 24, 57, 61, 63, 67, 89 et 91. En train, la gare la plus proche est la Gare d'Austerlitz, qui est desserviepar plusieurs lignes, notamment les lignes RER C et les trains intercités. Il est également possible de serendre au musée en utilisant les services de taxis ou de VTC.Plus d'informations :+33140795601,6 euros,Ouverture : 10h - 17h, Week end: 10h - 18h ; Fermeture: Mardi(haut de", metadata={'source': 'https://breebs.promptbreeders.com/breeb?breeb_key=Parivoyage&doc=44d78553-a&page=403', 'score': 1}), Document(page_content="Le célèbre Drugstore des Champs Elysées abrite de nombreuses boutiques dans un décor design. V ouspourrez y découvrir un espace beauté, des expositions éphémères, une pharmacie et des espaces réservésaux plaisirs des sens. A noter la façade d'architecture extérieure en verrePlus d'informations :+33144437900, https://www.publicisdrugstore.com/, Visite libre,(haut de page)• Place du Marché Sainte-CatherinePlace du Marché Sainte-Catherine, Paris, 75008, FR11/12/2023 07:51Guide en pdf Paris à imprimer gratuitement.\nPage 200 sur 545https://www.cityzeum.com/imprimer-pdf/parisUne place hors de l'agitation de la capitale, où vous découvrirez des petits restaurants au charme certaindans un cadre fort agréable. Terrasses au rendez-vous l'été! Un bar à magie pour couronner le toutPlus d'informations :15-30 euros,(haut de page)• Rue de Lappe, ParisRue de Lappe, Paris, FR", metadata={'source': 'https://breebs.promptbreeders.com/breeb?breeb_key=Parivoyage&doc=44d78553-a&page=198', 'score': 1}), Document(page_content="des visiteurs pour la nature etles attractions du parc. 
Les visiteurs peuvent prévoir de passer entre 1 à 2 heures pour visiter le parcL'accès au parc Montsouris est gratuit pour tous les visiteurs. Aucune réservation n'est nécessaire pourvisiter le parc. Cependant, pour les visites guidées, il est conseillé de réserver à l'avance pour garantir uneplace. Les tarifs pour les visites guidées peuvent varier en fonction de l'organisme proposant la visite.Ensomme, le parc Montsouris est un endroit magnifique pour se détendre et profiter de la nature en pleincœur de Paris. Avec ses attractions pittoresques, son paysage verdoyant et ses visites guidées, c'est unendroit idéal pour une sortie en famille ou entre amis.Plus d'informations :https://www.parisinfo.com/musee-monument-paris/71218/Parc-Montsouris,Gratuit,Ouverture : 8h/9h - 17h30/21h30(haut de page)• Parc des Buttes Chaumont", metadata={'source': 'https://breebs.promptbreeders.com/breeb?breeb_key=Parivoyage&doc=44d78553-a&page=291', 'score': 1})] |
## BM25
> [BM25 (Wikipedia)](https://en.wikipedia.org/wiki/Okapi_BM25) also known as the `Okapi BM25`, is a ranking function used in information retrieval systems to estimate the relevance of documents to a given search query.
>
> `BM25Retriever` retriever uses the [`rank_bm25`](https://github.com/dorianbrown/rank_bm25) package.
```
%pip install --upgrade --quiet rank_bm25
```
```
from langchain_community.retrievers import BM25Retriever
```
## Create New Retriever with Texts[](#create-new-retriever-with-texts "Direct link to Create New Retriever with Texts")
```
retriever = BM25Retriever.from_texts(["foo", "bar", "world", "hello", "foo bar"])
```
## Create a New Retriever with Documents[](#create-a-new-retriever-with-documents "Direct link to Create a New Retriever with Documents")
You can now create a new retriever with the documents you created.
```
from langchain_core.documents import Document

retriever = BM25Retriever.from_documents(
    [
        Document(page_content="foo"),
        Document(page_content="bar"),
        Document(page_content="world"),
        Document(page_content="hello"),
        Document(page_content="foo bar"),
    ]
)
```
## Use Retriever[](#use-retriever "Direct link to Use Retriever")
We can now use the retriever!
```
result = retriever.get_relevant_documents("foo")
```
```
[Document(page_content='foo', metadata={}),
 Document(page_content='foo bar', metadata={}),
 Document(page_content='hello', metadata={}),
 Document(page_content='world', metadata={})]
```
## Chaindesk

[Chaindesk platform](https://docs.chaindesk.ai/introduction) brings data from anywhere (Datasources: Text, PDF, Word, PowerPoint, Excel, Notion, Airtable, Google Sheets, etc.) into Datastores (containers of multiple Datasources). Then your Datastores can be connected to ChatGPT via Plugins or to any other Large Language Model (LLM) via the `Chaindesk API`.
First, you will need to sign up for Chaindesk, create a datastore, add some data and get your datastore API endpoint URL. You also need the [API Key](https://docs.chaindesk.ai/api-reference/authentication).
Now that our index is set up, we can set up a retriever and start querying it.
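The retriever construction itself is not included in this snapshot. A minimal sketch, assuming a datastore endpoint URL of your own (the URL below is a placeholder, and the query simply matches the sample output shown next):

```
from langchain_community.retrievers import ChaindeskRetriever

retriever = ChaindeskRetriever(
    datastore_url="https://your-datastore-id.chaindesk.ai/query",  # placeholder
    # api_key="...",  # optional, only needed for private datastores
    # top_k=10,       # optional
)
docs = retriever.get_relevant_documents("What is Daftpage?")
```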
```
[Document(page_content='✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramGetting StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!DaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord', metadata={'source': 'https:/daftpage.com/help/getting-started', 'score': 0.8697265}), Document(page_content="✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramHelp CenterWelcome to Daftpage’s help center—the one-stop shop for learning everything about building websites with Daftpage.Daftpage is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at support@daftpage.comJoin the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.86570895}), Document(page_content=" is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at support@daftpage.comJoin the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.8645384})]
```
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker/ | Let’s start by initializing a simple vector store retriever and storing the 2022 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs.
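The setup itself isn't shown in this capture; a minimal sketch, assuming the speech is saved locally as `state_of_the_union.txt` and a FAISS store with Cohere embeddings is used, looks like this:

```
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings import CohereEmbeddings
from langchain_community.vectorstores import FAISS

# Load and chunk the speech (the file path is an assumption for this sketch).
documents = TextLoader("state_of_the_union.txt").load()
texts = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100).split_documents(documents)

# Plain vector store retriever that returns 20 chunks per query.
retriever = FAISS.from_documents(texts, CohereEmbeddings()).as_retriever(
    search_kwargs={"k": 20}
)

docs = retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)
```

Pretty-printing those 20 chunks gives the dump below.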
```
Document 1:One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.----------------------------------------------------------------------------------------------------Document 2:We cannot let this happen. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.----------------------------------------------------------------------------------------------------Document 3:As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.----------------------------------------------------------------------------------------------------Document 4:I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. So let’s not abandon our streets. Or choose between safety and equal justice.----------------------------------------------------------------------------------------------------Document 5:He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.----------------------------------------------------------------------------------------------------Document 6:So let’s not abandon our streets. Or choose between safety and equal justice. Let’s come together to protect our communities, restore trust, and hold law enforcement accountable. That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.----------------------------------------------------------------------------------------------------Document 7:But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. Vice President Harris and I ran for office with a new economic vision for America. Invest in America. Educate Americans. Grow the workforce. 
Build the economy from the bottom up and the middle out, not from the top down.----------------------------------------------------------------------------------------------------Document 8:A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.----------------------------------------------------------------------------------------------------Document 9:The widow of Sergeant First Class Heath Robinson. He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. Stationed near Baghdad, just yards from burn pits the size of football fields. Heath’s widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter. But cancer from prolonged exposure to burn pits ravaged Heath’s lungs and body. Danielle says Heath was a fighter to the very end.----------------------------------------------------------------------------------------------------Document 10:As I’ve told Xi Jinping, it is never a good bet to bet against the American people. We’ll create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. And we’ll do it all to withstand the devastating effects of the climate crisis and promote environmental justice.----------------------------------------------------------------------------------------------------Document 11:As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.” It’s time. But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. Inflation is robbing them of the gains they might otherwise feel. I get it. That’s why my top priority is getting prices under control.----------------------------------------------------------------------------------------------------Document 12:This was a bipartisan effort, and I want to thank the members of both parties who worked to make it happen. We’re done talking about infrastructure weeks. We’re going to have an infrastructure decade. It is going to transform America and put us on a path to win the economic competition of the 21st Century that we face with the rest of the world—particularly with China. As I’ve told Xi Jinping, it is never a good bet to bet against the American people.----------------------------------------------------------------------------------------------------Document 13:He will never extinguish their love of freedom. He will never weaken the resolve of the free world. We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. The pandemic has been punishing. And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. I understand.----------------------------------------------------------------------------------------------------Document 14:I understand. I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. 
That’s why one of the first things I did as President was fight to pass the American Rescue Plan. Because people were hurting. We needed to act, and we did. Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.----------------------------------------------------------------------------------------------------Document 15:My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. Our troops in Iraq and Afghanistan faced many dangers. One was stationed at bases and breathing in toxic smoke from “burn pits” that incinerated wastes of war—medical and hazard material, jet fuel, and more. When they came home, many of the world’s fittest and best trained warriors were never the same. Headaches. Numbness. Dizziness.----------------------------------------------------------------------------------------------------Document 16:Danielle says Heath was a fighter to the very end. He didn’t know how to stop fighting, and neither did she. Through her pain she found purpose to demand we do better. Tonight, Danielle—we are. The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. And tonight, I’m announcing we’re expanding eligibility to veterans suffering from nine respiratory cancers.----------------------------------------------------------------------------------------------------Document 17:Cancer is the #2 cause of death in America–second only to heart disease. Last month, I announced our plan to supercharge the Cancer Moonshot that President Obama asked me to lead six years ago. Our goal is to cut the cancer death rate by at least 50% over the next 25 years, turn more cancers from death sentences into treatable diseases. More support for patients and families. To get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health.----------------------------------------------------------------------------------------------------Document 18:My plan to fight inflation will lower your costs and lower the deficit. 17 Nobel laureates in economics say my plan will ease long-term inflationary pressures. Top business leaders and most Americans support my plan. And here’s the plan: First – cut the cost of prescription drugs. Just look at insulin. One in ten Americans has diabetes. In Virginia, I met a 13-year-old boy named Joshua Davis.----------------------------------------------------------------------------------------------------Document 19:Let’s pass the Paycheck Fairness Act and paid leave. Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges. And let’s pass the PRO Act when a majority of workers want to form a union—they shouldn’t be stopped.----------------------------------------------------------------------------------------------------Document 20:Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. 
And with an unwavering resolve that freedom will always triumph over tyranny.
```
Now let’s wrap our base retriever with a `ContextualCompressionRetriever`. We’ll add a `CohereRerank` compressor, which uses the Cohere Rerank endpoint to rerank the returned results.
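A sketch of that wrapping (in newer releases `CohereRerank` is imported from the `langchain_cohere` package instead):

```
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank

# Rerank the 20 candidate chunks and keep only the most relevant ones.
compressor = CohereRerank()
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever
)

compressed_docs = compression_retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)
```

The answer in the block below comes from a question-answering chain run on top of this compression retriever; a sketch of such a chain follows the output.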
```
{'query': 'What did the president say about Ketanji Brown Jackson', 'result': " The president speaks highly of Ketanji Brown Jackson, stating that she is one of the nation's top legal minds, and will continue the legacy of excellence of Justice Breyer. The president also mentions that he worked with her family and that she comes from a family of public school educators and police officers. Since her nomination, she has received support from various groups, including the Fraternal Order of Police and judges from both major political parties. \n\nWould you like me to extract another sentence from the provided text? "}
```
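For reference, a hedged sketch of a question-answering chain that produces output of the shape shown above (the notebook's exact chain may differ):

```
from langchain.chains import RetrievalQA
from langchain_community.llms import Cohere

# Answer questions over the reranked documents.
chain = RetrievalQA.from_chain_type(
    llm=Cohere(temperature=0), retriever=compression_retriever
)
chain({"query": "What did the president say about Ketanji Brown Jackson"})
```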
https://python.langchain.com/docs/integrations/retrievers/chatgpt-plugin/ | Plugins allow `ChatGPT` to do things like:

- Retrieve real-time information; e.g., sports scores, stock prices, the latest news, etc.
- Retrieve knowledge-base information; e.g., company docs, personal notes, etc.
- Perform actions on behalf of the user; e.g., booking a flight, ordering food, etc.
This notebook shows how to use the ChatGPT Retriever Plugin within LangChain.
```
# STEP 1: Load
# Load documents using LangChain's DocumentLoaders
# This is from https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/csv.html
from langchain_community.document_loaders import CSVLoader

loader = CSVLoader(
    file_path="../../document_loaders/examples/example_data/mlb_teams_2012.csv"
)
data = loader.load()

# STEP 2: Convert
# Convert Document to format expected by https://github.com/openai/chatgpt-retrieval-plugin
import json
from typing import List

from langchain_community.docstore.document import Document


def write_json(path: str, documents: List[Document]) -> None:
    results = [{"text": doc.page_content} for doc in documents]
    with open(path, "w") as f:
        json.dump(results, f, indent=2)


write_json("foo.json", data)

# STEP 3: Use
# Ingest this as you would any other json file in https://github.com/openai/chatgpt-retrieval-plugin/tree/main/scripts/process_json
```
Okay, so we’ve created the ChatGPT Retriever Plugin, but how do we actually use it?
The code below walks through how to do that.
We want to use `ChatGPTPluginRetriever` so we have to get the OpenAI API Key.
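A minimal sketch of wiring that up (the URL and bearer token are placeholders for a locally running `chatgpt-retrieval-plugin` server; older versions import from `langchain.retrievers`):

```
import os

from langchain_community.retrievers import ChatGPTPluginRetriever

os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder

# Assumes the plugin server is running locally and was started with bearer token "foo".
retriever = ChatGPTPluginRetriever(url="http://0.0.0.0:8000", bearer_token="foo")
retriever.get_relevant_documents("alice's phone number")
```

The output below shows the documents the plugin returns for that query.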
```
[Document(page_content="This is Alice's phone number: 123-456-7890", lookup_str='', metadata={'id': '456_0', 'metadata': {'source': 'email', 'source_id': '567', 'url': None, 'created_at': '1609592400.0', 'author': 'Alice', 'document_id': '456'}, 'embedding': None, 'score': 0.925571561}, lookup_index=0), Document(page_content='This is a document about something', lookup_str='', metadata={'id': '123_0', 'metadata': {'source': 'file', 'source_id': 'https://example.com/doc1', 'url': 'https://example.com/doc1', 'created_at': '1609502400.0', 'author': 'Alice', 'document_id': '123'}, 'embedding': None, 'score': 0.6987589}, lookup_index=0), Document(page_content='Team: Angels "Payroll (millions)": 154.49 "Wins": 89', lookup_str='', metadata={'id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631_0', 'metadata': {'source': None, 'source_id': None, 'url': None, 'created_at': None, 'author': None, 'document_id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631'}, 'embedding': None, 'score': 0.697888613}, lookup_index=0)]
```
https://python.langchain.com/docs/integrations/retrievers/dria_index/ | ## Dria
> [Dria](https://dria.co/) is a hub of public RAG models for developers to both contribute and utilize a shared embedding lake. This notebook demonstrates how to use the `Dria API` for data retrieval tasks.
## Installation
Ensure you have the `dria` package installed. You can install it using pip:
```
%pip install --upgrade --quiet dria
```
## Configure API Key
Set up your Dria API key for access.
```
import os

os.environ["DRIA_API_KEY"] = "DRIA_API_KEY"
```
## Initialize Dria Retriever
Create an instance of `DriaRetriever`.
```
from langchain.retrievers import DriaRetriever

api_key = os.getenv("DRIA_API_KEY")
retriever = DriaRetriever(api_key=api_key)
```
## Create Knowledge Base
Create a knowledge base on [Dria’s Knowledge Hub](https://dria.co/knowledge).
```
contract_id = retriever.create_knowledge_base(
    name="France's AI Development",
    embedding=DriaRetriever.models.jina_embeddings_v2_base_en.value,
    category="Artificial Intelligence",
    description="Explore the growth and contributions of France in the field of Artificial Intelligence.",
)
```
## Add Data
Load data into your Dria knowledge base.
```
texts = [
    "The first text to add to Dria.",
    "Another piece of information to store.",
    "More data to include in the Dria knowledge base.",
]
ids = retriever.add_texts(texts)
print("Data added with IDs:", ids)
```
## Retrieve Data
Use the retriever to find relevant documents given a query.
```
query = "Find information about Dria."
result = retriever.get_relevant_documents(query)
for doc in result:
    print(doc)
```
https://python.langchain.com/docs/integrations/retrievers/cohere/ | This notebook covers how to get started with the `Cohere RAG` retriever. This allows you to leverage the ability to search documents over various connectors or by supplying your own.
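The setup isn't captured above; a minimal sketch (a `COHERE_API_KEY` must be set in the environment; newer releases expose both classes from the `langchain_cohere` package) is:

```
from langchain_community.chat_models import ChatCohere
from langchain_community.retrievers import CohereRagRetriever

# By default the retriever answers via Cohere's web-search connector.
rag = CohereRagRetriever(llm=ChatCohere())
docs = rag.get_relevant_documents("What is Cohere?")
for doc in docs:
    print(doc.metadata)
    print(doc.page_content)
    print("-" * 30)
```

The first two blocks below show connector-backed (web search) results; the final block shows a result when documents are supplied directly instead.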
```
{'id': 'web-search_4:0', 'snippet': 'AI startup Cohere, now valued at over $2.1B, raises $270M\n\nKyle Wiggers 4 months\n\nIn a sign that there’s plenty of cash to go around for generative AI startups, Cohere, which is developing an AI model ecosystem for the enterprise, today announced that it raised $270 million as part of its Series C round.\n\nReuters reported earlier in the year that Cohere was in talks to raise “hundreds of millions” of dollars at a valuation of upward of just over $6 billion. If there’s credence to that reporting, Cohere appears to have missed the valuation mark substantially; a source familiar with the matter tells TechCrunch that this tranche values the company at between $2.1 billion and $2.2 billion.', 'title': 'AI startup Cohere, now valued at over $2.1B, raises $270M | TechCrunch', 'url': 'https://techcrunch.com/2023/06/08/ai-startup-cohere-now-valued-at-over-2-1b-raises-270m/'}AI startup Cohere, now valued at over $2.1B, raises $270MKyle Wiggers 4 monthsIn a sign that there’s plenty of cash to go around for generative AI startups, Cohere, which is developing an AI model ecosystem for the enterprise, today announced that it raised $270 million as part of its Series C round.Reuters reported earlier in the year that Cohere was in talks to raise “hundreds of millions” of dollars at a valuation of upward of just over $6 billion. If there’s credence to that reporting, Cohere appears to have missed the valuation mark substantially; a source familiar with the matter tells TechCrunch that this tranche values the company at between $2.1 billion and $2.2 billion.------------------------------{'id': 'web-search_9:0', 'snippet': 'Cohere is a Canadian multinational technology company focused on artificial intelligence for the enterprise, specializing in large language models. Cohere was founded in 2019 by Aidan Gomez, Ivan Zhang, and Nick Frosst, and is headquartered in Toronto and San Francisco, with offices in Palo Alto and London.\n\nIn 2017, a team of researchers at Google Brain, which included Aidan Gomez, published a paper called "Attention is All You Need," which introduced the transformer machine learning architecture, setting state-of-the-art performance on a variety of natural language processing tasks. In 2019, Gomez and Nick Frosst, another researcher at Google Brain, founded Cohere along with Ivan Zhang, with whom Gomez had done research at FOR.ai. All of the co-founders attended University of Toronto.', 'title': 'Cohere - Wikipedia', 'url': 'https://en.wikipedia.org/wiki/Cohere'}Cohere is a Canadian multinational technology company focused on artificial intelligence for the enterprise, specializing in large language models. Cohere was founded in 2019 by Aidan Gomez, Ivan Zhang, and Nick Frosst, and is headquartered in Toronto and San Francisco, with offices in Palo Alto and London.In 2017, a team of researchers at Google Brain, which included Aidan Gomez, published a paper called "Attention is All You Need," which introduced the transformer machine learning architecture, setting state-of-the-art performance on a variety of natural language processing tasks. In 2019, Gomez and Nick Frosst, another researcher at Google Brain, founded Cohere along with Ivan Zhang, with whom Gomez had done research at FOR.ai. 
All of the co-founders attended University of Toronto.------------------------------{'id': 'web-search_8:2', 'snippet': ' Cofounded by Aidan Gomez, a Google Brain alum and coauthor of the seminal transformer research paper, Cohere describes itself as being “on a mission to transform enterprises and their products with AI to unlock a more intuitive way to generate, search, and summarize information than ever before.” One key element of Cohere’s approach is its focus on data protection, deploying its models inside enterprises’ secure data environment.\n\n“We are both independent and cloud-agnostic, meaning we are not beholden to any one tech company and empower enterprises to implement customized AI solutions on the cloud of their choosing, or even on-premises,” says Martin Kon, COO and president of Cohere.', 'title': 'McKinsey and Cohere collaborate to transform clients with enterprise generative AI', 'url': 'https://www.mckinsey.com/about-us/new-at-mckinsey-blog/mckinsey-and-cohere-collaborate-to-transform-clients-with-enterprise-generative-ai'} Cofounded by Aidan Gomez, a Google Brain alum and coauthor of the seminal transformer research paper, Cohere describes itself as being “on a mission to transform enterprises and their products with AI to unlock a more intuitive way to generate, search, and summarize information than ever before.” One key element of Cohere’s approach is its focus on data protection, deploying its models inside enterprises’ secure data environment.“We are both independent and cloud-agnostic, meaning we are not beholden to any one tech company and empower enterprises to implement customized AI solutions on the cloud of their choosing, or even on-premises,” says Martin Kon, COO and president of Cohere.------------------------------
```
```
{'id': 'web-search_9:0', 'snippet': 'Cohere is a Canadian multinational technology company focused on artificial intelligence for the enterprise, specializing in large language models. Cohere was founded in 2019 by Aidan Gomez, Ivan Zhang, and Nick Frosst, and is headquartered in Toronto and San Francisco, with offices in Palo Alto and London.\n\nIn 2017, a team of researchers at Google Brain, which included Aidan Gomez, published a paper called "Attention is All You Need," which introduced the transformer machine learning architecture, setting state-of-the-art performance on a variety of natural language processing tasks. In 2019, Gomez and Nick Frosst, another researcher at Google Brain, founded Cohere along with Ivan Zhang, with whom Gomez had done research at FOR.ai. All of the co-founders attended University of Toronto.', 'title': 'Cohere - Wikipedia', 'url': 'https://en.wikipedia.org/wiki/Cohere'}Cohere is a Canadian multinational technology company focused on artificial intelligence for the enterprise, specializing in large language models. Cohere was founded in 2019 by Aidan Gomez, Ivan Zhang, and Nick Frosst, and is headquartered in Toronto and San Francisco, with offices in Palo Alto and London.In 2017, a team of researchers at Google Brain, which included Aidan Gomez, published a paper called "Attention is All You Need," which introduced the transformer machine learning architecture, setting state-of-the-art performance on a variety of natural language processing tasks. In 2019, Gomez and Nick Frosst, another researcher at Google Brain, founded Cohere along with Ivan Zhang, with whom Gomez had done research at FOR.ai. All of the co-founders attended University of Toronto.------------------------------{'id': 'web-search_8:2', 'snippet': ' Cofounded by Aidan Gomez, a Google Brain alum and coauthor of the seminal transformer research paper, Cohere describes itself as being “on a mission to transform enterprises and their products with AI to unlock a more intuitive way to generate, search, and summarize information than ever before.” One key element of Cohere’s approach is its focus on data protection, deploying its models inside enterprises’ secure data environment.\n\n“We are both independent and cloud-agnostic, meaning we are not beholden to any one tech company and empower enterprises to implement customized AI solutions on the cloud of their choosing, or even on-premises,” says Martin Kon, COO and president of Cohere.', 'title': 'McKinsey and Cohere collaborate to transform clients with enterprise generative AI', 'url': 'https://www.mckinsey.com/about-us/new-at-mckinsey-blog/mckinsey-and-cohere-collaborate-to-transform-clients-with-enterprise-generative-ai'} Cofounded by Aidan Gomez, a Google Brain alum and coauthor of the seminal transformer research paper, Cohere describes itself as being “on a mission to transform enterprises and their products with AI to unlock a more intuitive way to generate, search, and summarize information than ever before.” One key element of Cohere’s approach is its focus on data protection, deploying its models inside enterprises’ secure data environment.“We are both independent and cloud-agnostic, meaning we are not beholden to any one tech company and empower enterprises to implement customized AI solutions on the cloud of their choosing, or even on-premises,” says Martin Kon, COO and president of Cohere.------------------------------{'id': 'web-search_4:0', 'snippet': 'AI startup Cohere, now valued at over $2.1B, raises $270M\n\nKyle Wiggers 4 
months\n\nIn a sign that there’s plenty of cash to go around for generative AI startups, Cohere, which is developing an AI model ecosystem for the enterprise, today announced that it raised $270 million as part of its Series C round.\n\nReuters reported earlier in the year that Cohere was in talks to raise “hundreds of millions” of dollars at a valuation of upward of just over $6 billion. If there’s credence to that reporting, Cohere appears to have missed the valuation mark substantially; a source familiar with the matter tells TechCrunch that this tranche values the company at between $2.1 billion and $2.2 billion.', 'title': 'AI startup Cohere, now valued at over $2.1B, raises $270M | TechCrunch', 'url': 'https://techcrunch.com/2023/06/08/ai-startup-cohere-now-valued-at-over-2-1b-raises-270m/'}AI startup Cohere, now valued at over $2.1B, raises $270MKyle Wiggers 4 monthsIn a sign that there’s plenty of cash to go around for generative AI startups, Cohere, which is developing an AI model ecosystem for the enterprise, today announced that it raised $270 million as part of its Series C round.Reuters reported earlier in the year that Cohere was in talks to raise “hundreds of millions” of dollars at a valuation of upward of just over $6 billion. If there’s credence to that reporting, Cohere appears to have missed the valuation mark substantially; a source familiar with the matter tells TechCrunch that this tranche values the company at between $2.1 billion and $2.2 billion.------------------------------
```
```
{'id': 'doc-0', 'snippet': 'Langchain supports cohere RAG!'}Langchain supports cohere RAG!------------------------------
```
AI startup Cohere, now valued at over $2.1B, raises $270M
Kyle Wiggers 4 months
In a sign that there’s plenty of cash to go around for generative AI startups, Cohere, which is developing an AI model ecosystem for the enterprise, today announced that it raised $270 million as part of its Series C round.
Reuters reported earlier in the year that Cohere was in talks to raise “hundreds of millions” of dollars at a valuation of upward of just over $6 billion. If there’s credence to that reporting, Cohere appears to have missed the valuation mark substantially; a source familiar with the matter tells TechCrunch that this tranche values the company at between $2.1 billion and $2.2 billion.
------------------------------
{'id': 'doc-0', 'snippet': 'Langchain supports cohere RAG!'}
Langchain supports cohere RAG!
------------------------------ |
https://python.langchain.com/docs/integrations/retrievers/elasticsearch_retriever/ | ## Elasticsearch
> [Elasticsearch](https://www.elastic.co/elasticsearch/) is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. It supports keyword search, vector search, hybrid search and complex filtering.
The `ElasticsearchRetriever` is a generic wrapper to enable flexible access to all `Elasticsearch` features through the [Query DSL](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html). For most use cases the other classes (`ElasticsearchStore`, `ElasticsearchEmbeddings`, etc.) should suffice, but if they don’t you can use `ElasticsearchRetriever`.
```
%pip install --upgrade --quiet elasticsearch langchain-elasticsearch
```
```
from typing import Any, Dict, Iterable

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk
from langchain.embeddings import DeterministicFakeEmbedding
from langchain_core.documents import Document
from langchain_core.embeddings import Embeddings
from langchain_elasticsearch import ElasticsearchRetriever
```
## Configure[](#configure "Direct link to Configure")
Here we define the connection to Elasticsearch. In this example we use a locally running instance. Alternatively, you can create an account in [Elastic Cloud](https://cloud.elastic.co/) and start a [free trial](https://www.elastic.co/cloud/cloud-trial-overview).
```
es_url = "http://localhost:9200"
es_client = Elasticsearch(hosts=[es_url])
es_client.info()
```
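If you are connecting to an Elastic Cloud deployment instead of a local instance, a minimal sketch like the following is one way to do it; the `cloud_id` and `api_key` values are placeholders, not real credentials.

```
# A sketch of connecting to Elastic Cloud; replace the placeholders with your own
# deployment's cloud ID and an API key that has access to the target index.
from elasticsearch import Elasticsearch

es_client = Elasticsearch(
    cloud_id="<your-cloud-id>",
    api_key="<your-api-key>",
)
es_client.info()  # quick sanity check that the connection works
```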
For vector search, we are going to use random embeddings just for illustration. For real use cases, pick one of the available LangChain `Embeddings` classes.
```
embeddings = DeterministicFakeEmbedding(size=3)
```
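For a real application you would swap in an actual embedding model. As a hedged example, assuming the `langchain-openai` package is installed and `OPENAI_API_KEY` is set, the swap could look like this; any other LangChain `Embeddings` class works the same way.

```
# Hypothetical swap-in of a real embedding model (assumes langchain-openai is
# installed and OPENAI_API_KEY is set); the model name is an illustrative choice.
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
```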
## Define example data[](#define-example-data "Direct link to Define example data")
```
index_name = "test-langchain-retriever"
text_field = "text"
dense_vector_field = "fake_embedding"
num_characters_field = "num_characters"

texts = [
    "foo",
    "bar",
    "world",
    "hello world",
    "hello",
    "foo bar",
    "bla bla foo",
]
```
## Index data[](#index-data "Direct link to Index data")
Typically, users make use of `ElasticsearchRetriever` when they already have data in an Elasticsearch index. Here we index some example text documents. If you created the index some other way, for example with `ElasticsearchStore.from_documents`, that is also fine.
```
def create_index(
    es_client: Elasticsearch,
    index_name: str,
    text_field: str,
    dense_vector_field: str,
    num_characters_field: str,
):
    es_client.indices.create(
        index=index_name,
        mappings={
            "properties": {
                text_field: {"type": "text"},
                dense_vector_field: {"type": "dense_vector"},
                num_characters_field: {"type": "integer"},
            }
        },
    )


def index_data(
    es_client: Elasticsearch,
    index_name: str,
    text_field: str,
    dense_vector_field: str,
    embeddings: Embeddings,
    texts: Iterable[str],
    refresh: bool = True,
) -> None:
    create_index(
        es_client, index_name, text_field, dense_vector_field, num_characters_field
    )

    vectors = embeddings.embed_documents(list(texts))
    requests = [
        {
            "_op_type": "index",
            "_index": index_name,
            "_id": i,
            text_field: text,
            dense_vector_field: vector,
            num_characters_field: len(text),
        }
        for i, (text, vector) in enumerate(zip(texts, vectors))
    ]
    bulk(es_client, requests)

    if refresh:
        es_client.indices.refresh(index=index_name)

    return len(requests)
```
```
index_data(es_client, index_name, text_field, dense_vector_field, embeddings, texts)
```
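As an aside, the `ElasticsearchStore.from_documents` path mentioned above would look roughly like the sketch below. This is an assumption-laden illustration, not the code used for the rest of this page: the store manages its own mapping, and the index name here is hypothetical.

```
# Rough sketch of indexing via ElasticsearchStore instead of the bulk helper above;
# the index name is illustrative and the store chooses its own field mapping.
from langchain_core.documents import Document
from langchain_elasticsearch import ElasticsearchStore

docs = [Document(page_content=t) for t in texts]
store = ElasticsearchStore.from_documents(
    docs,
    embedding=embeddings,
    index_name="test-langchain-store",  # hypothetical index name
    es_url=es_url,
)
```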
## Usage examples[](#usage-examples "Direct link to Usage examples")
### Vector search[](#vector-search "Direct link to Vector search")
Dense vector retrieval, using fake embeddings in this example.
```
def vector_query(search_query: str) -> Dict:
    vector = embeddings.embed_query(search_query)  # same embeddings as for indexing
    return {
        "knn": {
            "field": dense_vector_field,
            "query_vector": vector,
            "k": 5,
            "num_candidates": 10,
        }
    }


vector_retriever = ElasticsearchRetriever.from_es_params(
    index_name=index_name,
    body_func=vector_query,
    content_field=text_field,
    url=es_url,
)

vector_retriever.get_relevant_documents("foo")
```
```
[Document(page_content='foo', metadata={'_index': 'test-langchain-index', '_id': '0', '_score': 1.0, '_source': {'fake_embedding': [-2.336764233933763, 0.27510289545940503, -0.7957597268194339], 'num_characters': 3}}), Document(page_content='world', metadata={'_index': 'test-langchain-index', '_id': '2', '_score': 0.6770179, '_source': {'fake_embedding': [-0.7041151202179595, -1.4652961969276497, -0.25786766898672847], 'num_characters': 5}}), Document(page_content='hello world', metadata={'_index': 'test-langchain-index', '_id': '3', '_score': 0.4816144, '_source': {'fake_embedding': [0.42728413221815387, -1.1889908285425348, -1.445433230084671], 'num_characters': 11}}), Document(page_content='hello', metadata={'_index': 'test-langchain-index', '_id': '4', '_score': 0.46853775, '_source': {'fake_embedding': [-0.28560441330564046, 0.9958894823084921, 1.5489829880195058], 'num_characters': 5}}), Document(page_content='foo bar', metadata={'_index': 'test-langchain-index', '_id': '5', '_score': 0.2086992, '_source': {'fake_embedding': [0.2533670476638539, 0.08100381646160418, 0.7763644080870179], 'num_characters': 7}})]
```
### BM25[](#bm25 "Direct link to BM25")
Traditional keyword matching.
```
def bm25_query(search_query: str) -> Dict:
    return {
        "query": {
            "match": {
                text_field: search_query,
            },
        },
    }


bm25_retriever = ElasticsearchRetriever.from_es_params(
    index_name=index_name,
    body_func=bm25_query,
    content_field=text_field,
    url=es_url,
)

bm25_retriever.get_relevant_documents("foo")
```
```
[Document(page_content='foo', metadata={'_index': 'test-langchain-index', '_id': '0', '_score': 0.9711467, '_source': {'fake_embedding': [-2.336764233933763, 0.27510289545940503, -0.7957597268194339], 'num_characters': 3}}), Document(page_content='foo bar', metadata={'_index': 'test-langchain-index', '_id': '5', '_score': 0.7437035, '_source': {'fake_embedding': [0.2533670476638539, 0.08100381646160418, 0.7763644080870179], 'num_characters': 7}}), Document(page_content='bla bla foo', metadata={'_index': 'test-langchain-index', '_id': '6', '_score': 0.6025789, '_source': {'fake_embedding': [1.7365927060137358, -0.5230400847844948, 0.7978339724186192], 'num_characters': 11}})]
```
### Hybrid search[](#hybrid-search "Direct link to Hybrid search")
Hybrid search combines vector search and BM25 search, using [Reciprocal Rank Fusion](https://www.elastic.co/guide/en/elasticsearch/reference/current/rrf.html) (RRF) to merge the result sets.
```
def hybrid_query(search_query: str) -> Dict:
    vector = embeddings.embed_query(search_query)  # same embeddings as for indexing
    return {
        "query": {
            "match": {
                text_field: search_query,
            },
        },
        "knn": {
            "field": dense_vector_field,
            "query_vector": vector,
            "k": 5,
            "num_candidates": 10,
        },
        "rank": {"rrf": {}},
    }


hybrid_retriever = ElasticsearchRetriever.from_es_params(
    index_name=index_name,
    body_func=hybrid_query,
    content_field=text_field,
    url=es_url,
)

hybrid_retriever.get_relevant_documents("foo")
```
```
[Document(page_content='foo', metadata={'_index': 'test-langchain-index', '_id': '0', '_score': 0.9711467, '_source': {'fake_embedding': [-2.336764233933763, 0.27510289545940503, -0.7957597268194339], 'num_characters': 3}}), Document(page_content='foo bar', metadata={'_index': 'test-langchain-index', '_id': '5', '_score': 0.7437035, '_source': {'fake_embedding': [0.2533670476638539, 0.08100381646160418, 0.7763644080870179], 'num_characters': 7}}), Document(page_content='bla bla foo', metadata={'_index': 'test-langchain-index', '_id': '6', '_score': 0.6025789, '_source': {'fake_embedding': [1.7365927060137358, -0.5230400847844948, 0.7978339724186192], 'num_characters': 11}})]
```
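For intuition, RRF gives each document a score of `1 / (rank_constant + rank)` in every result list it appears in and sums those contributions. The sketch below illustrates the idea only; Elasticsearch's implementation additionally applies settings such as the rank window size, and the constant 60 is a common default rather than necessarily what your cluster uses.

```
# Illustrative reciprocal rank fusion over ranked lists of document IDs.
def rrf(rankings: list[list[str]], k: int = 60) -> dict[str, float]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # highest fused score first
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))


rrf([["foo", "foo bar", "bla bla foo"], ["foo", "world", "hello world"]])
```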
### Fuzzy matching[](#fuzzy-matching "Direct link to Fuzzy matching")
Keyword matching with typo tolerance.
```
def fuzzy_query(search_query: str) -> Dict:
    return {
        "query": {
            "match": {
                text_field: {
                    "query": search_query,
                    "fuzziness": "AUTO",
                }
            },
        },
    }


fuzzy_retriever = ElasticsearchRetriever.from_es_params(
    index_name=index_name,
    body_func=fuzzy_query,
    content_field=text_field,
    url=es_url,
)

fuzzy_retriever.get_relevant_documents("fox")  # note the character tolerance
```
```
[Document(page_content='foo', metadata={'_index': 'test-langchain-index', '_id': '0', '_score': 0.6474311, '_source': {'fake_embedding': [-2.336764233933763, 0.27510289545940503, -0.7957597268194339], 'num_characters': 3}}), Document(page_content='foo bar', metadata={'_index': 'test-langchain-index', '_id': '5', '_score': 0.49580228, '_source': {'fake_embedding': [0.2533670476638539, 0.08100381646160418, 0.7763644080870179], 'num_characters': 7}}), Document(page_content='bla bla foo', metadata={'_index': 'test-langchain-index', '_id': '6', '_score': 0.40171927, '_source': {'fake_embedding': [1.7365927060137358, -0.5230400847844948, 0.7978339724186192], 'num_characters': 11}})]
```
### Complex filtering[](#complex-filtering "Direct link to Complex filtering")
Combination of filters on different fields.
```
def filter_query_func(search_query: str) -> Dict:
    return {
        "query": {
            "bool": {
                "must": [
                    {"range": {num_characters_field: {"gte": 5}}},
                ],
                "must_not": [
                    {"prefix": {text_field: "bla"}},
                ],
                "should": [
                    {"match": {text_field: search_query}},
                ],
            }
        }
    }


filtering_retriever = ElasticsearchRetriever.from_es_params(
    index_name=index_name,
    body_func=filter_query_func,
    content_field=text_field,
    url=es_url,
)

filtering_retriever.get_relevant_documents("foo")
```
```
[Document(page_content='foo bar', metadata={'_index': 'test-langchain-index', '_id': '5', '_score': 1.7437035, '_source': {'fake_embedding': [0.2533670476638539, 0.08100381646160418, 0.7763644080870179], 'num_characters': 7}}), Document(page_content='world', metadata={'_index': 'test-langchain-index', '_id': '2', '_score': 1.0, '_source': {'fake_embedding': [-0.7041151202179595, -1.4652961969276497, -0.25786766898672847], 'num_characters': 5}}), Document(page_content='hello world', metadata={'_index': 'test-langchain-index', '_id': '3', '_score': 1.0, '_source': {'fake_embedding': [0.42728413221815387, -1.1889908285425348, -1.445433230084671], 'num_characters': 11}}), Document(page_content='hello', metadata={'_index': 'test-langchain-index', '_id': '4', '_score': 1.0, '_source': {'fake_embedding': [-0.28560441330564046, 0.9958894823084921, 1.5489829880195058], 'num_characters': 5}})]
```
Note that the query match is ranked on top. The other documents that passed the filter are also in the result set, but they all have the same score.
### Custom document mapper[](#custom-document-mapper "Direct link to Custom document mapper")
It is possible to customize the function that maps an Elasticsearch result (hit) to a LangChain document.
```
def num_characters_mapper(hit: Dict[str, Any]) -> Document:
    num_chars = hit["_source"][num_characters_field]
    content = hit["_source"][text_field]
    return Document(
        page_content=f"This document has {num_chars} characters",
        metadata={"text_content": content},
    )


custom_mapped_retriever = ElasticsearchRetriever.from_es_params(
    index_name=index_name,
    body_func=filter_query_func,
    document_mapper=num_characters_mapper,
    url=es_url,
)

custom_mapped_retriever.get_relevant_documents("foo")
```
```
[Document(page_content='This document has 7 characters', metadata={'text_content': 'foo bar'}), Document(page_content='This document has 5 characters', metadata={'text_content': 'world'}), Document(page_content='This document has 11 characters', metadata={'text_content': 'hello world'}), Document(page_content='This document has 5 characters', metadata={'text_content': 'hello'})]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:07.001Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/elasticsearch_retriever/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/elasticsearch_retriever/",
"description": "Elasticsearch is a",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "7384",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"elasticsearch_retriever\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:06 GMT",
"etag": "W/\"554f1809d1e09f98bfeafd270e0e77b1\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::nps6w-1713753726927-4f73119c96a2"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/elasticsearch_retriever/",
"property": "og:url"
},
{
"content": "Elasticsearch | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Elasticsearch is a",
"property": "og:description"
}
],
"title": "Elasticsearch | 🦜️🔗 LangChain"
} | Elasticsearch
Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. It supports keyword search, vector search, hybrid search and complex filtering.
The ElasticsearchRetriever is a generic wrapper to enable flexible access to all Elasticsearch features through the Query DSL. For most use cases the other classes (ElasticsearchStore, ElasticsearchEmbeddings, etc.) should suffice, but if they don’t you can use ElasticsearchRetriever.
%pip install --upgrade --quiet elasticsearch langchain-elasticsearch
from typing import Any, Dict, Iterable
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk
from langchain.embeddings import DeterministicFakeEmbedding
from langchain_core.documents import Document
from langchain_core.embeddings import Embeddings
from langchain_elasticsearch import ElasticsearchRetriever
Configure
Here we define the connection to Elasticsearch. In this example we use a locally running instance. Alternatively, you can create an account in Elastic Cloud and start a free trial.
es_url = "http://localhost:9200"
es_client = Elasticsearch(hosts=[es_url])
es_client.info()
For vector search, we are going to use random embeddings just for illustration. For real use cases, pick one of the available LangChain Embeddings classes.
embeddings = DeterministicFakeEmbedding(size=3)
Define example data
index_name = "test-langchain-retriever"
text_field = "text"
dense_vector_field = "fake_embedding"
num_characters_field = "num_characters"
texts = [
"foo",
"bar",
"world",
"hello world",
"hello",
"foo bar",
"bla bla foo",
]
Index data
Typically, users make use of ElasticsearchRetriever when they already have data in an Elasticsearch index. Here we index some example text documents. If you created the index some other way, for example with ElasticsearchStore.from_documents, that is also fine.
def create_index(
es_client: Elasticsearch,
index_name: str,
text_field: str,
dense_vector_field: str,
num_characters_field: str,
):
es_client.indices.create(
index=index_name,
mappings={
"properties": {
text_field: {"type": "text"},
dense_vector_field: {"type": "dense_vector"},
num_characters_field: {"type": "integer"},
}
},
)
def index_data(
es_client: Elasticsearch,
index_name: str,
text_field: str,
dense_vector_field: str,
embeddings: Embeddings,
texts: Iterable[str],
refresh: bool = True,
) -> None:
create_index(
es_client, index_name, text_field, dense_vector_field, num_characters_field
)
vectors = embeddings.embed_documents(list(texts))
requests = [
{
"_op_type": "index",
"_index": index_name,
"_id": i,
text_field: text,
dense_vector_field: vector,
num_characters_field: len(text),
}
for i, (text, vector) in enumerate(zip(texts, vectors))
]
bulk(es_client, requests)
if refresh:
es_client.indices.refresh(index=index_name)
return len(requests)
index_data(es_client, index_name, text_field, dense_vector_field, embeddings, texts)
Usage examples
Vector search
Dense vector retrieval, using fake embeddings in this example.
def vector_query(search_query: str) -> Dict:
vector = embeddings.embed_query(search_query) # same embeddings as for indexing
return {
"knn": {
"field": dense_vector_field,
"query_vector": vector,
"k": 5,
"num_candidates": 10,
}
}
vector_retriever = ElasticsearchRetriever.from_es_params(
index_name=index_name,
body_func=vector_query,
content_field=text_field,
url=es_url,
)
vector_retriever.get_relevant_documents("foo")
[Document(page_content='foo', metadata={'_index': 'test-langchain-index', '_id': '0', '_score': 1.0, '_source': {'fake_embedding': [-2.336764233933763, 0.27510289545940503, -0.7957597268194339], 'num_characters': 3}}),
Document(page_content='world', metadata={'_index': 'test-langchain-index', '_id': '2', '_score': 0.6770179, '_source': {'fake_embedding': [-0.7041151202179595, -1.4652961969276497, -0.25786766898672847], 'num_characters': 5}}),
Document(page_content='hello world', metadata={'_index': 'test-langchain-index', '_id': '3', '_score': 0.4816144, '_source': {'fake_embedding': [0.42728413221815387, -1.1889908285425348, -1.445433230084671], 'num_characters': 11}}),
Document(page_content='hello', metadata={'_index': 'test-langchain-index', '_id': '4', '_score': 0.46853775, '_source': {'fake_embedding': [-0.28560441330564046, 0.9958894823084921, 1.5489829880195058], 'num_characters': 5}}),
Document(page_content='foo bar', metadata={'_index': 'test-langchain-index', '_id': '5', '_score': 0.2086992, '_source': {'fake_embedding': [0.2533670476638539, 0.08100381646160418, 0.7763644080870179], 'num_characters': 7}})]
BM25
Traditional keyword matching.
def bm25_query(search_query: str) -> Dict:
return {
"query": {
"match": {
text_field: search_query,
},
},
}
bm25_retriever = ElasticsearchRetriever.from_es_params(
index_name=index_name,
body_func=bm25_query,
content_field=text_field,
url=es_url,
)
bm25_retriever.get_relevant_documents("foo")
[Document(page_content='foo', metadata={'_index': 'test-langchain-index', '_id': '0', '_score': 0.9711467, '_source': {'fake_embedding': [-2.336764233933763, 0.27510289545940503, -0.7957597268194339], 'num_characters': 3}}),
Document(page_content='foo bar', metadata={'_index': 'test-langchain-index', '_id': '5', '_score': 0.7437035, '_source': {'fake_embedding': [0.2533670476638539, 0.08100381646160418, 0.7763644080870179], 'num_characters': 7}}),
Document(page_content='bla bla foo', metadata={'_index': 'test-langchain-index', '_id': '6', '_score': 0.6025789, '_source': {'fake_embedding': [1.7365927060137358, -0.5230400847844948, 0.7978339724186192], 'num_characters': 11}})]
Hybrid search
Hybrid search combines vector search and BM25 search, using Reciprocal Rank Fusion (RRF) to merge the result sets.
def hybrid_query(search_query: str) -> Dict:
vector = embeddings.embed_query(search_query) # same embeddings as for indexing
return {
"query": {
"match": {
text_field: search_query,
},
},
"knn": {
"field": dense_vector_field,
"query_vector": vector,
"k": 5,
"num_candidates": 10,
},
"rank": {"rrf": {}},
}
hybrid_retriever = ElasticsearchRetriever.from_es_params(
index_name=index_name,
body_func=hybrid_query,
content_field=text_field,
url=es_url,
)
hybrid_retriever.get_relevant_documents("foo")
[Document(page_content='foo', metadata={'_index': 'test-langchain-index', '_id': '0', '_score': 0.9711467, '_source': {'fake_embedding': [-2.336764233933763, 0.27510289545940503, -0.7957597268194339], 'num_characters': 3}}),
Document(page_content='foo bar', metadata={'_index': 'test-langchain-index', '_id': '5', '_score': 0.7437035, '_source': {'fake_embedding': [0.2533670476638539, 0.08100381646160418, 0.7763644080870179], 'num_characters': 7}}),
Document(page_content='bla bla foo', metadata={'_index': 'test-langchain-index', '_id': '6', '_score': 0.6025789, '_source': {'fake_embedding': [1.7365927060137358, -0.5230400847844948, 0.7978339724186192], 'num_characters': 11}})]
Fuzzy matching
Keyword matching with typo tolerance.
def fuzzy_query(search_query: str) -> Dict:
return {
"query": {
"match": {
text_field: {
"query": search_query,
"fuzziness": "AUTO",
}
},
},
}
fuzzy_retriever = ElasticsearchRetriever.from_es_params(
index_name=index_name,
body_func=fuzzy_query,
content_field=text_field,
url=es_url,
)
fuzzy_retriever.get_relevant_documents("fox")  # note the character tolerance
[Document(page_content='foo', metadata={'_index': 'test-langchain-index', '_id': '0', '_score': 0.6474311, '_source': {'fake_embedding': [-2.336764233933763, 0.27510289545940503, -0.7957597268194339], 'num_characters': 3}}),
Document(page_content='foo bar', metadata={'_index': 'test-langchain-index', '_id': '5', '_score': 0.49580228, '_source': {'fake_embedding': [0.2533670476638539, 0.08100381646160418, 0.7763644080870179], 'num_characters': 7}}),
Document(page_content='bla bla foo', metadata={'_index': 'test-langchain-index', '_id': '6', '_score': 0.40171927, '_source': {'fake_embedding': [1.7365927060137358, -0.5230400847844948, 0.7978339724186192], 'num_characters': 11}})]
Complex filtering
Combination of filters on different fields.
def filter_query_func(search_query: str) -> Dict:
return {
"query": {
"bool": {
"must": [
{"range": {num_characters_field: {"gte": 5}}},
],
"must_not": [
{"prefix": {text_field: "bla"}},
],
"should": [
{"match": {text_field: search_query}},
],
}
}
}
filtering_retriever = ElasticsearchRetriever.from_es_params(
index_name=index_name,
body_func=filter_query_func,
content_field=text_field,
url=es_url,
)
filtering_retriever.get_relevant_documents("foo")
[Document(page_content='foo bar', metadata={'_index': 'test-langchain-index', '_id': '5', '_score': 1.7437035, '_source': {'fake_embedding': [0.2533670476638539, 0.08100381646160418, 0.7763644080870179], 'num_characters': 7}}),
Document(page_content='world', metadata={'_index': 'test-langchain-index', '_id': '2', '_score': 1.0, '_source': {'fake_embedding': [-0.7041151202179595, -1.4652961969276497, -0.25786766898672847], 'num_characters': 5}}),
Document(page_content='hello world', metadata={'_index': 'test-langchain-index', '_id': '3', '_score': 1.0, '_source': {'fake_embedding': [0.42728413221815387, -1.1889908285425348, -1.445433230084671], 'num_characters': 11}}),
Document(page_content='hello', metadata={'_index': 'test-langchain-index', '_id': '4', '_score': 1.0, '_source': {'fake_embedding': [-0.28560441330564046, 0.9958894823084921, 1.5489829880195058], 'num_characters': 5}})]
Note that the query match is ranked on top. The other documents that passed the filter are also in the result set, but they all have the same score.
Custom document mapper
It is possible to customize the function that maps an Elasticsearch result (hit) to a LangChain document.
def num_characters_mapper(hit: Dict[str, Any]) -> Document:
num_chars = hit["_source"][num_characters_field]
content = hit["_source"][text_field]
return Document(
page_content=f"This document has {num_chars} characters",
metadata={"text_content": content},
)
custom_mapped_retriever = ElasticsearchRetriever.from_es_params(
index_name=index_name,
body_func=filter_query_func,
document_mapper=num_characters_mapper,
url=es_url,
)
custom_mapped_retriever.get_relevant_documents("foo")
[Document(page_content='This document has 7 characters', metadata={'text_content': 'foo bar'}),
Document(page_content='This document has 5 characters', metadata={'text_content': 'world'}),
Document(page_content='This document has 11 characters', metadata={'text_content': 'hello world'}),
Document(page_content='This document has 5 characters', metadata={'text_content': 'hello'})] |
https://python.langchain.com/docs/integrations/retrievers/elastic_search_bm25/ | ## ElasticSearch BM25
> [Elasticsearch](https://www.elastic.co/elasticsearch/) is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
> In information retrieval, [Okapi BM25](https://en.wikipedia.org/wiki/Okapi_BM25) (BM is an abbreviation of best matching) is a ranking function used by search engines to estimate the relevance of documents to a given search query. It is based on the probabilistic retrieval framework developed in the 1970s and 1980s by Stephen E. Robertson, Karen Spärck Jones, and others.
> The name of the actual ranking function is BM25. The fuller name, Okapi BM25, includes the name of the first system to use it, which was the Okapi information retrieval system, implemented at London’s City University in the 1980s and 1990s. BM25 and its newer variants, e.g. BM25F (a version of BM25 that can take document structure and anchor text into account), represent TF-IDF-like retrieval functions used in document retrieval.
This notebook shows how to use a retriever that uses `ElasticSearch` and `BM25`.
For more information on the details of BM25 see [this blog post](https://www.elastic.co/blog/practical-bm25-part-2-the-bm25-algorithm-and-its-variables).
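For a rough sense of what the ranking function computes, the per-term BM25 score can be sketched as below. This is the standard textbook form with the usual `k1` and `b` parameters; real engines such as Elasticsearch/Lucene differ in details, so treat it as an illustration rather than the exact implementation.

```
import math


def bm25_term_score(
    term_freq: float,
    doc_len: float,
    avg_doc_len: float,
    doc_count: int,
    docs_with_term: int,
    k1: float = 1.2,
    b: float = 0.75,
) -> float:
    # Textbook BM25 for a single query term; engines tweak the IDF smoothing.
    idf = math.log(1 + (doc_count - docs_with_term + 0.5) / (docs_with_term + 0.5))
    tf_norm = (term_freq * (k1 + 1)) / (
        term_freq + k1 * (1 - b + b * doc_len / avg_doc_len)
    )
    return idf * tf_norm
```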
```
%pip install --upgrade --quiet elasticsearch
```
```
from langchain_community.retrievers import (
    ElasticSearchBM25Retriever,
)
```
## Create New Retriever[](#create-new-retriever "Direct link to Create New Retriever")
```
elasticsearch_url = "http://localhost:9200"
retriever = ElasticSearchBM25Retriever.create(elasticsearch_url, "langchain-index-4")
```
```
# Alternatively, you can load an existing index
# import elasticsearch
# elasticsearch_url = "http://localhost:9200"
# retriever = ElasticSearchBM25Retriever(elasticsearch.Elasticsearch(elasticsearch_url), "langchain-index")
```
## Add texts (if necessary)[](#add-texts-if-necessary "Direct link to Add texts (if necessary)")
We can optionally add texts to the retriever (if they aren’t already in there)
```
retriever.add_texts(["foo", "bar", "world", "hello", "foo bar"])
```
```
['cbd4cb47-8d9f-4f34-b80e-ea871bc49856', 'f3bd2e24-76d1-4f9b-826b-ec4c0e8c7365', '8631bfc8-7c12-48ee-ab56-8ad5f373676e', '8be8374c-3253-4d87-928d-d73550a2ecf0', 'd79f457b-2842-4eab-ae10-77aa420b53d7']
```
## Use Retriever[](#use-retriever "Direct link to Use Retriever")
We can now use the retriever!
```
result = retriever.get_relevant_documents("foo")
```
```
[Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={})]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:07.608Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/elastic_search_bm25/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/elastic_search_bm25/",
"description": "Elasticsearch is a",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3576",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"elastic_search_bm25\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:06 GMT",
"etag": "W/\"69e249501b8e7584c3e24e3eaeac6996\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::lrtsn-1713753726959-a596ecee622c"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/elastic_search_bm25/",
"property": "og:url"
},
{
"content": "ElasticSearch BM25 | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Elasticsearch is a",
"property": "og:description"
}
],
"title": "ElasticSearch BM25 | 🦜️🔗 LangChain"
} | ElasticSearch BM25
Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
In information retrieval, Okapi BM25 (BM is an abbreviation of best matching) is a ranking function used by search engines to estimate the relevance of documents to a given search query. It is based on the probabilistic retrieval framework developed in the 1970s and 1980s by Stephen E. Robertson, Karen Spärck Jones, and others.
The name of the actual ranking function is BM25. The fuller name, Okapi BM25, includes the name of the first system to use it, which was the Okapi information retrieval system, implemented at London’s City University in the 1980s and 1990s. BM25 and its newer variants, e.g. BM25F (a version of BM25 that can take document structure and anchor text into account), represent TF-IDF-like retrieval functions used in document retrieval.
This notebook shows how to use a retriever that uses ElasticSearch and BM25.
For more information on the details of BM25 see this blog post.
%pip install --upgrade --quiet elasticsearch
from langchain_community.retrievers import (
ElasticSearchBM25Retriever,
)
Create New Retriever
elasticsearch_url = "http://localhost:9200"
retriever = ElasticSearchBM25Retriever.create(elasticsearch_url, "langchain-index-4")
# Alternatively, you can load an existing index
# import elasticsearch
# elasticsearch_url="http://localhost:9200"
# retriever = ElasticSearchBM25Retriever(elasticsearch.Elasticsearch(elasticsearch_url), "langchain-index")
Add texts (if necessary)
We can optionally add texts to the retriever (if they aren’t already in there)
retriever.add_texts(["foo", "bar", "world", "hello", "foo bar"])
['cbd4cb47-8d9f-4f34-b80e-ea871bc49856',
'f3bd2e24-76d1-4f9b-826b-ec4c0e8c7365',
'8631bfc8-7c12-48ee-ab56-8ad5f373676e',
'8be8374c-3253-4d87-928d-d73550a2ecf0',
'd79f457b-2842-4eab-ae10-77aa420b53d7']
Use Retriever
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={})] |
https://python.langchain.com/docs/integrations/retrievers/flashrank-reranker/ | Let’s start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs.
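The setup code is not reproduced on this page; one hedged way to build such a retriever is sketched below. The file name, chunk sizes, and the FAISS/OpenAI choices are assumptions rather than the page's original code.

```
# Sketch of a base retriever over the speech, chunked and embedded; file name,
# splitter settings, and the FAISS/OpenAI choices are illustrative assumptions.
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

documents = TextLoader("state_of_the_union.txt").load()
texts = RecursiveCharacterTextSplitter(
    chunk_size=500, chunk_overlap=100
).split_documents(documents)
retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever(
    search_kwargs={"k": 20}  # retrieve a high number of docs, as described above
)
```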
```
Document 1:One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.----------------------------------------------------------------------------------------------------Document 2:As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.----------------------------------------------------------------------------------------------------Document 3:A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.----------------------------------------------------------------------------------------------------Document 4:He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.----------------------------------------------------------------------------------------------------Document 5:But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. Vice President Harris and I ran for office with a new economic vision for America. Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up and the middle out, not from the top down.----------------------------------------------------------------------------------------------------Document 6:And tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. By the end of this year, the deficit will be down to less than half what it was before I took office. The only president ever to cut the deficit by more than one trillion dollars in a single year. Lowering your costs also means demanding more competition. I’m a capitalist, but capitalism without competition isn’t capitalism. It’s exploitation—and it drives up prices.----------------------------------------------------------------------------------------------------Document 7:I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. 
I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. So let’s not abandon our streets. Or choose between safety and equal justice.----------------------------------------------------------------------------------------------------Document 8:As I’ve told Xi Jinping, it is never a good bet to bet against the American people. We’ll create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. And we’ll do it all to withstand the devastating effects of the climate crisis and promote environmental justice.----------------------------------------------------------------------------------------------------Document 9:Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny.----------------------------------------------------------------------------------------------------Document 10:As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.” It’s time. But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. Inflation is robbing them of the gains they might otherwise feel. I get it. That’s why my top priority is getting prices under control.----------------------------------------------------------------------------------------------------Document 11:I’m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve. And fourth, let’s end cancer as we know it. This is personal to me and Jill, to Kamala, and to so many of you. Cancer is the #2 cause of death in America–second only to heart disease.----------------------------------------------------------------------------------------------------Document 12:Headaches. Numbness. Dizziness. A cancer that would put them in a flag-draped coffin. I know. One of those soldiers was my son Major Beau Biden. We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. But I’m committed to finding out everything we can. Committed to military families like Danielle Robinson from Ohio. The widow of Sergeant First Class Heath Robinson.----------------------------------------------------------------------------------------------------Document 13:He will never extinguish their love of freedom. He will never weaken the resolve of the free world. We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. The pandemic has been punishing. And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. 
I understand.----------------------------------------------------------------------------------------------------Document 14:When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we haven’t done in a long time: build a better America. For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation. And I know you’re tired, frustrated, and exhausted. But I also know this.----------------------------------------------------------------------------------------------------Document 15:And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid epidemic. There is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery.----------------------------------------------------------------------------------------------------Document 16:My plan to fight inflation will lower your costs and lower the deficit. 17 Nobel laureates in economics say my plan will ease long-term inflationary pressures. Top business leaders and most Americans support my plan. And here’s the plan: First – cut the cost of prescription drugs. Just look at insulin. One in ten Americans has diabetes. In Virginia, I met a 13-year-old boy named Joshua Davis.----------------------------------------------------------------------------------------------------Document 17:My plan will not only lower costs to give families a fair shot, it will lower the deficit. The previous Administration not only ballooned the deficit with tax cuts for the very wealthy and corporations, it undermined the watchdogs whose job was to keep pandemic relief funds from being wasted. But in my administration, the watchdogs have been welcomed back. We’re going after the criminals who stole billions in relief money meant for small businesses and millions of Americans.----------------------------------------------------------------------------------------------------Document 18:So let’s not abandon our streets. Or choose between safety and equal justice. Let’s come together to protect our communities, restore trust, and hold law enforcement accountable. That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.----------------------------------------------------------------------------------------------------Document 19:I understand. I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. That’s why one of the first things I did as President was fight to pass the American Rescue Plan. Because people were hurting. We needed to act, and we did. Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.----------------------------------------------------------------------------------------------------Document 20:And we will, as one people. One America. The United States of America. May God bless you all. May God protect our troops.
```
Now let’s wrap our base retriever with a `ContextualCompressionRetriever`, using `FlashrankRerank` as a compressor.
After reranking, the top 3 documents are different from the top 3 documents retrieved by the base retriever.
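The wrapping code itself is not shown here; a sketch of the usual pattern, assuming the `flashrank` package is installed and `retriever` is the base retriever from above (the exact import path can vary between LangChain versions):

```
# Sketch of wrapping the base retriever with a FlashRank compressor.
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import FlashrankRerank

compressor = FlashrankRerank(top_n=3)  # keep the three highest-ranked documents
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever
)
compressed_docs = compression_retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)
```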
```
Document 1:One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.----------------------------------------------------------------------------------------------------Document 2:He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.----------------------------------------------------------------------------------------------------Document 3:And tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. By the end of this year, the deficit will be down to less than half what it was before I took office. The only president ever to cut the deficit by more than one trillion dollars in a single year. Lowering your costs also means demanding more competition. I’m a capitalist, but capitalism without competition isn’t capitalism. It’s exploitation—and it drives up prices.
```
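The query/result pair below comes from running a QA chain over the reranking retriever. One hedged way to set that up is sketched here; the model choice is an assumption, not necessarily what produced the output shown.

```
# Sketch of a QA chain over the reranking retriever; ChatOpenAI is an assumed
# model choice, not necessarily the one used to produce the output below.
from langchain.chains import RetrievalQA
from langchain_openai import ChatOpenAI

chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0), retriever=compression_retriever
)
chain.invoke("What did the president say about Ketanji Brown Jackson")
```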
```
{'query': 'What did the president say about Ketanji Brown Jackson', 'result': "The President mentioned that Ketanji Brown Jackson is one of the nation's top legal minds and will continue Justice Breyer's legacy of excellence."}
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:07.722Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/flashrank-reranker/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/flashrank-reranker/",
"description": "FlashRank is the",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3575",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"flashrank-reranker\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:07 GMT",
"etag": "W/\"2d8bfaf9694b062517bb479a7e6293cd\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::c2w6b-1713753726990-c796ead590c1"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/flashrank-reranker/",
"property": "og:url"
},
{
"content": "FlashRank reranker | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "FlashRank is the",
"property": "og:description"
}
],
"title": "FlashRank reranker | 🦜️🔗 LangChain"
} | Let’s start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs.
Document 1:
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
----------------------------------------------------------------------------------------------------
Document 2:
As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.
While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.
----------------------------------------------------------------------------------------------------
Document 3:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
----------------------------------------------------------------------------------------------------
Document 4:
He met the Ukrainian people.
From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.
In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.
----------------------------------------------------------------------------------------------------
Document 5:
But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century.
Vice President Harris and I ran for office with a new economic vision for America.
Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up
and the middle out, not from the top down.
----------------------------------------------------------------------------------------------------
Document 6:
And tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud.
By the end of this year, the deficit will be down to less than half what it was before I took office.
The only president ever to cut the deficit by more than one trillion dollars in a single year.
Lowering your costs also means demanding more competition.
I’m a capitalist, but capitalism without competition isn’t capitalism.
It’s exploitation—and it drives up prices.
----------------------------------------------------------------------------------------------------
Document 7:
I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.
I’ve worked on these issues a long time.
I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
So let’s not abandon our streets. Or choose between safety and equal justice.
----------------------------------------------------------------------------------------------------
Document 8:
As I’ve told Xi Jinping, it is never a good bet to bet against the American people.
We’ll create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America.
And we’ll do it all to withstand the devastating effects of the climate crisis and promote environmental justice.
----------------------------------------------------------------------------------------------------
Document 9:
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.
Last year COVID-19 kept us apart. This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
And with an unwavering resolve that freedom will always triumph over tyranny.
----------------------------------------------------------------------------------------------------
Document 10:
As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.”
It’s time.
But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills.
Inflation is robbing them of the gains they might otherwise feel.
I get it. That’s why my top priority is getting prices under control.
----------------------------------------------------------------------------------------------------
Document 11:
I’m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve.
And fourth, let’s end cancer as we know it.
This is personal to me and Jill, to Kamala, and to so many of you.
Cancer is the #2 cause of death in America–second only to heart disease.
----------------------------------------------------------------------------------------------------
Document 12:
Headaches. Numbness. Dizziness.
A cancer that would put them in a flag-draped coffin.
I know.
One of those soldiers was my son Major Beau Biden.
We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops.
But I’m committed to finding out everything we can.
Committed to military families like Danielle Robinson from Ohio.
The widow of Sergeant First Class Heath Robinson.
----------------------------------------------------------------------------------------------------
Document 13:
He will never extinguish their love of freedom. He will never weaken the resolve of the free world.
We meet tonight in an America that has lived through two of the hardest years this nation has ever faced.
The pandemic has been punishing.
And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more.
I understand.
----------------------------------------------------------------------------------------------------
Document 14:
When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we haven’t done in a long time: build a better America.
For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation.
And I know you’re tired, frustrated, and exhausted.
But I also know this.
----------------------------------------------------------------------------------------------------
Document 15:
And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things.
So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together.
First, beat the opioid epidemic.
There is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery.
----------------------------------------------------------------------------------------------------
Document 16:
My plan to fight inflation will lower your costs and lower the deficit.
17 Nobel laureates in economics say my plan will ease long-term inflationary pressures. Top business leaders and most Americans support my plan. And here’s the plan:
First – cut the cost of prescription drugs. Just look at insulin. One in ten Americans has diabetes. In Virginia, I met a 13-year-old boy named Joshua Davis.
----------------------------------------------------------------------------------------------------
Document 17:
My plan will not only lower costs to give families a fair shot, it will lower the deficit.
The previous Administration not only ballooned the deficit with tax cuts for the very wealthy and corporations, it undermined the watchdogs whose job was to keep pandemic relief funds from being wasted.
But in my administration, the watchdogs have been welcomed back.
We’re going after the criminals who stole billions in relief money meant for small businesses and millions of Americans.
----------------------------------------------------------------------------------------------------
Document 18:
So let’s not abandon our streets. Or choose between safety and equal justice.
Let’s come together to protect our communities, restore trust, and hold law enforcement accountable.
That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.
----------------------------------------------------------------------------------------------------
Document 19:
I understand.
I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it.
That’s why one of the first things I did as President was fight to pass the American Rescue Plan.
Because people were hurting. We needed to act, and we did.
Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.
----------------------------------------------------------------------------------------------------
Document 20:
And we will, as one people.
One America.
The United States of America.
May God bless you all. May God protect our troops.
Now let’s wrap our base retriever with a ContextualCompressionRetriever, using FlashrankRerank as a compressor.
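A minimal sketch of that step, assuming the base retriever and the pretty_print_docs helper defined earlier in the notebook:
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import FlashrankRerank
# FlashrankRerank re-scores the retrieved documents and keeps the most relevant ones
compressor = FlashrankRerank()
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever
)
# the question is illustrative; rerun the search through the compression retriever
compressed_docs = compression_retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)
pretty_print_docs(compressed_docs)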
After reranking, the top 3 documents are different from the top 3 documents retrieved by the base retriever.
Document 1:
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
----------------------------------------------------------------------------------------------------
Document 2:
He met the Ukrainian people.
From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.
In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.
----------------------------------------------------------------------------------------------------
Document 3:
And tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud.
By the end of this year, the deficit will be down to less than half what it was before I took office.
The only president ever to cut the deficit by more than one trillion dollars in a single year.
Lowering your costs also means demanding more competition.
I’m a capitalist, but capitalism without competition isn’t capitalism.
It’s exploitation—and it drives up prices.
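The question-answering result below presumably comes from a RetrievalQA chain built on the compression retriever; a hedged sketch, assuming an llm defined earlier in the notebook:
from langchain.chains import RetrievalQA
# wire the reranked retriever into a simple QA chain
chain = RetrievalQA.from_chain_type(llm=llm, retriever=compression_retriever)
chain.invoke("What did the president say about Ketanji Brown Jackson")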
{'query': 'What did the president say about Ketanji Brown Jackson',
'result': "The President mentioned that Ketanji Brown Jackson is one of the nation's top legal minds and will continue Justice Breyer's legacy of excellence."} |
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever/ | ## DocArray
> [DocArray](https://github.com/docarray/docarray) is a versatile, open-source tool for managing your multi-modal data. It lets you shape your data however you want, and offers the flexibility to store and search it using various document index backends. Plus, it gets even better - you can utilize your `DocArray` document index to create a `DocArrayRetriever`, and build awesome Langchain apps!
This notebook is split into two sections. The [first section](#document-index-backends) offers an introduction to all five supported document index backends. It provides guidance on setting up and indexing each backend and also instructs you on how to build a `DocArrayRetriever` for finding relevant documents. In the [second section](#movie-retrieval-using-hnswdocumentindex), we’ll select one of these backends and illustrate how to use it through a basic example.
## Document Index Backends[](#document-index-backends "Direct link to Document Index Backends")
```
import random

from docarray import BaseDoc
from docarray.typing import NdArray
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.retrievers import DocArrayRetriever

embeddings = FakeEmbeddings(size=32)
```
Before you start building the index, it’s important to define your document schema. This determines what fields your documents will have and what type of data each field will hold.
For this demonstration, we’ll create a somewhat random schema containing ‘title’ (str), ‘title\_embedding’ (numpy array), ‘year’ (int), and ‘color’ (str)
```
class MyDoc(BaseDoc):
    title: str
    title_embedding: NdArray[32]
    year: int
    color: str
```
### InMemoryExactNNIndex[](#inmemoryexactnnindex "Direct link to InMemoryExactNNIndex")
`InMemoryExactNNIndex` stores all Documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server.
Learn more here: [https://docs.docarray.org/user\_guide/storing/index\_in\_memory/](https://docs.docarray.org/user_guide/storing/index_in_memory/)
```
from docarray.index import InMemoryExactNNIndex

# initialize the index
db = InMemoryExactNNIndex[MyDoc]()
# index data
db.index(
    [
        MyDoc(
            title=f"My document {i}",
            title_embedding=embeddings.embed_query(f"query {i}"),
            year=i,
            color=random.choice(["red", "green", "blue"]),
        )
        for i in range(100)
    ]
)
# optionally, you can create a filter query
filter_query = {"year": {"$lte": 90}}
```
```
# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="title_embedding",
    content_field="title",
    filters=filter_query,
)

# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
```
```
[Document(page_content='My document 56', metadata={'id': '1f33e58b6468ab722f3786b96b20afe6', 'year': 56, 'color': 'red'})]
```
### HnswDocumentIndex[](#hnswdocumentindex "Direct link to HnswDocumentIndex")
`HnswDocumentIndex` is a lightweight Document Index implementation that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in [hnswlib](https://github.com/nmslib/hnswlib), and stores all other data in [SQLite](https://www.sqlite.org/index.html).
Learn more here: [https://docs.docarray.org/user\_guide/storing/index\_hnswlib/](https://docs.docarray.org/user_guide/storing/index_hnswlib/)
```
from docarray.index import HnswDocumentIndex

# initialize the index
db = HnswDocumentIndex[MyDoc](work_dir="hnsw_index")
# index data
db.index(
    [
        MyDoc(
            title=f"My document {i}",
            title_embedding=embeddings.embed_query(f"query {i}"),
            year=i,
            color=random.choice(["red", "green", "blue"]),
        )
        for i in range(100)
    ]
)
# optionally, you can create a filter query
filter_query = {"year": {"$lte": 90}}
```
```
# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="title_embedding",
    content_field="title",
    filters=filter_query,
)

# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
```
```
[Document(page_content='My document 28', metadata={'id': 'ca9f3f4268eec7c97a7d6e77f541cb82', 'year': 28, 'color': 'red'})]
```
### WeaviateDocumentIndex[](#weaviatedocumentindex "Direct link to WeaviateDocumentIndex")
`WeaviateDocumentIndex` is a document index that is built upon [Weaviate](https://weaviate.io/) vector database.
Learn more here: [https://docs.docarray.org/user\_guide/storing/index\_weaviate/](https://docs.docarray.org/user_guide/storing/index_weaviate/)
```
# There's a small difference with the Weaviate backend compared to the others.
# Here, you need to 'mark' the field used for vector search with 'is_embedding=True'.
# So, let's create a new schema for Weaviate that takes care of this requirement.

from pydantic import Field


class WeaviateDoc(BaseDoc):
    title: str
    title_embedding: NdArray[32] = Field(is_embedding=True)
    year: int
    color: str
```
```
from docarray.index import WeaviateDocumentIndex

# initialize the index
dbconfig = WeaviateDocumentIndex.DBConfig(host="http://localhost:8080")
db = WeaviateDocumentIndex[WeaviateDoc](db_config=dbconfig)
# index data
db.index(
    [
        MyDoc(
            title=f"My document {i}",
            title_embedding=embeddings.embed_query(f"query {i}"),
            year=i,
            color=random.choice(["red", "green", "blue"]),
        )
        for i in range(100)
    ]
)
# optionally, you can create a filter query
filter_query = {"path": ["year"], "operator": "LessThanEqual", "valueInt": "90"}
```
```
# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="title_embedding",
    content_field="title",
    filters=filter_query,
)

# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
```
```
[Document(page_content='My document 17', metadata={'id': '3a5b76e85f0d0a01785dc8f9d965ce40', 'year': 17, 'color': 'red'})]
```
### ElasticDocIndex[](#elasticdocindex "Direct link to ElasticDocIndex")
`ElasticDocIndex` is a document index that is built upon [ElasticSearch](https://github.com/elastic/elasticsearch)
Learn more [here](https://docs.docarray.org/user_guide/storing/index_elastic/)
```
from docarray.index import ElasticDocIndex

# initialize the index
db = ElasticDocIndex[MyDoc](
    hosts="http://localhost:9200", index_name="docarray_retriever"
)
# index data
db.index(
    [
        MyDoc(
            title=f"My document {i}",
            title_embedding=embeddings.embed_query(f"query {i}"),
            year=i,
            color=random.choice(["red", "green", "blue"]),
        )
        for i in range(100)
    ]
)
# optionally, you can create a filter query
filter_query = {"range": {"year": {"lte": 90}}}
```
```
# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="title_embedding",
    content_field="title",
    filters=filter_query,
)

# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
```
```
[Document(page_content='My document 46', metadata={'id': 'edbc721bac1c2ad323414ad1301528a4', 'year': 46, 'color': 'green'})]
```
### QdrantDocumentIndex[](#qdrantdocumentindex "Direct link to QdrantDocumentIndex")
`QdrantDocumentIndex` is a document index that is built upon [Qdrant](https://qdrant.tech/) vector database
Learn more [here](https://docs.docarray.org/user_guide/storing/index_qdrant/)
```
from docarray.index import QdrantDocumentIndex
from qdrant_client.http import models as rest

# initialize the index
qdrant_config = QdrantDocumentIndex.DBConfig(path=":memory:")
db = QdrantDocumentIndex[MyDoc](qdrant_config)
# index data
db.index(
    [
        MyDoc(
            title=f"My document {i}",
            title_embedding=embeddings.embed_query(f"query {i}"),
            year=i,
            color=random.choice(["red", "green", "blue"]),
        )
        for i in range(100)
    ]
)
# optionally, you can create a filter query
filter_query = rest.Filter(
    must=[
        rest.FieldCondition(
            key="year",
            range=rest.Range(
                gte=10,
                lt=90,
            ),
        )
    ]
)
```
```
WARNING:root:Payload indexes have no effect in the local Qdrant. Please use server Qdrant if you need payload indexes.
```
```
# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="title_embedding",
    content_field="title",
    filters=filter_query,
)

# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
```
```
[Document(page_content='My document 80', metadata={'id': '97465f98d0810f1f330e4ecc29b13d20', 'year': 80, 'color': 'blue'})]
```
## Movie Retrieval using HnswDocumentIndex[](#movie-retrieval-using-hnswdocumentindex "Direct link to Movie Retrieval using HnswDocumentIndex")
```
movies = [
    {
        "title": "Inception",
        "description": "A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.",
        "director": "Christopher Nolan",
        "rating": 8.8,
    },
    {
        "title": "The Dark Knight",
        "description": "When the menace known as the Joker wreaks havoc and chaos on the people of Gotham, Batman must accept one of the greatest psychological and physical tests of his ability to fight injustice.",
        "director": "Christopher Nolan",
        "rating": 9.0,
    },
    {
        "title": "Interstellar",
        "description": "Interstellar explores the boundaries of human exploration as a group of astronauts venture through a wormhole in space. In their quest to ensure the survival of humanity, they confront the vastness of space-time and grapple with love and sacrifice.",
        "director": "Christopher Nolan",
        "rating": 8.6,
    },
    {
        "title": "Pulp Fiction",
        "description": "The lives of two mob hitmen, a boxer, a gangster's wife, and a pair of diner bandits intertwine in four tales of violence and redemption.",
        "director": "Quentin Tarantino",
        "rating": 8.9,
    },
    {
        "title": "Reservoir Dogs",
        "description": "When a simple jewelry heist goes horribly wrong, the surviving criminals begin to suspect that one of them is a police informant.",
        "director": "Quentin Tarantino",
        "rating": 8.3,
    },
    {
        "title": "The Godfather",
        "description": "An aging patriarch of an organized crime dynasty transfers control of his empire to his reluctant son.",
        "director": "Francis Ford Coppola",
        "rating": 9.2,
    },
]
```
```
import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
```
```
from docarray import BaseDoc, DocList
from docarray.typing import NdArray
from langchain_openai import OpenAIEmbeddings


# define schema for your movie documents
class MyDoc(BaseDoc):
    title: str
    description: str
    description_embedding: NdArray[1536]
    rating: float
    director: str


embeddings = OpenAIEmbeddings()

# get "description" embeddings, and create documents
docs = DocList[MyDoc](
    [
        MyDoc(
            description_embedding=embeddings.embed_query(movie["description"]), **movie
        )
        for movie in movies
    ]
)
```
```
from docarray.index import HnswDocumentIndex

# initialize the index
db = HnswDocumentIndex[MyDoc](work_dir="movie_search")
# add data
db.index(docs)
```
### Normal Retriever[](#normal-retriever "Direct link to Normal Retriever")
```
from langchain.retrievers import DocArrayRetriever

# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="description_embedding",
    content_field="description",
)

# find the relevant document
doc = retriever.get_relevant_documents("movie about dreams")
print(doc)
```
```
[Document(page_content='A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.', metadata={'id': 'f1649d5b6776db04fec9a116bbb6bbe5', 'title': 'Inception', 'rating': 8.8, 'director': 'Christopher Nolan'})]
```
### Retriever with Filters[](#retriever-with-filters "Direct link to Retriever with Filters")
```
from langchain.retrievers import DocArrayRetriever

# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="description_embedding",
    content_field="description",
    filters={"director": {"$eq": "Christopher Nolan"}},
    top_k=2,
)

# find relevant documents
docs = retriever.get_relevant_documents("space travel")
print(docs)
```
```
[Document(page_content='Interstellar explores the boundaries of human exploration as a group of astronauts venture through a wormhole in space. In their quest to ensure the survival of humanity, they confront the vastness of space-time and grapple with love and sacrifice.', metadata={'id': 'ab704cc7ae8573dc617f9a5e25df022a', 'title': 'Interstellar', 'rating': 8.6, 'director': 'Christopher Nolan'}), Document(page_content='A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.', metadata={'id': 'f1649d5b6776db04fec9a116bbb6bbe5', 'title': 'Inception', 'rating': 8.8, 'director': 'Christopher Nolan'})]
```
### Retriever with MMR search[](#retriever-with-mmr-search "Direct link to Retriever with MMR search")
```
from langchain.retrievers import DocArrayRetriever

# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="description_embedding",
    content_field="description",
    filters={"rating": {"$gte": 8.7}},
    search_type="mmr",
    top_k=3,
)

# find relevant documents
docs = retriever.get_relevant_documents("action movies")
print(docs)
```
```
[Document(page_content="The lives of two mob hitmen, a boxer, a gangster's wife, and a pair of diner bandits intertwine in four tales of violence and redemption.", metadata={'id': 'e6aa313bbde514e23fbc80ab34511afd', 'title': 'Pulp Fiction', 'rating': 8.9, 'director': 'Quentin Tarantino'}), Document(page_content='A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.', metadata={'id': 'f1649d5b6776db04fec9a116bbb6bbe5', 'title': 'Inception', 'rating': 8.8, 'director': 'Christopher Nolan'}), Document(page_content='When the menace known as the Joker wreaks havoc and chaos on the people of Gotham, Batman must accept one of the greatest psychological and physical tests of his ability to fight injustice.', metadata={'id': '91dec17d4272041b669fd113333a65f7', 'title': 'The Dark Knight', 'rating': 9.0, 'director': 'Christopher Nolan'})]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:08.122Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/docarray_retriever/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/docarray_retriever/",
"description": "DocArray is a versatile,",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4030",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"docarray_retriever\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:07 GMT",
"etag": "W/\"3bc2815b55b0b6e1815d11dd372cf0b8\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::wcgrm-1713753727002-b6bf31ae4d47"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/docarray_retriever/",
"property": "og:url"
},
{
"content": "DocArray | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "DocArray is a versatile,",
"property": "og:description"
}
],
"title": "DocArray | 🦜️🔗 LangChain"
} | DocArray
DocArray is a versatile, open-source tool for managing your multi-modal data. It lets you shape your data however you want, and offers the flexibility to store and search it using various document index backends. Plus, it gets even better - you can utilize your DocArray document index to create a DocArrayRetriever, and build awesome Langchain apps!
This notebook is split into two sections. The first section offers an introduction to all five supported document index backends. It provides guidance on setting up and indexing each backend and also instructs you on how to build a DocArrayRetriever for finding relevant documents. In the second section, we’ll select one of these backends and illustrate how to use it through a basic example.
Document Index Backends
import random
from docarray import BaseDoc
from docarray.typing import NdArray
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.retrievers import DocArrayRetriever
embeddings = FakeEmbeddings(size=32)
Before you start building the index, it’s important to define your document schema. This determines what fields your documents will have and what type of data each field will hold.
For this demonstration, we’ll create a somewhat random schema containing ‘title’ (str), ‘title_embedding’ (numpy array), ‘year’ (int), and ‘color’ (str)
class MyDoc(BaseDoc):
title: str
title_embedding: NdArray[32]
year: int
color: str
InMemoryExactNNIndex
InMemoryExactNNIndex stores all Documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server.
Learn more here: https://docs.docarray.org/user_guide/storing/index_in_memory/
from docarray.index import InMemoryExactNNIndex
# initialize the index
db = InMemoryExactNNIndex[MyDoc]()
# index data
db.index(
[
MyDoc(
title=f"My document {i}",
title_embedding=embeddings.embed_query(f"query {i}"),
year=i,
color=random.choice(["red", "green", "blue"]),
)
for i in range(100)
]
)
# optionally, you can create a filter query
filter_query = {"year": {"$lte": 90}}
# create a retriever
retriever = DocArrayRetriever(
index=db,
embeddings=embeddings,
search_field="title_embedding",
content_field="title",
filters=filter_query,
)
# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
[Document(page_content='My document 56', metadata={'id': '1f33e58b6468ab722f3786b96b20afe6', 'year': 56, 'color': 'red'})]
HnswDocumentIndex
HnswDocumentIndex is a lightweight Document Index implementation that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite.
Learn more here: https://docs.docarray.org/user_guide/storing/index_hnswlib/
from docarray.index import HnswDocumentIndex
# initialize the index
db = HnswDocumentIndex[MyDoc](work_dir="hnsw_index")
# index data
db.index(
[
MyDoc(
title=f"My document {i}",
title_embedding=embeddings.embed_query(f"query {i}"),
year=i,
color=random.choice(["red", "green", "blue"]),
)
for i in range(100)
]
)
# optionally, you can create a filter query
filter_query = {"year": {"$lte": 90}}
# create a retriever
retriever = DocArrayRetriever(
index=db,
embeddings=embeddings,
search_field="title_embedding",
content_field="title",
filters=filter_query,
)
# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
[Document(page_content='My document 28', metadata={'id': 'ca9f3f4268eec7c97a7d6e77f541cb82', 'year': 28, 'color': 'red'})]
WeaviateDocumentIndex
WeaviateDocumentIndex is a document index that is built upon Weaviate vector database.
Learn more here: https://docs.docarray.org/user_guide/storing/index_weaviate/
# There's a small difference with the Weaviate backend compared to the others.
# Here, you need to 'mark' the field used for vector search with 'is_embedding=True'.
# So, let's create a new schema for Weaviate that takes care of this requirement.
from pydantic import Field
class WeaviateDoc(BaseDoc):
title: str
title_embedding: NdArray[32] = Field(is_embedding=True)
year: int
color: str
from docarray.index import WeaviateDocumentIndex
# initialize the index
dbconfig = WeaviateDocumentIndex.DBConfig(host="http://localhost:8080")
db = WeaviateDocumentIndex[WeaviateDoc](db_config=dbconfig)
# index data
db.index(
[
MyDoc(
title=f"My document {i}",
title_embedding=embeddings.embed_query(f"query {i}"),
year=i,
color=random.choice(["red", "green", "blue"]),
)
for i in range(100)
]
)
# optionally, you can create a filter query
filter_query = {"path": ["year"], "operator": "LessThanEqual", "valueInt": "90"}
# create a retriever
retriever = DocArrayRetriever(
index=db,
embeddings=embeddings,
search_field="title_embedding",
content_field="title",
filters=filter_query,
)
# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
[Document(page_content='My document 17', metadata={'id': '3a5b76e85f0d0a01785dc8f9d965ce40', 'year': 17, 'color': 'red'})]
ElasticDocIndex
ElasticDocIndex is a document index that is built upon ElasticSearch
Learn more here
from docarray.index import ElasticDocIndex
# initialize the index
db = ElasticDocIndex[MyDoc](
hosts="http://localhost:9200", index_name="docarray_retriever"
)
# index data
db.index(
[
MyDoc(
title=f"My document {i}",
title_embedding=embeddings.embed_query(f"query {i}"),
year=i,
color=random.choice(["red", "green", "blue"]),
)
for i in range(100)
]
)
# optionally, you can create a filter query
filter_query = {"range": {"year": {"lte": 90}}}
# create a retriever
retriever = DocArrayRetriever(
index=db,
embeddings=embeddings,
search_field="title_embedding",
content_field="title",
filters=filter_query,
)
# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
[Document(page_content='My document 46', metadata={'id': 'edbc721bac1c2ad323414ad1301528a4', 'year': 46, 'color': 'green'})]
QdrantDocumentIndex
QdrantDocumentIndex is a document index that is built upon Qdrant vector database
Learn more here
from docarray.index import QdrantDocumentIndex
from qdrant_client.http import models as rest
# initialize the index
qdrant_config = QdrantDocumentIndex.DBConfig(path=":memory:")
db = QdrantDocumentIndex[MyDoc](qdrant_config)
# index data
db.index(
[
MyDoc(
title=f"My document {i}",
title_embedding=embeddings.embed_query(f"query {i}"),
year=i,
color=random.choice(["red", "green", "blue"]),
)
for i in range(100)
]
)
# optionally, you can create a filter query
filter_query = rest.Filter(
must=[
rest.FieldCondition(
key="year",
range=rest.Range(
gte=10,
lt=90,
),
)
]
)
WARNING:root:Payload indexes have no effect in the local Qdrant. Please use server Qdrant if you need payload indexes.
# create a retriever
retriever = DocArrayRetriever(
index=db,
embeddings=embeddings,
search_field="title_embedding",
content_field="title",
filters=filter_query,
)
# find the relevant document
doc = retriever.get_relevant_documents("some query")
print(doc)
[Document(page_content='My document 80', metadata={'id': '97465f98d0810f1f330e4ecc29b13d20', 'year': 80, 'color': 'blue'})]
Movie Retrieval using HnswDocumentIndex
movies = [
{
"title": "Inception",
"description": "A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.",
"director": "Christopher Nolan",
"rating": 8.8,
},
{
"title": "The Dark Knight",
"description": "When the menace known as the Joker wreaks havoc and chaos on the people of Gotham, Batman must accept one of the greatest psychological and physical tests of his ability to fight injustice.",
"director": "Christopher Nolan",
"rating": 9.0,
},
{
"title": "Interstellar",
"description": "Interstellar explores the boundaries of human exploration as a group of astronauts venture through a wormhole in space. In their quest to ensure the survival of humanity, they confront the vastness of space-time and grapple with love and sacrifice.",
"director": "Christopher Nolan",
"rating": 8.6,
},
{
"title": "Pulp Fiction",
"description": "The lives of two mob hitmen, a boxer, a gangster's wife, and a pair of diner bandits intertwine in four tales of violence and redemption.",
"director": "Quentin Tarantino",
"rating": 8.9,
},
{
"title": "Reservoir Dogs",
"description": "When a simple jewelry heist goes horribly wrong, the surviving criminals begin to suspect that one of them is a police informant.",
"director": "Quentin Tarantino",
"rating": 8.3,
},
{
"title": "The Godfather",
"description": "An aging patriarch of an organized crime dynasty transfers control of his empire to his reluctant son.",
"director": "Francis Ford Coppola",
"rating": 9.2,
},
]
import getpass
import os
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
from docarray import BaseDoc, DocList
from docarray.typing import NdArray
from langchain_openai import OpenAIEmbeddings
# define schema for your movie documents
class MyDoc(BaseDoc):
title: str
description: str
description_embedding: NdArray[1536]
rating: float
director: str
embeddings = OpenAIEmbeddings()
# get "description" embeddings, and create documents
docs = DocList[MyDoc](
[
MyDoc(
description_embedding=embeddings.embed_query(movie["description"]), **movie
)
for movie in movies
]
)
from docarray.index import HnswDocumentIndex
# initialize the index
db = HnswDocumentIndex[MyDoc](work_dir="movie_search")
# add data
db.index(docs)
Normal Retriever
from langchain.retrievers import DocArrayRetriever
# create a retriever
retriever = DocArrayRetriever(
index=db,
embeddings=embeddings,
search_field="description_embedding",
content_field="description",
)
# find the relevant document
doc = retriever.get_relevant_documents("movie about dreams")
print(doc)
[Document(page_content='A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.', metadata={'id': 'f1649d5b6776db04fec9a116bbb6bbe5', 'title': 'Inception', 'rating': 8.8, 'director': 'Christopher Nolan'})]
Retriever with Filters
from langchain.retrievers import DocArrayRetriever
# create a retriever
retriever = DocArrayRetriever(
index=db,
embeddings=embeddings,
search_field="description_embedding",
content_field="description",
filters={"director": {"$eq": "Christopher Nolan"}},
top_k=2,
)
# find relevant documents
docs = retriever.get_relevant_documents("space travel")
print(docs)
[Document(page_content='Interstellar explores the boundaries of human exploration as a group of astronauts venture through a wormhole in space. In their quest to ensure the survival of humanity, they confront the vastness of space-time and grapple with love and sacrifice.', metadata={'id': 'ab704cc7ae8573dc617f9a5e25df022a', 'title': 'Interstellar', 'rating': 8.6, 'director': 'Christopher Nolan'}), Document(page_content='A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.', metadata={'id': 'f1649d5b6776db04fec9a116bbb6bbe5', 'title': 'Inception', 'rating': 8.8, 'director': 'Christopher Nolan'})]
Retriever with MMR search
from langchain.retrievers import DocArrayRetriever
# create a retriever
retriever = DocArrayRetriever(
index=db,
embeddings=embeddings,
search_field="description_embedding",
content_field="description",
filters={"rating": {"$gte": 8.7}},
search_type="mmr",
top_k=3,
)
# find relevant documents
docs = retriever.get_relevant_documents("action movies")
print(docs)
[Document(page_content="The lives of two mob hitmen, a boxer, a gangster's wife, and a pair of diner bandits intertwine in four tales of violence and redemption.", metadata={'id': 'e6aa313bbde514e23fbc80ab34511afd', 'title': 'Pulp Fiction', 'rating': 8.9, 'director': 'Quentin Tarantino'}), Document(page_content='A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.', metadata={'id': 'f1649d5b6776db04fec9a116bbb6bbe5', 'title': 'Inception', 'rating': 8.8, 'director': 'Christopher Nolan'}), Document(page_content='When the menace known as the Joker wreaks havoc and chaos on the people of Gotham, Batman must accept one of the greatest psychological and physical tests of his ability to fight injustice.', metadata={'id': '91dec17d4272041b669fd113333a65f7', 'title': 'The Dark Knight', 'rating': 9.0, 'director': 'Christopher Nolan'})] |
https://python.langchain.com/docs/integrations/retrievers/embedchain/ | This notebook shows how to use a retriever that uses `Embedchain`.
`EmbedchainRetriever` has a static `.create()` factory method that takes the following arguments:
In Embedchain, you can add as many of the supported data types as you need. You can browse the [docs](https://docs.embedchain.ai/) to see which data types are supported.
Embedchain automatically deduces the types of the data. So you can add a string, URL or local file path.
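A minimal sketch of creating the retriever and adding the three sources that appear in the output below; the `.create()` call, the `add_texts` method, and the example question are assumptions rather than verbatim from this page:

```
from langchain_community.retrievers import EmbedchainRetriever

# create a retriever with default options (a config file path could be passed via yaml_path)
retriever = EmbedchainRetriever.create()

# add sources; Embedchain infers each data type automatically
retriever.add_texts(
    [
        "https://en.wikipedia.org/wiki/Elon_Musk",
        "https://www.forbes.com/profile/elon-musk",
        "https://www.youtube.com/watch?v=RcYjXbSJBN8",
    ]
)

# retrieve relevant documents (the question is illustrative)
docs = retriever.get_relevant_documents(
    "How many companies does Elon Musk run and name those?"
)
```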
```
Inserting batches in chromadb: 100%|████████████████████████████████████████████████████████████████| 4/4 [00:08<00:00, 2.22s/it]
Inserting batches in chromadb: 100%|████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.17s/it]
Inserting batches in chromadb: 100%|████████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.25s/it]
```
```
Successfully saved https://en.wikipedia.org/wiki/Elon_Musk (DataType.WEB_PAGE). New chunks count: 378
Successfully saved https://www.forbes.com/profile/elon-musk (DataType.WEB_PAGE). New chunks count: 13
Successfully saved https://www.youtube.com/watch?v=RcYjXbSJBN8 (DataType.YOUTUBE_VIDEO). New chunks count: 53
```
```
['1eab8dd1ffa92906f7fc839862871ca5', '8cf46026cabf9b05394a2658bd1fe890', 'da3227cdbcedb018e05c47b774d625f6']
```
```
[Document(page_content='Views Filmography Companies Zip2 X.com PayPal SpaceX Starlink Tesla, Inc. Energycriticismlitigation OpenAI Neuralink The Boring Company Thud X Corp. Twitteracquisitiontenure as CEO xAI In popular culture Elon Musk (Isaacson) Elon Musk (Vance) Ludicrous Power Play "Members Only" "The Platonic Permutation" "The Musk Who Fell to Earth" "One Crew over the Crewcoo\'s Morty" Elon Musk\'s Crash Course Related Boring Test Tunnel Hyperloop Musk family Musk vs. Zuckerberg SolarCity Tesla Roadster in space', metadata={'source': 'https://en.wikipedia.org/wiki/Elon_Musk', 'document_id': 'c33c05d0-5028-498b-b5e3-c43a4f9e8bf8--3342161a0fbc19e91f6bf387204aa30fbb2cea05abc81882502476bde37b9392'}), Document(page_content='Elon Musk PROFILEElon MuskCEO, Tesla$241.2B$508M (0.21%)Real Time Net Worthas of 11/18/23Reflects change since 5 pm ET of prior trading day. 1 in the world todayPhoto by Martin Schoeller for ForbesAbout Elon MuskElon Musk cofounded six companies, including electric car maker Tesla, rocket producer SpaceX and tunneling startup Boring Company.He owns about 21% of Tesla between stock and options, but has pledged more than half his shares as collateral for personal loans of up to $3.5', metadata={'source': 'https://www.forbes.com/profile/elon-musk', 'document_id': 'c33c05d0-5028-498b-b5e3-c43a4f9e8bf8--3c8573134c575fafc025e9211413723e1f7a725b5936e8ee297fb7fb63bdd01a'}), Document(page_content='to form PayPal. In October 2002, eBay acquired PayPal for $1.5 billion, and that same year, with $100 million of the money he made, Musk founded SpaceX, a spaceflight services company. In 2004, he became an early investor in electric vehicle manufacturer Tesla Motors, Inc. (now Tesla, Inc.). He became its chairman and product architect, assuming the position of CEO in 2008. In 2006, Musk helped create SolarCity, a solar-energy company that was acquired by Tesla in 2016 and became Tesla Energy.', metadata={'source': 'https://en.wikipedia.org/wiki/Elon_Musk', 'document_id': 'c33c05d0-5028-498b-b5e3-c43a4f9e8bf8--3342161a0fbc19e91f6bf387204aa30fbb2cea05abc81882502476bde37b9392'})]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:09.035Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/embedchain/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/embedchain/",
"description": "Embedchain is a RAG",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4030",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"embedchain\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:08 GMT",
"etag": "W/\"e7f0967269b5bd725f735f38341615e0\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::kfqs7-1713753728810-46526429c925"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/embedchain/",
"property": "og:url"
},
{
"content": "Embedchain | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Embedchain is a RAG",
"property": "og:description"
}
],
"title": "Embedchain | 🦜️🔗 LangChain"
} | This notebook shows how to use a retriever that uses Embedchain.
EmbedchainRetriever has a static .create() factory method that takes the following arguments:
In Embedchain, you can add as many of the supported data types as you need. You can browse the docs to see which data types are supported.
Embedchain automatically deduces the types of the data. So you can add a string, URL or local file path.
Inserting batches in chromadb: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:08<00:00, 2.22s/it]
Inserting batches in chromadb: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.17s/it]
Inserting batches in chromadb: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.25s/it]
Successfully saved https://en.wikipedia.org/wiki/Elon_Musk (DataType.WEB_PAGE). New chunks count: 378
Successfully saved https://www.forbes.com/profile/elon-musk (DataType.WEB_PAGE). New chunks count: 13
Successfully saved https://www.youtube.com/watch?v=RcYjXbSJBN8 (DataType.YOUTUBE_VIDEO). New chunks count: 53
['1eab8dd1ffa92906f7fc839862871ca5',
'8cf46026cabf9b05394a2658bd1fe890',
'da3227cdbcedb018e05c47b774d625f6']
[Document(page_content='Views Filmography Companies Zip2 X.com PayPal SpaceX Starlink Tesla, Inc. Energycriticismlitigation OpenAI Neuralink The Boring Company Thud X Corp. Twitteracquisitiontenure as CEO xAI In popular culture Elon Musk (Isaacson) Elon Musk (Vance) Ludicrous Power Play "Members Only" "The Platonic Permutation" "The Musk Who Fell to Earth" "One Crew over the Crewcoo\'s Morty" Elon Musk\'s Crash Course Related Boring Test Tunnel Hyperloop Musk family Musk vs. Zuckerberg SolarCity Tesla Roadster in space', metadata={'source': 'https://en.wikipedia.org/wiki/Elon_Musk', 'document_id': 'c33c05d0-5028-498b-b5e3-c43a4f9e8bf8--3342161a0fbc19e91f6bf387204aa30fbb2cea05abc81882502476bde37b9392'}),
Document(page_content='Elon Musk PROFILEElon MuskCEO, Tesla$241.2B$508M (0.21%)Real Time Net Worthas of 11/18/23Reflects change since 5 pm ET of prior trading day. 1 in the world todayPhoto by Martin Schoeller for ForbesAbout Elon MuskElon Musk cofounded six companies, including electric car maker Tesla, rocket producer SpaceX and tunneling startup Boring Company.He owns about 21% of Tesla between stock and options, but has pledged more than half his shares as collateral for personal loans of up to $3.5', metadata={'source': 'https://www.forbes.com/profile/elon-musk', 'document_id': 'c33c05d0-5028-498b-b5e3-c43a4f9e8bf8--3c8573134c575fafc025e9211413723e1f7a725b5936e8ee297fb7fb63bdd01a'}),
Document(page_content='to form PayPal. In October 2002, eBay acquired PayPal for $1.5 billion, and that same year, with $100 million of the money he made, Musk founded SpaceX, a spaceflight services company. In 2004, he became an early investor in electric vehicle manufacturer Tesla Motors, Inc. (now Tesla, Inc.). He became its chairman and product architect, assuming the position of CEO in 2008. In 2006, Musk helped create SolarCity, a solar-energy company that was acquired by Tesla in 2016 and became Tesla Energy.', metadata={'source': 'https://en.wikipedia.org/wiki/Elon_Musk', 'document_id': 'c33c05d0-5028-498b-b5e3-c43a4f9e8bf8--3342161a0fbc19e91f6bf387204aa30fbb2cea05abc81882502476bde37b9392'})] |
https://python.langchain.com/docs/integrations/retrievers/fleet_context/ | ## Fleet AI Context
> [Fleet AI Context](https://www.fleet.so/context) is a dataset of high-quality embeddings of the top 1200 most popular & permissive Python Libraries & their documentation.
>
> The `Fleet AI` team is on a mission to embed the world’s most important data. They’ve started by embedding the top 1200 Python libraries to enable code generation with up-to-date knowledge. They’ve been kind enough to share their embeddings of the [LangChain docs](https://python.langchain.com/docs/get_started/introduction/) and [API reference](https://api.python.langchain.com/en/latest/api_reference.html).
Let’s take a look at how we can use these embeddings to power a docs retrieval system and ultimately a simple code-generating chain!
```
%pip install --upgrade --quiet langchain fleet-context langchain-openai pandas faiss-cpu # faiss-gpu for CUDA supported GPU
```
```
from operator import itemgetter
from typing import Any, Optional, Type

import pandas as pd
from langchain.retrievers import MultiVectorRetriever
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_core.stores import BaseStore
from langchain_core.vectorstores import VectorStore
from langchain_openai import OpenAIEmbeddings


def load_fleet_retriever(
    df: pd.DataFrame,
    *,
    vectorstore_cls: Type[VectorStore] = FAISS,
    docstore: Optional[BaseStore] = None,
    **kwargs: Any,
):
    vectorstore = _populate_vectorstore(df, vectorstore_cls)
    if docstore is None:
        return vectorstore.as_retriever(**kwargs)
    else:
        _populate_docstore(df, docstore)
        return MultiVectorRetriever(
            vectorstore=vectorstore, docstore=docstore, id_key="parent", **kwargs
        )


def _populate_vectorstore(
    df: pd.DataFrame,
    vectorstore_cls: Type[VectorStore],
) -> VectorStore:
    if not hasattr(vectorstore_cls, "from_embeddings"):
        raise ValueError(
            f"Incompatible vector store class {vectorstore_cls}."
            "Must implement `from_embeddings` class method."
        )
    texts_embeddings = []
    metadatas = []
    for _, row in df.iterrows():
        texts_embeddings.append((row.metadata["text"], row["dense_embeddings"]))
        metadatas.append(row.metadata)
    return vectorstore_cls.from_embeddings(
        texts_embeddings,
        OpenAIEmbeddings(model="text-embedding-ada-002"),
        metadatas=metadatas,
    )


def _populate_docstore(df: pd.DataFrame, docstore: BaseStore) -> None:
    parent_docs = []
    df = df.copy()
    df["parent"] = df.metadata.apply(itemgetter("parent"))
    for parent_id, group in df.groupby("parent"):
        sorted_group = group.iloc[
            group.metadata.apply(itemgetter("section_index")).argsort()
        ]
        text = "".join(sorted_group.metadata.apply(itemgetter("text")))
        metadata = {
            k: sorted_group.iloc[0].metadata[k] for k in ("title", "type", "url")
        }
        text = metadata["title"] + "\n" + text
        metadata["id"] = parent_id
        parent_docs.append(Document(page_content=text, metadata=metadata))
    docstore.mset(((d.metadata["id"], d) for d in parent_docs))
```
## Retriever chunks[](#retriever-chunks "Direct link to Retriever chunks")
As part of their embedding process, the Fleet AI team first chunked long documents before embedding them. This means the vectors correspond to sections of pages in the LangChain docs, not entire pages. By default, when we spin up a retriever from these embeddings, we’ll be retrieving these embedded chunks.
We will be using Fleet Context’s `download_embeddings()` to grab Langchain’s documentation embeddings. You can view all supported libraries’ documentation at [https://fleet.so/context](https://fleet.so/context).
```
from context import download_embeddings

df = download_embeddings("langchain")
vecstore_retriever = load_fleet_retriever(df)
```
```
vecstore_retriever.get_relevant_documents("How does the multi vector retriever work")
```
## Other packages[](#other-packages "Direct link to Other packages")
You can download and use other embeddings from [this Dropbox link](https://www.dropbox.com/scl/fo/54t2e7fogtixo58pnlyub/h?rlkey=tne16wkssgf01jor0p1iqg6p9&dl=0).
## Retrieve parent docs[](#retrieve-parent-docs "Direct link to Retrieve parent docs")
The embeddings provided by Fleet AI contain metadata that indicates which embedding chunks correspond to the same original document page. If we’d like we can use this information to retrieve whole parent documents, and not just embedded chunks. Under the hood, we’ll use a MultiVectorRetriever and a BaseStore object to search for relevant chunks and then map them to their parent document.
```
from langchain.storage import InMemoryStore

parent_retriever = load_fleet_retriever(
    "https://www.dropbox.com/scl/fi/4rescpkrg9970s3huz47l/libraries_langchain_release.parquet?rlkey=283knw4wamezfwiidgpgptkep&dl=1",
    docstore=InMemoryStore(),
)
```
```
parent_retriever.get_relevant_documents("How does the multi vector retriever work")
```
## Putting it in a chain[](#putting-it-in-a-chain "Direct link to Putting it in a chain")
Let’s try using our retrieval systems in a simple chain!
```
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            """You are a great software engineer who is very familiar \
with Python. Given a user question or request about a new Python library called LangChain and \
parts of the LangChain documentation, answer the question or generate the requested code. \
Your answers must be accurate, should include code whenever possible, and should not assume anything \
about LangChain which is not explicitly stated in the LangChain documentation. If the required \
information is not available, just say so.

LangChain Documentation
------------------
{context}""",
        ),
        ("human", "{question}"),
    ]
)

model = ChatOpenAI(model="gpt-3.5-turbo-16k")

chain = (
    {
        "question": RunnablePassthrough(),
        "context": parent_retriever
        | (lambda docs: "\n\n".join(d.page_content for d in docs)),
    }
    | prompt
    | model
    | StrOutputParser()
)
```
```
for chunk in chain.invoke(
    "How do I create a FAISS vector store retriever that returns 10 documents per search query"
):
    print(chunk, end="", flush=True)
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:09.872Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/fleet_context/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/fleet_context/",
"description": "Fleet AI Context is a dataset of",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4031",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"fleet_context\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:09 GMT",
"etag": "W/\"49c5df7cde85ab81355eac8ec0e5bd9f\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::qb88p-1713753729749-19335a442897"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/fleet_context/",
"property": "og:url"
},
{
"content": "Fleet AI Context | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Fleet AI Context is a dataset of",
"property": "og:description"
}
],
"title": "Fleet AI Context | 🦜️🔗 LangChain"
} | Fleet AI Context
Fleet AI Context is a dataset of high-quality embeddings of the top 1200 most popular & permissive Python Libraries & their documentation.
The Fleet AI team is on a mission to embed the world’s most important data. They’ve started by embedding the top 1200 Python libraries to enable code generation with up-to-date knowledge. They’ve been kind enough to share their embeddings of the LangChain docs and API reference.
Let’s take a look at how we can use these embeddings to power a docs retrieval system and ultimately a simple code-generating chain!
%pip install --upgrade --quiet langchain fleet-context langchain-openai pandas faiss-cpu # faiss-gpu for CUDA supported GPU
from operator import itemgetter
from typing import Any, Optional, Type
import pandas as pd
from langchain.retrievers import MultiVectorRetriever
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_core.stores import BaseStore
from langchain_core.vectorstores import VectorStore
from langchain_openai import OpenAIEmbeddings
def load_fleet_retriever(
df: pd.DataFrame,
*,
vectorstore_cls: Type[VectorStore] = FAISS,
docstore: Optional[BaseStore] = None,
**kwargs: Any,
):
vectorstore = _populate_vectorstore(df, vectorstore_cls)
if docstore is None:
return vectorstore.as_retriever(**kwargs)
else:
_populate_docstore(df, docstore)
return MultiVectorRetriever(
vectorstore=vectorstore, docstore=docstore, id_key="parent", **kwargs
)
def _populate_vectorstore(
df: pd.DataFrame,
vectorstore_cls: Type[VectorStore],
) -> VectorStore:
if not hasattr(vectorstore_cls, "from_embeddings"):
raise ValueError(
f"Incompatible vector store class {vectorstore_cls}."
"Must implement `from_embeddings` class method."
)
texts_embeddings = []
metadatas = []
for _, row in df.iterrows():
texts_embeddings.append((row.metadata["text"], row["dense_embeddings"]))
metadatas.append(row.metadata)
return vectorstore_cls.from_embeddings(
texts_embeddings,
OpenAIEmbeddings(model="text-embedding-ada-002"),
metadatas=metadatas,
)
def _populate_docstore(df: pd.DataFrame, docstore: BaseStore) -> None:
parent_docs = []
df = df.copy()
df["parent"] = df.metadata.apply(itemgetter("parent"))
for parent_id, group in df.groupby("parent"):
sorted_group = group.iloc[
group.metadata.apply(itemgetter("section_index")).argsort()
]
text = "".join(sorted_group.metadata.apply(itemgetter("text")))
metadata = {
k: sorted_group.iloc[0].metadata[k] for k in ("title", "type", "url")
}
text = metadata["title"] + "\n" + text
metadata["id"] = parent_id
parent_docs.append(Document(page_content=text, metadata=metadata))
docstore.mset(((d.metadata["id"], d) for d in parent_docs))
Retriever chunks
As part of their embedding process, the Fleet AI team first chunked long documents before embedding them. This means the vectors correspond to sections of pages in the LangChain docs, not entire pages. By default, when we spin up a retriever from these embeddings, we’ll be retrieving these embedded chunks.
We will be using Fleet Context’s download_embeddings() to grab Langchain’s documentation embeddings. You can view all supported libraries’ documentation at https://fleet.so/context.
from context import download_embeddings
df = download_embeddings("langchain")
vecstore_retriever = load_fleet_retriever(df)
vecstore_retriever.get_relevant_documents("How does the multi vector retriever work")
Other packages
You can download and use other embeddings from this Dropbox link.
Retrieve parent docs
The embeddings provided by Fleet AI contain metadata that indicates which embedding chunks correspond to the same original document page. If we’d like we can use this information to retrieve whole parent documents, and not just embedded chunks. Under the hood, we’ll use a MultiVectorRetriever and a BaseStore object to search for relevant chunks and then map them to their parent document.
from langchain.storage import InMemoryStore
parent_retriever = load_fleet_retriever(
"https://www.dropbox.com/scl/fi/4rescpkrg9970s3huz47l/libraries_langchain_release.parquet?rlkey=283knw4wamezfwiidgpgptkep&dl=1",
docstore=InMemoryStore(),
)
parent_retriever.get_relevant_documents("How does the multi vector retriever work")
Putting it in a chain
Let’s try using our retrieval systems in a simple chain!
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"""You are a great software engineer who is very familiar \
with Python. Given a user question or request about a new Python library called LangChain and \
parts of the LangChain documentation, answer the question or generate the requested code. \
Your answers must be accurate, should include code whenever possible, and should not assume anything \
about LangChain which is not explicitly stated in the LangChain documentation. If the required \
information is not available, just say so.
LangChain Documentation
------------------
{context}""",
),
("human", "{question}"),
]
)
model = ChatOpenAI(model="gpt-3.5-turbo-16k")
chain = (
{
"question": RunnablePassthrough(),
"context": parent_retriever
| (lambda docs: "\n\n".join(d.page_content for d in docs)),
}
| prompt
| model
| StrOutputParser()
)
for chunk in chain.invoke(
"How do I create a FAISS vector store retriever that returns 10 documents per search query"
):
print(chunk, end="", flush=True) |
https://python.langchain.com/docs/integrations/providers/voyageai/ | ## VoyageAI
All functionality related to VoyageAI
> [VoyageAI](https://www.voyageai.com/) Voyage AI builds embedding models, customized for your domain and company, for better retrieval quality.
## Installation and Setup[](#installation-and-setup "Direct link to Installation and Setup")
Install the integration package with
```
pip install langchain-voyageai
```
Get a VoyageAI API key and set it as an environment variable (`VOYAGE_API_KEY`)
## Text Embedding Model[](#text-embedding-model "Direct link to Text Embedding Model")
See a [usage example](https://python.langchain.com/docs/integrations/text_embedding/voyageai/)
```
from langchain_voyageai import VoyageAIEmbeddings
```
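A minimal usage sketch; the model name and query below are illustrative assumptions, not recommendations from this page:

```
import os

from langchain_voyageai import VoyageAIEmbeddings

os.environ["VOYAGE_API_KEY"] = "your-api-key"  # set your VoyageAI API key

# "voyage-2" is an example model name
embeddings = VoyageAIEmbeddings(model="voyage-2")
vector = embeddings.embed_query("What is LangChain?")
```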
## Reranking[](#reranking "Direct link to Reranking")
See a [usage example](https://python.langchain.com/docs/integrations/document_transformers/voyageai-reranker/)
```
from langchain_voyageai import VoyageAIRerank
```
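A hedged sketch of wrapping an existing retriever with the reranker; the model name, `top_k`, and the base `retriever` variable are assumptions:

```
from langchain.retrievers import ContextualCompressionRetriever
from langchain_voyageai import VoyageAIRerank

# rerank the documents returned by a base retriever defined elsewhere
compressor = VoyageAIRerank(model="rerank-lite-1", top_k=3)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever
)
docs = compression_retriever.get_relevant_documents("your question here")
```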
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:10.552Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/voyageai/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/voyageai/",
"description": "All functionality related to VoyageAI",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3582",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"voyageai\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:10 GMT",
"etag": "W/\"21574a290ce7b83a30c640fa67667a5a\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::wf55v-1713753730497-2cadb9725082"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/voyageai/",
"property": "og:url"
},
{
"content": "VoyageAI | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "All functionality related to VoyageAI",
"property": "og:description"
}
],
"title": "VoyageAI | 🦜️🔗 LangChain"
} | VoyageAI
All functionality related to VoyageAI
VoyageAI Voyage AI builds embedding models, customized for your domain and company, for better retrieval quality.
Installation and Setup
Install the integration package with
pip install langchain-voyageai
Get a VoyageAI API key and set it as an environment variable (VOYAGE_API_KEY)
Text Embedding Model
See a usage example
from langchain_voyageai import VoyageAIEmbeddings
Reranking
See a usage example
from langchain_voyageai import VoyageAIRerank
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/integrations/providers/vlite/ | This page covers how to use [vlite](https://github.com/sdan/vlite) within LangChain. vlite is a simple and fast vector database for storing and retrieving embeddings.
vlite provides a wrapper around its vector database, allowing you to use it as a vectorstore for semantic search and example selection.
```
from langchain_community.vectorstores import vlite
```
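As a rough sketch only: the snippet below assumes the wrapper is exposed as a `VLite` class and follows the standard `VectorStore` interface (`from_texts`, `similarity_search`); check the vlite integration notebook for the exact constructor arguments.
```
from langchain_community.vectorstores import VLite  # class name is an assumption
from langchain_openai import OpenAIEmbeddings

# from_texts and similarity_search are part of the shared VectorStore interface.
store = VLite.from_texts(
    ["vlite is a simple and fast vector database.", "LangChain can use it as a vectorstore."],
    embedding=OpenAIEmbeddings(),
)
docs = store.similarity_search("What is vlite?", k=1)
```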
https://python.langchain.com/docs/integrations/retrievers/re_phrase/ | ## RePhraseQuery
`RePhraseQuery` is a simple retriever that applies an LLM between the user input and the query that is ultimately passed to the underlying retriever.
It can be used to pre-process the user input in any way.
## Example[](#example "Direct link to Example")
### Setting up[](#setting-up "Direct link to Setting up")
Create a vector store.
```
import logging
from langchain.retrievers import RePhraseQueryRetriever
from langchain_chroma import Chroma
from langchain_community.document_loaders import WebBaseLoader
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
```
```
logging.basicConfig()
logging.getLogger("langchain.retrievers.re_phraser").setLevel(logging.INFO)
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)
vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())
```
### Using the default prompt[](#using-the-default-prompt "Direct link to Using the default prompt")
The default prompt used in the `from_llm` classmethod:
```
DEFAULT_TEMPLATE = """You are an assistant tasked with taking a natural language \
query from a user and converting it into a query for a vectorstore. \
In this process, you strip out information that is not relevant for \
the retrieval task. Here is the user query: {question}"""
```
```
llm = ChatOpenAI(temperature=0)
retriever_from_llm = RePhraseQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(), llm=llm
)
```
```
docs = retriever_from_llm.get_relevant_documents( "Hi I'm Lance. What are the approaches to Task Decomposition?")
```
```
INFO:langchain.retrievers.re_phraser:Re-phrased question: The user query can be converted into a query for a vectorstore as follows:
"approaches to Task Decomposition"
```
```
docs = retriever_from_llm.get_relevant_documents( "I live in San Francisco. What are the Types of Memory?")
```
```
INFO:langchain.retrievers.re_phraser:Re-phrased question: Query for vectorstore: "Types of Memory"
```
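The retriever can then be dropped into a retrieval chain like any other retriever. A minimal sketch, reusing `retriever_from_llm` and `llm` from above; the prompt wording and the `format_docs` helper are illustrative, not part of the original example.
```
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough


def format_docs(docs):
    # Concatenate the retrieved documents into a single context string.
    return "\n\n".join(doc.page_content for doc in docs)


prompt = ChatPromptTemplate.from_template(
    "Answer the question using only this context:\n\n{context}\n\nQuestion: {question}"
)

# The re-phrasing happens inside retriever_from_llm before any documents are fetched.
rag_chain = (
    {"context": retriever_from_llm | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
answer = rag_chain.invoke("Hi I'm Lance. What are the approaches to Task Decomposition?")
```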
### Custom prompt[](#custom-prompt "Direct link to Custom prompt")
```
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate

QUERY_PROMPT = PromptTemplate(
    input_variables=["question"],
    template="""You are an assistant tasked with taking a natural language query from a user
    and converting it into a query for a vectorstore. In the process, strip out all
    information that is not relevant for the retrieval task and return a new, simplified
    question for vectorstore retrieval. The new user query should be in pirate speech.

    Here is the user query: {question} """,
)
llm = ChatOpenAI(temperature=0)
llm_chain = LLMChain(llm=llm, prompt=QUERY_PROMPT)
```
```
retriever_from_llm_chain = RePhraseQueryRetriever(
    retriever=vectorstore.as_retriever(), llm_chain=llm_chain
)
```
```
docs = retriever_from_llm_chain.get_relevant_documents(
    "Hi I'm Lance. What is Maximum Inner Product Search?"
)
```
```
INFO:langchain.retrievers.re_phraser:Re-phrased question: Ahoy matey! What be Maximum Inner Product Search, ye scurvy dog?
```
https://python.langchain.com/docs/integrations/retrievers/ragatouille/ | We can use RAGatouille as a [retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/). This page shows functionality specific to this integration. After going through it, it may be useful to explore the [relevant use-case pages](https://python.langchain.com/docs/use_cases/question_answering/) to learn how to use this retriever as part of a larger chain.
The integration lives in the `ragatouille` package.
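The input cells of the original notebook were not captured here, only their output below. A hedged reconstruction of the setup that produces logs like these follows; the Wikipedia-fetching helper and the exact indexing parameters are assumptions.
```
# pip install -U ragatouille
import requests
from ragatouille import RAGPretrainedModel


def get_wikipedia_page(title: str) -> str:
    """Fetch the plain-text extract of a Wikipedia page (illustrative helper)."""
    response = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "format": "json",
            "titles": title,
            "prop": "extracts",
            "explaintext": True,
        },
        headers={"User-Agent": "RAGatouille-example"},
    )
    page = next(iter(response.json()["query"]["pages"].values()))
    return page["extract"]


RAG = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")
full_document = get_wikipedia_page("Hayao_Miyazaki")

# Indexing splits the document into passages and builds a ColBERT index on disk,
# which is what produces the "Creating directory .ragatouille/..." log below.
RAG.index(
    collection=[full_document],
    index_name="Miyazaki-123",
    max_document_length=180,
    split_documents=True,
)

results = RAG.search(query="What animation studio did Miyazaki found?", k=3)
```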
```
[Jan 07, 10:38:18] #> Creating directory .ragatouille/colbert/indexes/Miyazaki-123 #> Starting...[Jan 07, 10:38:23] Loading segmented_maxsim_cpp extension (set COLBERT_LOAD_TORCH_EXTENSION_VERBOSE=True for more info)...[Jan 07, 10:38:24] [0] #> Encoding 81 passages..[Jan 07, 10:38:27] [0] avg_doclen_est = 129.9629669189453 len(local_sample) = 81[Jan 07, 10:38:27] [0] Creating 1,024 partitions.[Jan 07, 10:38:27] [0] *Estimated* 10,527 embeddings.[Jan 07, 10:38:27] [0] #> Saving the indexing plan to .ragatouille/colbert/indexes/Miyazaki-123/plan.json ..Clustering 10001 points in 128D to 1024 clusters, redo 1 times, 20 iterations Preprocessing in 0.00 s Iteration 0 (0.02 s, search 0.02 s): objective=3772.41 imbalance=1.562 nsplit=0 Iteration 1 (0.02 s, search 0.02 s): objective=2408.99 imbalance=1.470 nsplit=1 Iteration 2 (0.03 s, search 0.03 s): objective=2243.87 imbalance=1.450 nsplit=0 Iteration 3 (0.04 s, search 0.04 s): objective=2168.48 imbalance=1.444 nsplit=0 Iteration 4 (0.05 s, search 0.05 s): objective=2134.26 imbalance=1.449 nsplit=0 Iteration 5 (0.06 s, search 0.05 s): objective=2117.18 imbalance=1.449 nsplit=0 Iteration 6 (0.06 s, search 0.06 s): objective=2108.43 imbalance=1.449 nsplit=0 Iteration 7 (0.07 s, search 0.07 s): objective=2102.62 imbalance=1.450 nsplit=0 Iteration 8 (0.08 s, search 0.08 s): objective=2100.68 imbalance=1.451 nsplit=0 Iteration 9 (0.09 s, search 0.08 s): objective=2099.66 imbalance=1.451 nsplit=0 Iteration 10 (0.10 s, search 0.09 s): objective=2099.03 imbalance=1.451 nsplit=0 Iteration 11 (0.10 s, search 0.10 s): objective=2098.67 imbalance=1.453 nsplit=0 Iteration 12 (0.11 s, search 0.11 s): objective=2097.78 imbalance=1.455 nsplit=0 Iteration 13 (0.12 s, search 0.12 s): objective=2097.31 imbalance=1.455 nsplit=0 Iteration 14 (0.13 s, search 0.12 s): objective=2097.13 imbalance=1.455 nsplit=0 Iteration 15 (0.14 s, search 0.13 s): objective=2097.09 imbalance=1.455 nsplit=0 Iteration 16 (0.14 s, search 0.14 s): objective=2097.09 imbalance=1.455 nsplit=0 Iteration 17 (0.15 s, search 0.15 s): objective=2097.09 imbalance=1.455 nsplit=0 Iteration 18 (0.16 s, search 0.15 s): objective=2097.09 imbalance=1.455 nsplit=0 Iteration 19 (0.17 s, search 0.16 s): objective=2097.09 imbalance=1.455 nsplit=0 [0.037, 0.038, 0.041, 0.036, 0.035, 0.036, 0.034, 0.036, 0.034, 0.034, 0.036, 0.037, 0.032, 0.039, 0.035, 0.039, 0.033, 0.035, 0.035, 0.037, 0.037, 0.037, 0.037, 0.037, 0.038, 0.034, 0.037, 0.035, 0.036, 0.037, 0.036, 0.04, 0.037, 0.037, 0.036, 0.034, 0.037, 0.035, 0.038, 0.039, 0.037, 0.039, 0.035, 0.036, 0.036, 0.035, 0.035, 0.038, 0.037, 0.033, 0.036, 0.032, 0.034, 0.035, 0.037, 0.037, 0.041, 0.037, 0.039, 0.033, 0.035, 0.033, 0.036, 0.036, 0.038, 0.036, 0.037, 0.038, 0.035, 0.035, 0.033, 0.034, 0.032, 0.038, 0.037, 0.037, 0.036, 0.04, 0.033, 0.037, 0.035, 0.04, 0.036, 0.04, 0.032, 0.037, 0.036, 0.037, 0.034, 0.042, 0.037, 0.035, 0.035, 0.038, 0.034, 0.036, 0.041, 0.035, 0.036, 0.037, 0.041, 0.04, 0.036, 0.036, 0.035, 0.036, 0.034, 0.033, 0.036, 0.033, 0.037, 0.038, 0.036, 0.033, 0.038, 0.037, 0.038, 0.037, 0.039, 0.04, 0.034, 0.034, 0.036, 0.039, 0.033, 0.037, 0.035, 0.037][Jan 07, 10:38:27] [0] #> Encoding 81 passages..[Jan 07, 10:38:30] #> Optimizing IVF to store map from centroids to list of pids..[Jan 07, 10:38:30] #> Building the emb2pid mapping..[Jan 07, 10:38:30] len(emb2pid) = 10527[Jan 07, 10:38:30] #> Saved optimized IVF to .ragatouille/colbert/indexes/Miyazaki-123/ivf.pid.pt#> Joined...Done indexing!
```
```
/Users/harrisonchase/.pyenv/versions/3.10.1/envs/langchain/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py:125: UserWarning: torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling. warnings.warn( 0%| | 0/2 [00:00<?, ?it/s]/Users/harrisonchase/.pyenv/versions/3.10.1/envs/langchain/lib/python3.10/site-packages/torch/amp/autocast_mode.py:250: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling warnings.warn( 50%|█████ | 1/2 [00:02<00:02, 2.85s/it]/Users/harrisonchase/.pyenv/versions/3.10.1/envs/langchain/lib/python3.10/site-packages/torch/amp/autocast_mode.py:250: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling warnings.warn(100%|██████████| 2/2 [00:03<00:00, 1.74s/it]WARNING clustering 10001 points to 1024 centroids: please provide at least 39936 training points0it [00:00, ?it/s] 0%| | 0/2 [00:00<?, ?it/s] 50%|█████ | 1/2 [00:02<00:02, 2.53s/it]100%|██████████| 2/2 [00:03<00:00, 1.56s/it]1it [00:03, 3.16s/it]100%|██████████| 1/1 [00:00<00:00, 4017.53it/s]100%|██████████| 1024/1024 [00:00<00:00, 306105.57it/s]
```
```
Loading searcher for index Miyazaki-123 for the first time... This may take a few seconds[Jan 07, 10:38:34] Loading segmented_maxsim_cpp extension (set COLBERT_LOAD_TORCH_EXTENSION_VERBOSE=True for more info)...[Jan 07, 10:38:35] #> Loading codec...[Jan 07, 10:38:35] #> Loading IVF...[Jan 07, 10:38:35] Loading segmented_lookup_cpp extension (set COLBERT_LOAD_TORCH_EXTENSION_VERBOSE=True for more info)...[Jan 07, 10:38:35] #> Loading doclens...[Jan 07, 10:38:35] #> Loading codes and residuals...[Jan 07, 10:38:35] Loading filter_pids_cpp extension (set COLBERT_LOAD_TORCH_EXTENSION_VERBOSE=True for more info)...[Jan 07, 10:38:35] Loading decompress_residuals_cpp extension (set COLBERT_LOAD_TORCH_EXTENSION_VERBOSE=True for more info)...Searcher loaded!#> QueryTokenizer.tensorize(batch_text[0], batch_background[0], bsize) ==#> Input: . What animation studio did Miyazaki found?, True, None#> Output IDs: torch.Size([32]), tensor([ 101, 1, 2054, 7284, 2996, 2106, 2771, 3148, 18637, 2179, 1029, 102, 103, 103, 103, 103, 103, 103, 103, 103, 103, 103, 103, 103, 103, 103, 103, 103, 103, 103, 103, 103])#> Output Mask: torch.Size([32]), tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
```
```
/Users/harrisonchase/.pyenv/versions/3.10.1/envs/langchain/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py:125: UserWarning: torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling. warnings.warn(100%|███████████████████████████████████████████████████████| 1/1 [00:00<00:00, 3872.86it/s]100%|████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 604.89it/s]/Users/harrisonchase/.pyenv/versions/3.10.1/envs/langchain/lib/python3.10/site-packages/torch/amp/autocast_mode.py:250: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling warnings.warn(
```
```
[{'content': 'In April 1984, Miyazaki opened his own office in Suginami Ward, naming it Nibariki.\n\n\n=== Studio Ghibli ===\n\n\n==== Early films (1985–1996) ====\nIn June 1985, Miyazaki, Takahata, Tokuma and Suzuki founded the animation production company Studio Ghibli, with funding from Tokuma Shoten. Studio Ghibli\'s first film, Laputa: Castle in the Sky (1986), employed the same production crew of Nausicaä. Miyazaki\'s designs for the film\'s setting were inspired by Greek architecture and "European urbanistic templates".', 'score': 25.90749740600586, 'rank': 1}, {'content': 'Hayao Miyazaki (宮崎 駿 or 宮﨑 駿, Miyazaki Hayao, [mijaꜜzaki hajao]; born January 5, 1941) is a Japanese animator, filmmaker, and manga artist. A co-founder of Studio Ghibli, he has attained international acclaim as a masterful storyteller and creator of Japanese animated feature films, and is widely regarded as one of the most accomplished filmmakers in the history of animation.\nBorn in Tokyo City in the Empire of Japan, Miyazaki expressed interest in manga and animation from an early age, and he joined Toei Animation in 1963. During his early years at Toei Animation he worked as an in-between artist and later collaborated with director Isao Takahata.', 'score': 25.4748477935791, 'rank': 2}, {'content': 'Glen Keane said Miyazaki is a "huge influence" on Walt Disney Animation Studios and has been "part of our heritage" ever since The Rescuers Down Under (1990). The Disney Renaissance era was also prompted by competition with the development of Miyazaki\'s films. Artists from Pixar and Aardman Studios signed a tribute stating, "You\'re our inspiration, Miyazaki-san!"', 'score': 24.84897232055664, 'rank': 3}]
```
We can then easily convert this to a LangChain retriever! We can pass in any kwargs we want when creating it (like `k`).
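A sketch of that conversion; the `k=3` here is an assumption chosen to match the three documents shown below.
```
retriever = RAG.as_langchain_retriever(k=3)
docs = retriever.get_relevant_documents("What animation studio did Miyazaki found?")
```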
```
/Users/harrisonchase/.pyenv/versions/3.10.1/envs/langchain/lib/python3.10/site-packages/torch/amp/autocast_mode.py:250: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling warnings.warn(
```
```
[Document(page_content='In April 1984, Miyazaki opened his own office in Suginami Ward, naming it Nibariki.\n\n\n=== Studio Ghibli ===\n\n\n==== Early films (1985–1996) ====\nIn June 1985, Miyazaki, Takahata, Tokuma and Suzuki founded the animation production company Studio Ghibli, with funding from Tokuma Shoten. Studio Ghibli\'s first film, Laputa: Castle in the Sky (1986), employed the same production crew of Nausicaä. Miyazaki\'s designs for the film\'s setting were inspired by Greek architecture and "European urbanistic templates".'), Document(page_content='Hayao Miyazaki (宮崎 駿 or 宮﨑 駿, Miyazaki Hayao, [mijaꜜzaki hajao]; born January 5, 1941) is a Japanese animator, filmmaker, and manga artist. A co-founder of Studio Ghibli, he has attained international acclaim as a masterful storyteller and creator of Japanese animated feature films, and is widely regarded as one of the most accomplished filmmakers in the history of animation.\nBorn in Tokyo City in the Empire of Japan, Miyazaki expressed interest in manga and animation from an early age, and he joined Toei Animation in 1963. During his early years at Toei Animation he worked as an in-between artist and later collaborated with director Isao Takahata.'), Document(page_content='Glen Keane said Miyazaki is a "huge influence" on Walt Disney Animation Studios and has been "part of our heritage" ever since The Rescuers Down Under (1990). The Disney Renaissance era was also prompted by competition with the development of Miyazaki\'s films. Artists from Pixar and Aardman Studios signed a tribute stating, "You\'re our inspiration, Miyazaki-san!"')]
```
We can easily combine this retriever into a chain.
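For example, a hedged sketch using `create_retrieval_chain`; the prompt wording is illustrative, but the output shape below (`input`, `context`, `answer`) is what this kind of chain returns.
```
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    """Answer the following question based only on the provided context:

<context>
{context}
</context>

Question: {input}"""
)

llm = ChatOpenAI()
document_chain = create_stuff_documents_chain(llm, prompt)
retrieval_chain = create_retrieval_chain(retriever, document_chain)

result = retrieval_chain.invoke({"input": "What animation studio did Miyazaki found?"})
print(result["answer"])
```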
```
/Users/harrisonchase/.pyenv/versions/3.10.1/envs/langchain/lib/python3.10/site-packages/torch/amp/autocast_mode.py:250: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling warnings.warn(
```
```
{'input': 'What animation studio did Miyazaki found?', 'context': [Document(page_content='In April 1984, Miyazaki opened his own office in Suginami Ward, naming it Nibariki.\n\n\n=== Studio Ghibli ===\n\n\n==== Early films (1985–1996) ====\nIn June 1985, Miyazaki, Takahata, Tokuma and Suzuki founded the animation production company Studio Ghibli, with funding from Tokuma Shoten. Studio Ghibli\'s first film, Laputa: Castle in the Sky (1986), employed the same production crew of Nausicaä. Miyazaki\'s designs for the film\'s setting were inspired by Greek architecture and "European urbanistic templates".'), Document(page_content='Hayao Miyazaki (宮崎 駿 or 宮﨑 駿, Miyazaki Hayao, [mijaꜜzaki hajao]; born January 5, 1941) is a Japanese animator, filmmaker, and manga artist. A co-founder of Studio Ghibli, he has attained international acclaim as a masterful storyteller and creator of Japanese animated feature films, and is widely regarded as one of the most accomplished filmmakers in the history of animation.\nBorn in Tokyo City in the Empire of Japan, Miyazaki expressed interest in manga and animation from an early age, and he joined Toei Animation in 1963. During his early years at Toei Animation he worked as an in-between artist and later collaborated with director Isao Takahata.'), Document(page_content='Glen Keane said Miyazaki is a "huge influence" on Walt Disney Animation Studios and has been "part of our heritage" ever since The Rescuers Down Under (1990). The Disney Renaissance era was also prompted by competition with the development of Miyazaki\'s films. Artists from Pixar and Aardman Studios signed a tribute stating, "You\'re our inspiration, Miyazaki-san!"')], 'answer': 'Miyazaki founded Studio Ghibli.'}
```
```
/Users/harrisonchase/.pyenv/versions/3.10.1/envs/langchain/lib/python3.10/site-packages/torch/amp/autocast_mode.py:250: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling warnings.warn(
```
```
Miyazaki founded Studio Ghibli.
```
https://python.langchain.com/docs/integrations/retrievers/google_drive/ | ## Google Drive
This notebook covers how to retrieve documents from `Google Drive`.
## Prerequisites[](#prerequisites "Direct link to Prerequisites")
1. Create a Google Cloud project or use an existing project
2. Enable the [Google Drive API](https://console.cloud.google.com/flows/enableapi?apiid=drive.googleapis.com)
3. [Authorize credentials for desktop app](https://developers.google.com/drive/api/quickstart/python#authorize_credentials_for_a_desktop_application)
4. `pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib`
## Retrieve the Google Docs[](#retrieve-the-google-docs "Direct link to Retrieve the Google Docs")
By default, the `GoogleDriveRetriever` expects the `credentials.json` file to be `~/.credentials/credentials.json`, but this is configurable using the `GOOGLE_ACCOUNT_FILE` environment variable. The location of `token.json` uses the same directory (or use the parameter `token_path`). Note that `token.json` will be created automatically the first time you use the retriever.
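For example, to point the retriever at credentials stored in a non-default location (the path below is a placeholder):
```
import os

# The retriever reads this variable when it authenticates against the Drive API.
os.environ["GOOGLE_ACCOUNT_FILE"] = "/path/to/credentials.json"
```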
`GoogleDriveRetriever` can retrieve a selection of files using search requests.
By default, if you use a `folder_id`, all the files inside this folder can be retrieved as `Document` objects.
You can obtain your folder and document id from the URL:
* Folder: [https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5](https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5) -> folder id is `"1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5"`
* Document: [https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit](https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit) -> document id is `"1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw"`
The special value `root` refers to the top level of your personal Drive.
```
from langchain_googledrive.retrievers import GoogleDriveRetriever

folder_id = "root"
# folder_id='1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5'
retriever = GoogleDriveRetriever(
    num_results=2,
)
```
By default, all files with these MIME types can be converted to `Document`.
* `text/text`
* `text/plain`
* `text/html`
* `text/csv`
* `text/markdown`
* `image/png`
* `image/jpeg`
* `application/epub+zip`
* `application/pdf`
* `application/rtf`
* `application/vnd.google-apps.document` (GDoc)
* `application/vnd.google-apps.presentation` (GSlide)
* `application/vnd.google-apps.spreadsheet` (GSheet)
* `application/vnd.google.colaboratory` (Notebook colab)
* `application/vnd.openxmlformats-officedocument.presentationml.presentation` (PPTX)
* `application/vnd.openxmlformats-officedocument.wordprocessingml.document` (DOCX)
It’s possible to update or customize this list; see the documentation of `GoogleDriveRetriever`. The corresponding packages must be installed:
```
%pip install --upgrade --quiet unstructured
```
```
retriever.get_relevant_documents("machine learning")
```
You can customize the criteria used to select files. A set of predefined filters is provided:
| Template | Description |
| --- | --- |
| `gdrive-all-in-folder` | Return all compatible files from a `folder_id` |
| `gdrive-query` | Search `query` in all drives |
| `gdrive-by-name` | Search file with name `query` |
| `gdrive-query-in-folder` | Search `query` in `folder_id` (and sub-folders in `_recursive=true`) |
| `gdrive-mime-type` | Search a specific `mime_type` |
| `gdrive-mime-type-in-folder` | Search a specific `mime_type` in `folder_id` |
| `gdrive-query-with-mime-type` | Search `query` with a specific `mime_type` |
| `gdrive-query-with-mime-type-and-folder` | Search `query` with a specific `mime_type` and in `folder_id` |
```
retriever = GoogleDriveRetriever(
    template="gdrive-query",  # Search everywhere
    num_results=2,  # But take only 2 documents
)
for doc in retriever.get_relevant_documents("machine learning"):
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```
Alternatively, you can customize the prompt with a specialized `PromptTemplate`:
```
from langchain_core.prompts import PromptTemplate

retriever = GoogleDriveRetriever(
    template=PromptTemplate(
        input_variables=["query"],
        # See https://developers.google.com/drive/api/guides/search-files
        template="(fullText contains '{query}') "
        "and mimeType='application/vnd.google-apps.document' "
        "and modifiedTime > '2000-01-01T00:00:00' "
        "and trashed=false",
    ),
    num_results=2,
    # See https://developers.google.com/drive/api/v3/reference/files/list
    includeItemsFromAllDrives=False,
    supportsAllDrives=False,
)
for doc in retriever.get_relevant_documents("machine learning"):
    print(f"{doc.metadata['name']}:")
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```
Each Google Drive file has a `description` field in its metadata (see the _details of a file_). Use the `snippets` mode to return the description of the selected files.
```
retriever = GoogleDriveRetriever(
    template="gdrive-mime-type-in-folder",
    folder_id=folder_id,
    mime_type="application/vnd.google-apps.document",  # Only Google Docs
    num_results=2,
    mode="snippets",
    includeItemsFromAllDrives=False,
    supportsAllDrives=False,
)
retriever.get_relevant_documents("machine learning")
```
https://python.langchain.com/docs/integrations/retrievers/jaguar/ | ## JaguarDB Vector Database
> [JaguarDB Vector Database](http://www.jaguardb.com/windex.html)
>
> 1. It is a distributed vector database
> 2. The “ZeroMove” feature of JaguarDB enables instant horizontal scalability
> 3. Multimodal: embeddings, text, images, videos, PDFs, audio, time series, and geospatial
> 4. All-masters: allows both parallel reads and writes
> 5. Anomaly detection capabilities
> 6. RAG support: combines LLM with proprietary and real-time data
> 7. Shared metadata: sharing of metadata across multiple vector indexes
> 8. Distance metrics: Euclidean, Cosine, InnerProduct, Manhattan, Chebyshev, Hamming, Jaccard, Minkowski
## Prerequisites[](#prerequisites "Direct link to Prerequisites")
There are two requirements for running the examples in this file.
1. You must install and set up the JaguarDB server and its HTTP gateway server. Please refer to the instructions in: [www.jaguardb.com](http://www.jaguardb.com/)
2. You must install the HTTP client package for JaguarDB:
```
pip install -U jaguardb-http-client
```
## RAG With Langchain[](#rag-with-langchain "Direct link to RAG With Langchain")
This section demonstrates chatting with an LLM together with Jaguar in the LangChain software stack.
```
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores.jaguar import Jaguar
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import CharacterTextSplitter

"""
Load a text file into a set of documents
"""
loader = TextLoader("../../modules/state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=300)
docs = text_splitter.split_documents(documents)

"""
Instantiate a Jaguar vector store
"""
### Jaguar HTTP endpoint
url = "http://192.168.5.88:8080/fwww/"

### Use OpenAI embedding model
embeddings = OpenAIEmbeddings()

### Pod is a database for vectors
pod = "vdb"

### Vector store name
store = "langchain_rag_store"

### Vector index name
vector_index = "v"

### Type of the vector index
# cosine: distance metric
# fraction: embedding vectors are decimal numbers
# float: values stored with floating-point numbers
vector_type = "cosine_fraction_float"

### Dimension of each embedding vector
vector_dimension = 1536

### Instantiate a Jaguar store object
vectorstore = Jaguar(
    pod, store, vector_index, vector_type, vector_dimension, url, embeddings
)

"""
Login must be performed to authorize the client.
The environment variable JAGUAR_API_KEY or file $HOME/.jagrc
should contain the API key for accessing JaguarDB servers.
"""
vectorstore.login()

"""
Create vector store on the JaguarDB database server.
This should be done only once.
"""
# Extra metadata fields for the vector store
metadata = "category char(16)"
# Number of characters for the text field of the store
text_size = 4096
# Create a vector store on the server
vectorstore.create(metadata, text_size)

"""
Add the texts from the text splitter to our vectorstore
"""
vectorstore.add_documents(docs)

""" Get the retriever object """
retriever = vectorstore.as_retriever()
# retriever = vectorstore.as_retriever(search_kwargs={"where": "m1='123' and m2='abc'"})

""" The retriever object can be used with LangChain and LLM """
```
## Interaction With Jaguar Vector Store[](#interaction-with-jaguar-vector-store "Direct link to Interaction With Jaguar Vector Store")
Users can interact directly with the Jaguar vector store for similarity search and anomaly detection.
```
from langchain_community.vectorstores.jaguar import Jaguar
from langchain_openai import OpenAIEmbeddings

# Instantiate a Jaguar vector store object
url = "http://192.168.3.88:8080/fwww/"
pod = "vdb"
store = "langchain_test_store"
vector_index = "v"
vector_type = "cosine_fraction_float"
vector_dimension = 10
embeddings = OpenAIEmbeddings()
vectorstore = Jaguar(
    pod, store, vector_index, vector_type, vector_dimension, url, embeddings
)

# Login for authorization
vectorstore.login()

# Create the vector store with two metadata fields
# This needs to be run only once.
metadata_str = "author char(32), category char(16)"
vectorstore.create(metadata_str, 1024)

# Add a list of texts
texts = ["foo", "bar", "baz"]
metadatas = [
    {"author": "Adam", "category": "Music"},
    {"author": "Eve", "category": "Music"},
    {"author": "John", "category": "History"},
]
ids = vectorstore.add_texts(texts=texts, metadatas=metadatas)

# Search similar text
output = vectorstore.similarity_search(
    query="foo",
    k=1,
    metadatas=["author", "category"],
)
assert output[0].page_content == "foo"
assert output[0].metadata["author"] == "Adam"
assert output[0].metadata["category"] == "Music"
assert len(output) == 1

# Search with filtering (where)
where = "author='Eve'"
output = vectorstore.similarity_search(
    query="foo",
    k=3,
    fetch_k=9,
    where=where,
    metadatas=["author", "category"],
)
assert output[0].page_content == "bar"
assert output[0].metadata["author"] == "Eve"
assert output[0].metadata["category"] == "Music"
assert len(output) == 1

# Anomaly detection
result = vectorstore.is_anomalous(
    query="dogs can jump high",
)
assert result is False

# Remove all data in the store
vectorstore.clear()
assert vectorstore.count() == 0

# Remove the store completely
vectorstore.drop()

# Logout
vectorstore.logout()
```
https://python.langchain.com/docs/integrations/retrievers/google_vertex_ai_search/ | ## Google Vertex AI Search
> [Google Vertex AI Search](https://cloud.google.com/enterprise-search) (formerly known as `Enterprise Search` on `Generative AI App Builder`) is a part of the [Vertex AI](https://cloud.google.com/vertex-ai) machine learning platform offered by `Google Cloud`.
>
> `Vertex AI Search` lets organizations quickly build generative AI-powered search engines for customers and employees. It’s underpinned by a variety of `Google Search` technologies, including semantic search, which helps deliver more relevant results than traditional keyword-based search techniques by using natural language processing and machine learning techniques to infer relationships within the content and intent from the user’s query input. Vertex AI Search also benefits from Google’s expertise in understanding how users search and factors in content relevance to order displayed results.
> `Vertex AI Search` is available in the `Google Cloud Console` and via an API for enterprise workflow integration.
This notebook demonstrates how to configure `Vertex AI Search` and use the Vertex AI Search retriever. The Vertex AI Search retriever encapsulates the [Python client library](https://cloud.google.com/generative-ai-app-builder/docs/libraries#client-libraries-install-python) and uses it to access the [Search Service API](https://cloud.google.com/python/docs/reference/discoveryengine/latest/google.cloud.discoveryengine_v1beta.services.search_service).
## Install pre-requisites[](#install-pre-requisites "Direct link to Install pre-requisites")
You need to install the `google-cloud-discoveryengine` package to use the Vertex AI Search retriever.
```
%pip install --upgrade --quiet google-cloud-discoveryengine
```
## Configure access to Google Cloud and Vertex AI Search[](#configure-access-to-google-cloud-and-vertex-ai-search "Direct link to Configure access to Google Cloud and Vertex AI Search")
Vertex AI Search is generally available without allowlist as of August 2023.
Before you can use the retriever, you need to complete the following steps:
### Create a search engine and populate an unstructured data store[](#create-a-search-engine-and-populate-an-unstructured-data-store "Direct link to Create a search engine and populate an unstructured data store")
* Follow the instructions in the [Vertex AI Search Getting Started guide](https://cloud.google.com/generative-ai-app-builder/docs/try-enterprise-search) to set up a Google Cloud project and Vertex AI Search.
* [Use the Google Cloud Console to create an unstructured data store](https://cloud.google.com/generative-ai-app-builder/docs/create-engine-es#unstructured-data)
* Populate it with the example PDF documents from the `gs://cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs` Cloud Storage folder.
* Make sure to use the `Cloud Storage (without metadata)` option.
### Set credentials to access Vertex AI Search API[](#set-credentials-to-access-vertex-ai-search-api "Direct link to Set credentials to access Vertex AI Search API")
The [Vertex AI Search client libraries](https://cloud.google.com/generative-ai-app-builder/docs/libraries) used by the Vertex AI Search retriever provide high-level language support for authenticating to Google Cloud programmatically. Client libraries support [Application Default Credentials (ADC)](https://cloud.google.com/docs/authentication/application-default-credentials); the libraries look for credentials in a set of defined locations and use those credentials to authenticate requests to the API. With ADC, you can make credentials available to your application in a variety of environments, such as local development or production, without needing to modify your application code.
If running in [Google Colab](https://colab.google/), authenticate with `google.colab.auth`; otherwise, follow one of the [supported methods](https://cloud.google.com/docs/authentication/application-default-credentials) to make sure that your Application Default Credentials are properly set.
```
import sys

if "google.colab" in sys.modules:
    from google.colab import auth as google_auth

    google_auth.authenticate_user()
```
## Configure and use the Vertex AI Search retriever[](#configure-and-use-the-vertex-ai-search-retriever "Direct link to Configure and use the Vertex AI Search retriever")
The Vertex AI Search retriever is implemented in the `langchain.retriever.GoogleVertexAISearchRetriever` class. The `get_relevant_documents` method returns a list of `langchain.schema.Document` documents where the `page_content` field of each document is populated with the document content. Depending on the data type used in Vertex AI Search (website, structured or unstructured) the `page_content` field is populated as follows:
* Website with advanced indexing: an `extractive answer` that matches a query. The `metadata` field is populated with metadata (if any) of the document from which the segments or answers were extracted.
* Unstructured data source: either an `extractive segment` or an `extractive answer` that matches a query. The `metadata` field is populated with metadata (if any) of the document from which the segments or answers were extracted.
* Structured data source: a string json containing all the fields returned from the structured data source. The `metadata` field is populated with metadata (if any) of the document
An extractive answer is verbatim text that is returned with each search result. It is extracted directly from the original document. Extractive answers are typically displayed near the top of web pages to provide an end user with a brief answer that is contextually relevant to their query. Extractive answers are available for website and unstructured search.
An extractive segment is verbatim text that is returned with each search result. An extractive segment is usually more verbose than an extractive answer. Extractive segments can be displayed as an answer to a query, and can be used to perform post-processing tasks and as input for large language models to generate answers or new text. Extractive segments are available for unstructured search.
For more information about extractive segments and extractive answers refer to [product documentation](https://cloud.google.com/generative-ai-app-builder/docs/snippets).
NOTE: Extractive segments require the [Enterprise edition](https://cloud.google.com/generative-ai-app-builder/docs/about-advanced-features#enterprise-features) features to be enabled.
When creating an instance of the retriever you can specify a number of parameters that control which data store to access and how a natural language query is processed, including configurations for extractive answers and segments.
### The mandatory parameters are:[](#the-mandatory-parameters-are "Direct link to The mandatory parameters are:")
* `project_id` - Your Google Cloud Project ID.
* `location_id` - The location of the data store.
* `global` (default)
* `us`
* `eu`
One of:

* `search_engine_id` - The ID of the search app you want to use. (Required for Blended Search)
* `data_store_id` - The ID of the data store you want to use.
The `project_id`, `search_engine_id` and `data_store_id` parameters can be provided explicitly in the retriever’s constructor or through the environment variables - `PROJECT_ID`, `SEARCH_ENGINE_ID` and `DATA_STORE_ID`.
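For example, a minimal sketch of the environment-variable based configuration (the values are placeholders, and the constructor call assumes `location_id` falls back to its `global` default):

```
import os

from langchain_community.retrievers import GoogleVertexAISearchRetriever

os.environ["PROJECT_ID"] = "<YOUR PROJECT ID>"  # placeholder
os.environ["DATA_STORE_ID"] = "<YOUR DATA STORE ID>"  # placeholder

# With the variables set, the IDs do not need to be passed to the constructor.
retriever = GoogleVertexAISearchRetriever(max_documents=3)
```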
You can also configure a number of optional parameters, including:
* `max_documents` - The maximum number of documents used to provide extractive segments or extractive answers
* `get_extractive_answers` - By default, the retriever is configured to return extractive segments.
    * Set this field to `True` to return extractive answers. This is used only when `engine_data_type` is set to `0` (unstructured).
* `max_extractive_answer_count` - The maximum number of extractive answers returned in each search result.
    * At most 5 answers will be returned. This is used only when `engine_data_type` is set to `0` (unstructured).
* `max_extractive_segment_count` - The maximum number of extractive segments returned in each search result.
    * Currently one segment will be returned. This is used only when `engine_data_type` is set to `0` (unstructured).
* `filter` - The filter expression for the search results based on the metadata associated with the documents in the data store.
* `query_expansion_condition` - Specification to determine under which conditions query expansion should occur.
* `0` - Unspecified query expansion condition. In this case, server behavior defaults to disabled.
* `1` - Disabled query expansion. Only the exact search query is used, even if SearchResponse.total\_size is zero.
* `2` - Automatic query expansion built by the Search API.
* `engine_data_type` - Defines the Vertex AI Search data type
* `0` - Unstructured data
* `1` - Structured data
* `2` - Website data
* `3` - [Blended search](https://cloud.google.com/generative-ai-app-builder/docs/create-data-store-es#multi-data-stores)
### Migration guide for `GoogleCloudEnterpriseSearchRetriever`[](#migration-guide-for-googlecloudenterprisesearchretriever "Direct link to migration-guide-for-googlecloudenterprisesearchretriever")
In previous versions, this retriever was called `GoogleCloudEnterpriseSearchRetriever`.
To update to the new retriever, make the following changes:
* Change the import from `from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever` -> `from langchain.retrievers import GoogleVertexAISearchRetriever`.
* Change all class references from `GoogleCloudEnterpriseSearchRetriever` -> `GoogleVertexAISearchRetriever`, as sketched below.
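An illustrative before/after sketch of the rename (the constructor arguments reuse the variables defined in the examples below):

```
# Before (deprecated):
# from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever

# After:
from langchain.retrievers import GoogleVertexAISearchRetriever

retriever = GoogleVertexAISearchRetriever(
    project_id=PROJECT_ID,
    location_id=LOCATION_ID,
    data_store_id=DATA_STORE_ID,
)
```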
### Configure and use the retriever for **unstructured** data with extractive segments[](#configure-and-use-the-retriever-for-unstructured-data-with-extractive-segments "Direct link to configure-and-use-the-retriever-for-unstructured-data-with-extractive-segments")
```
from langchain_community.retrievers import (
    GoogleVertexAIMultiTurnSearchRetriever,
    GoogleVertexAISearchRetriever,
)

PROJECT_ID = "<YOUR PROJECT ID>"  # Set to your Project ID
LOCATION_ID = "<YOUR LOCATION>"  # Set to your data store location
SEARCH_ENGINE_ID = "<YOUR SEARCH APP ID>"  # Set to your search app ID
DATA_STORE_ID = "<YOUR DATA STORE ID>"  # Set to your data store ID
```
```
retriever = GoogleVertexAISearchRetriever(
    project_id=PROJECT_ID,
    location_id=LOCATION_ID,
    data_store_id=DATA_STORE_ID,
    max_documents=3,
)
```
```
query = "What are Alphabet's Other Bets?"

result = retriever.get_relevant_documents(query)
for doc in result:
    print(doc)
```
### Configure and use the retriever for **unstructured** data with extractive answers[](#configure-and-use-the-retriever-for-unstructured-data-with-extractive-answers "Direct link to configure-and-use-the-retriever-for-unstructured-data-with-extractive-answers")
```
retriever = GoogleVertexAISearchRetriever(
    project_id=PROJECT_ID,
    location_id=LOCATION_ID,
    data_store_id=DATA_STORE_ID,
    max_documents=3,
    max_extractive_answer_count=3,
    get_extractive_answers=True,
)

result = retriever.get_relevant_documents(query)
for doc in result:
    print(doc)
```
### Configure and use the retriever for **structured** data[](#configure-and-use-the-retriever-for-structured-data "Direct link to configure-and-use-the-retriever-for-structured-data")
```
retriever = GoogleVertexAISearchRetriever(
    project_id=PROJECT_ID,
    location_id=LOCATION_ID,
    data_store_id=DATA_STORE_ID,
    max_documents=3,
    engine_data_type=1,
)

result = retriever.get_relevant_documents(query)
for doc in result:
    print(doc)
```
### Configure and use the retriever for **website** data with Advanced Website Indexing[](#configure-and-use-the-retriever-for-website-data-with-advanced-website-indexing "Direct link to configure-and-use-the-retriever-for-website-data-with-advanced-website-indexing")
```
retriever = GoogleVertexAISearchRetriever(
    project_id=PROJECT_ID,
    location_id=LOCATION_ID,
    data_store_id=DATA_STORE_ID,
    max_documents=3,
    max_extractive_answer_count=3,
    get_extractive_answers=True,
    engine_data_type=2,
)

result = retriever.get_relevant_documents(query)
for doc in result:
    print(doc)
```
### Configure and use the retriever for **blended** data[](#configure-and-use-the-retriever-for-blended-data "Direct link to configure-and-use-the-retriever-for-blended-data")
```
retriever = GoogleVertexAISearchRetriever(
    project_id=PROJECT_ID,
    location_id=LOCATION_ID,
    search_engine_id=SEARCH_ENGINE_ID,
    max_documents=3,
    engine_data_type=3,
)

result = retriever.get_relevant_documents(query)
for doc in result:
    print(doc)
```
### Configure and use the retriever for multi-turn search[](#configure-and-use-the-retriever-for-multi-turn-search "Direct link to Configure and use the retriever for multi-turn search")
[Search with follow-ups](https://cloud.google.com/generative-ai-app-builder/docs/multi-turn-search) is based on generative AI models and it is different from the regular unstructured data search.
```
retriever = GoogleVertexAIMultiTurnSearchRetriever(
    project_id=PROJECT_ID, location_id=LOCATION_ID, data_store_id=DATA_STORE_ID
)

result = retriever.get_relevant_documents(query)
for doc in result:
    print(doc)
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:11.955Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/google_vertex_ai_search/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/google_vertex_ai_search/",
"description": "Google Vertex AI Search",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4600",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"google_vertex_ai_search\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:11 GMT",
"etag": "W/\"21121f9af5fc02131df1d7d7851d5232\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::4hr64-1713753731752-54e474dd19e1"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/google_vertex_ai_search/",
"property": "og:url"
},
{
"content": "Google Vertex AI Search | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Google Vertex AI Search",
"property": "og:description"
}
],
"title": "Google Vertex AI Search | 🦜️🔗 LangChain"
} | Google Vertex AI Search
Google Vertex AI Search (formerly known as Enterprise Search on Generative AI App Builder) is a part of the Vertex AI machine learning platform offered by Google Cloud.
Vertex AI Search lets organizations quickly build generative AI-powered search engines for customers and employees. It’s underpinned by a variety of Google Search technologies, including semantic search, which helps deliver more relevant results than traditional keyword-based search techniques by using natural language processing and machine learning techniques to infer relationships within the content and intent from the user’s query input. Vertex AI Search also benefits from Google’s expertise in understanding how users search and factors in content relevance to order displayed results.
Vertex AI Search is available in the Google Cloud Console and via an API for enterprise workflow integration.
This notebook demonstrates how to configure Vertex AI Search and use the Vertex AI Search retriever. The Vertex AI Search retriever encapsulates the Python client library and uses it to access the Search Service API.
Install pre-requisites
You need to install the google-cloud-discoveryengine package to use the Vertex AI Search retriever.
%pip install --upgrade --quiet google-cloud-discoveryengine
Configure access to Google Cloud and Vertex AI Search
Vertex AI Search is generally available without allowlist as of August 2023.
Before you can use the retriever, you need to complete the following steps:
Create a search engine and populate an unstructured data store
Follow the instructions in the Vertex AI Search Getting Started guide to set up a Google Cloud project and Vertex AI Search.
Use the Google Cloud Console to create an unstructured data store
Populate it with the example PDF documents from the gs://cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs Cloud Storage folder.
Make sure to use the Cloud Storage (without metadata) option.
Set credentials to access Vertex AI Search API
The Vertex AI Search client libraries used by the Vertex AI Search retriever provide high-level language support for authenticating to Google Cloud programmatically. Client libraries support Application Default Credentials (ADC); the libraries look for credentials in a set of defined locations and use those credentials to authenticate requests to the API. With ADC, you can make credentials available to your application in a variety of environments, such as local development or production, without needing to modify your application code.
If running in Google Colab, authenticate with google.colab.auth; otherwise, follow one of the supported methods to make sure that your Application Default Credentials are properly set.
import sys
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
Configure and use the Vertex AI Search retriever
The Vertex AI Search retriever is implemented in the langchain.retriever.GoogleVertexAISearchRetriever class. The get_relevant_documents method returns a list of langchain.schema.Document documents where the page_content field of each document is populated with the document content. Depending on the data type used in Vertex AI Search (website, structured or unstructured) the page_content field is populated as follows:
Website with advanced indexing: an extractive answer that matches a query. The metadata field is populated with metadata (if any) of the document from which the segments or answers were extracted.
Unstructured data source: either an extractive segment or an extractive answer that matches a query. The metadata field is populated with metadata (if any) of the document from which the segments or answers were extracted.
Structured data source: a string json containing all the fields returned from the structured data source. The metadata field is populated with metadata (if any) of the document
An extractive answer is verbatim text that is returned with each search result. It is extracted directly from the original document. Extractive answers are typically displayed near the top of web pages to provide an end user with a brief answer that is contextually relevant to their query. Extractive answers are available for website and unstructured search.
An extractive segment is verbatim text that is returned with each search result. An extractive segment is usually more verbose than an extractive answer. Extractive segments can be displayed as an answer to a query, and can be used to perform post-processing tasks and as input for large language models to generate answers or new text. Extractive segments are available for unstructured search.
For more information about extractive segments and extractive answers refer to product documentation.
NOTE: Extractive segments require the Enterprise edition features to be enabled.
When creating an instance of the retriever you can specify a number of parameters that control which data store to access and how a natural language query is processed, including configurations for extractive answers and segments.
The mandatory parameters are:
project_id - Your Google Cloud Project ID.
location_id - The location of the data store.
global (default)
us
eu
One of: - search_engine_id - The ID of the search app you want to use. (Required for Blended Search) - data_store_id - The ID of the data store you want to use.
The project_id, search_engine_id and data_store_id parameters can be provided explicitly in the retriever’s constructor or through the environment variables - PROJECT_ID, SEARCH_ENGINE_ID and DATA_STORE_ID.
You can also configure a number of optional parameters, including:
max_documents - The maximum number of documents used to provide extractive segments or extractive answers
get_extractive_answers - By default, the retriever is configured to return extractive segments.
Set this field to True to return extractive answers. This is used only when engine_data_type set to 0 (unstructured)
max_extractive_answer_count - The maximum number of extractive answers returned in each search result.
At most 5 answers will be returned. This is used only when engine_data_type set to 0 (unstructured).
max_extractive_segment_count - The maximum number of extractive segments returned in each search result.
Currently one segment will be returned. This is used only when engine_data_type set to 0 (unstructured).
filter - The filter expression for the search results based on the metadata associated with the documents in the data store.
query_expansion_condition - Specification to determine under which conditions query expansion should occur.
0 - Unspecified query expansion condition. In this case, server behavior defaults to disabled.
1 - Disabled query expansion. Only the exact search query is used, even if SearchResponse.total_size is zero.
2 - Automatic query expansion built by the Search API.
engine_data_type - Defines the Vertex AI Search data type
0 - Unstructured data
1 - Structured data
2 - Website data
3 - Blended search
Migration guide for GoogleCloudEnterpriseSearchRetriever
In previous versions, this retriever was called GoogleCloudEnterpriseSearchRetriever.
To update to the new retriever, make the following changes:
Change the import from: from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever ->from langchain.retrievers import GoogleVertexAISearchRetriever.
Change all class references from GoogleCloudEnterpriseSearchRetriever ->GoogleVertexAISearchRetriever.
Configure and use the retriever for unstructured data with extractive segments
from langchain_community.retrievers import (
GoogleVertexAIMultiTurnSearchRetriever,
GoogleVertexAISearchRetriever,
)
PROJECT_ID = "<YOUR PROJECT ID>" # Set to your Project ID
LOCATION_ID = "<YOUR LOCATION>" # Set to your data store location
SEARCH_ENGINE_ID = "<YOUR SEARCH APP ID>" # Set to your search app ID
DATA_STORE_ID = "<YOUR DATA STORE ID>" # Set to your data store ID
retriever = GoogleVertexAISearchRetriever(
project_id=PROJECT_ID,
location_id=LOCATION_ID,
data_store_id=DATA_STORE_ID,
max_documents=3,
)
query = "What are Alphabet's Other Bets?"
result = retriever.get_relevant_documents(query)
for doc in result:
print(doc)
Configure and use the retriever for unstructured data with extractive answers
retriever = GoogleVertexAISearchRetriever(
project_id=PROJECT_ID,
location_id=LOCATION_ID,
data_store_id=DATA_STORE_ID,
max_documents=3,
max_extractive_answer_count=3,
get_extractive_answers=True,
)
result = retriever.get_relevant_documents(query)
for doc in result:
print(doc)
Configure and use the retriever for structured data
retriever = GoogleVertexAISearchRetriever(
project_id=PROJECT_ID,
location_id=LOCATION_ID,
data_store_id=DATA_STORE_ID,
max_documents=3,
engine_data_type=1,
)
result = retriever.get_relevant_documents(query)
for doc in result:
print(doc)
Configure and use the retriever for website data with Advanced Website Indexing
retriever = GoogleVertexAISearchRetriever(
project_id=PROJECT_ID,
location_id=LOCATION_ID,
data_store_id=DATA_STORE_ID,
max_documents=3,
max_extractive_answer_count=3,
get_extractive_answers=True,
engine_data_type=2,
)
result = retriever.get_relevant_documents(query)
for doc in result:
print(doc)
Configure and use the retriever for blended data
retriever = GoogleVertexAISearchRetriever(
project_id=PROJECT_ID,
location_id=LOCATION_ID,
search_engine_id=SEARCH_ENGINE_ID,
max_documents=3,
engine_data_type=3,
)
result = retriever.get_relevant_documents(query)
for doc in result:
print(doc)
Configure and use the retriever for multi-turn search
Search with follow-ups is based on generative AI models and it is different from the regular unstructured data search.
retriever = GoogleVertexAIMultiTurnSearchRetriever(
project_id=PROJECT_ID, location_id=LOCATION_ID, data_store_id=DATA_STORE_ID
)
result = retriever.get_relevant_documents(query)
for doc in result:
print(doc) |
https://python.langchain.com/docs/integrations/providers/wandb_tracking/ | ## Weights & Biases
This notebook goes over how to track your LangChain experiments into one centralized Weights and Biases dashboard. To learn more about prompt engineering and the callback please refer to this Report which explains both alongside the resultant dashboards you can expect to see.
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1DXH4beT4HFaRKy_Vm4PoxhXVDRf7Ym8L?usp=sharing)
[View Report](https://wandb.ai/a-sh0ts/langchain_callback_demo/reports/Prompt-Engineering-LLMs-with-LangChain-and-W-B--VmlldzozNjk1NTUw#%F0%9F%91%8B-how-to-build-a-callback-in-langchain-for-better-prompt-engineering)
**Note**: _the `WandbCallbackHandler` is being deprecated in favour of the `WandbTracer`_ . In future please use the `WandbTracer` as it is more flexible and allows for more granular logging. To know more about the `WandbTracer` refer to the [agent\_with\_wandb\_tracing](https://python.langchain.com/docs/integrations/providers/wandb_tracing/) notebook or use the following [colab notebook](http://wandb.me/prompts-quickstart). To know more about Weights & Biases Prompts refer to the following [prompts documentation](https://docs.wandb.ai/guides/prompts).
```
%pip install --upgrade --quiet wandb
%pip install --upgrade --quiet pandas
%pip install --upgrade --quiet textstat
%pip install --upgrade --quiet spacy
!python -m spacy download en_core_web_sm
```
```
import os

os.environ["WANDB_API_KEY"] = ""
# os.environ["OPENAI_API_KEY"] = ""
# os.environ["SERPAPI_API_KEY"] = ""
```
```
from datetime import datetime

from langchain.callbacks import StdOutCallbackHandler, WandbCallbackHandler
from langchain_openai import OpenAI
```
```
Callback Handler that logs to Weights and Biases.

Parameters:
    job_type (str): The type of job.
    project (str): The project to log to.
    entity (str): The entity to log to.
    tags (list): The tags to log.
    group (str): The group to log to.
    name (str): The name of the run.
    notes (str): The notes to log.
    visualize (bool): Whether to visualize the run.
    complexity_metrics (bool): Whether to log complexity metrics.
    stream_logs (bool): Whether to stream callback actions to W&B
```
```
Default values for WandbCallbackHandler(...)

visualize: bool = False,
complexity_metrics: bool = False,
stream_logs: bool = False,
```
NOTE: For beta workflows we have made the default analysis based on textstat and the visualizations based on spacy
```
"""Main function.This function is used to try the callback handler.Scenarios:1. OpenAI LLM2. Chain with multiple SubChains on multiple generations3. Agent with Tools"""session_group = datetime.now().strftime("%m.%d.%Y_%H.%M.%S")wandb_callback = WandbCallbackHandler( job_type="inference", project="langchain_callback_demo", group=f"minimal_{session_group}", name="llm", tags=["test"],)callbacks = [StdOutCallbackHandler(), wandb_callback]llm = OpenAI(temperature=0, callbacks=callbacks)
```
```
wandb: Currently logged in as: harrison-chase. Use `wandb login --relogin` to force relogin
wandb: WARNING The wandb callback is currently in beta and is subject to change based on updates to `langchain`. Please report any issues to https://github.com/wandb/wandb/issues with the tag `langchain`.
```
Tracking run with wandb version 0.14.0
Run data is saved locally in `/Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150408-e47j1914`
```
# Defaults for WandbCallbackHandler.flush_tracker(...)

reset: bool = True,
finish: bool = False,
```
The `flush_tracker` function is used to log LangChain sessions to Weights & Biases. It takes in the LangChain module or agent, and logs at minimum the prompts and generations alongside the serialized form of the LangChain module to the specified Weights & Biases project. By default we reset the session as opposed to concluding the session outright.
```
# SCENARIO 1 - LLM
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 3)
wandb_callback.flush_tracker(llm, name="simple_sequential")
```
Waiting for W&B process to finish... **(success).**
Find logs at: `./wandb/run-20230318_150408-e47j1914/logs`
```
VBox(children=(Label(value='Waiting for wandb.init()...\r'), FloatProgress(value=0.016745895149999985, max=1.0…
```
Tracking run with wandb version 0.14.0
Run data is saved locally in `/Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150534-jyxma7hu`
```
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
```
```
# SCENARIO 2 - Chain
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""

prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)

test_prompts = [
    {
        "title": "documentary about good video games that push the boundary of game design"
    },
    {"title": "cocaine bear vs heroin wolf"},
    {"title": "the best in class mlops tooling"},
]
synopsis_chain.apply(test_prompts)
wandb_callback.flush_tracker(synopsis_chain, name="agent")
```
Waiting for W&B process to finish... **(success).**
Find logs at: `./wandb/run-20230318_150534-jyxma7hu/logs`
```
VBox(children=(Label(value='Waiting for wandb.init()...\r'), FloatProgress(value=0.016736786816666675, max=1.0…
```
Tracking run with wandb version 0.14.0
Run data is saved locally in `/Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150550-wzy59zjq`
```
from langchain.agents import AgentType, initialize_agent, load_tools
```
```
# SCENARIO 3 - Agent with Tools
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
agent.run(
    "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?",
    callbacks=callbacks,
)
wandb_callback.flush_tracker(agent, reset=False, finish=True)
```
```
> Entering new AgentExecutor chain...
 I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation: DiCaprio had a steady girlfriend in Camila Morrone. He had been with the model turned actress for nearly five years, as they were first said to be dating at the end of 2017. And the now 26-year-old Morrone is no stranger to Hollywood.
Thought: I need to calculate her age raised to the 0.43 power.
Action: Calculator
Action Input: 26^0.43
Observation: Answer: 4.059182145592686
Thought: I now know the final answer.
Final Answer: Leo DiCaprio's girlfriend is Camila Morrone and her current age raised to the 0.43 power is 4.059182145592686.

> Finished chain.
```
Waiting for W&B process to finish... **(success).**
Find logs at: `./wandb/run-20230318_150550-wzy59zjq/logs` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:12.329Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/wandb_tracking/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/wandb_tracking/",
"description": "This notebook goes over how to track your LangChain experiments into one",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3584",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"wandb_tracking\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:12 GMT",
"etag": "W/\"8466650738120bdd7d4a248a6d4b4f12\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::l9cgv-1713753732274-ea00078869ee"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/wandb_tracking/",
"property": "og:url"
},
{
"content": "Weights & Biases | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook goes over how to track your LangChain experiments into one",
"property": "og:description"
}
],
"title": "Weights & Biases | 🦜️🔗 LangChain"
} | Weights & Biases
This notebook goes over how to track your LangChain experiments into one centralized Weights and Biases dashboard. To learn more about prompt engineering and the callback please refer to this Report which explains both alongside the resultant dashboards you can expect to see.
View Report
Note: the WandbCallbackHandler is being deprecated in favour of the WandbTracer . In future please use the WandbTracer as it is more flexible and allows for more granular logging. To know more about the WandbTracer refer to the agent_with_wandb_tracing notebook or use the following colab notebook. To know more about Weights & Biases Prompts refer to the following prompts documentation.
%pip install --upgrade --quiet wandb
%pip install --upgrade --quiet pandas
%pip install --upgrade --quiet textstat
%pip install --upgrade --quiet spacy
!python -m spacy download en_core_web_sm
import os
os.environ["WANDB_API_KEY"] = ""
# os.environ["OPENAI_API_KEY"] = ""
# os.environ["SERPAPI_API_KEY"] = ""
from datetime import datetime
from langchain.callbacks import StdOutCallbackHandler, WandbCallbackHandler
from langchain_openai import OpenAI
Callback Handler that logs to Weights and Biases.
Parameters:
job_type (str): The type of job.
project (str): The project to log to.
entity (str): The entity to log to.
tags (list): The tags to log.
group (str): The group to log to.
name (str): The name of the run.
notes (str): The notes to log.
visualize (bool): Whether to visualize the run.
complexity_metrics (bool): Whether to log complexity metrics.
stream_logs (bool): Whether to stream callback actions to W&B
Default values for WandbCallbackHandler(...)
visualize: bool = False,
complexity_metrics: bool = False,
stream_logs: bool = False,
NOTE: For beta workflows we have made the default analysis based on textstat and the visualizations based on spacy
"""Main function.
This function is used to try the callback handler.
Scenarios:
1. OpenAI LLM
2. Chain with multiple SubChains on multiple generations
3. Agent with Tools
"""
session_group = datetime.now().strftime("%m.%d.%Y_%H.%M.%S")
wandb_callback = WandbCallbackHandler(
job_type="inference",
project="langchain_callback_demo",
group=f"minimal_{session_group}",
name="llm",
tags=["test"],
)
callbacks = [StdOutCallbackHandler(), wandb_callback]
llm = OpenAI(temperature=0, callbacks=callbacks)
wandb: Currently logged in as: harrison-chase. Use `wandb login --relogin` to force relogin
wandb: WARNING The wandb callback is currently in beta and is subject to change based on updates to `langchain`. Please report any issues to https://github.com/wandb/wandb/issues with the tag `langchain`.
Tracking run with wandb version 0.14.0
Run data is saved locally in /Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150408-e47j1914
# Defaults for WandbCallbackHandler.flush_tracker(...)
reset: bool = True,
finish: bool = False,
The flush_tracker function is used to log LangChain sessions to Weights & Biases. It takes in the LangChain module or agent, and logs at minimum the prompts and generations alongside the serialized form of the LangChain module to the specified Weights & Biases project. By default we reset the session as opposed to concluding the session outright.
# SCENARIO 1 - LLM
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 3)
wandb_callback.flush_tracker(llm, name="simple_sequential")
Waiting for W&B process to finish... (success).
Find logs at: ./wandb/run-20230318_150408-e47j1914/logs
VBox(children=(Label(value='Waiting for wandb.init()...\r'), FloatProgress(value=0.016745895149999985, max=1.0…
Tracking run with wandb version 0.14.0
Run data is saved locally in /Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150534-jyxma7hu
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
# SCENARIO 2 - Chain
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)
test_prompts = [
{
"title": "documentary about good video games that push the boundary of game design"
},
{"title": "cocaine bear vs heroin wolf"},
{"title": "the best in class mlops tooling"},
]
synopsis_chain.apply(test_prompts)
wandb_callback.flush_tracker(synopsis_chain, name="agent")
Waiting for W&B process to finish... (success).
Find logs at: ./wandb/run-20230318_150534-jyxma7hu/logs
VBox(children=(Label(value='Waiting for wandb.init()...\r'), FloatProgress(value=0.016736786816666675, max=1.0…
Tracking run with wandb version 0.14.0
Run data is saved locally in /Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150550-wzy59zjq
from langchain.agents import AgentType, initialize_agent, load_tools
# SCENARIO 3 - Agent with Tools
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
agent.run(
"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?",
callbacks=callbacks,
)
wandb_callback.flush_tracker(agent, reset=False, finish=True)
> Entering new AgentExecutor chain...
I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation: DiCaprio had a steady girlfriend in Camila Morrone. He had been with the model turned actress for nearly five years, as they were first said to be dating at the end of 2017. And the now 26-year-old Morrone is no stranger to Hollywood.
Thought: I need to calculate her age raised to the 0.43 power.
Action: Calculator
Action Input: 26^0.43
Observation: Answer: 4.059182145592686
Thought: I now know the final answer.
Final Answer: Leo DiCaprio's girlfriend is Camila Morrone and her current age raised to the 0.43 power is 4.059182145592686.
> Finished chain.
Waiting for W&B process to finish... (success).
Find logs at: ./wandb/run-20230318_150550-wzy59zjq/logs |
https://python.langchain.com/docs/integrations/retrievers/self_query/ | * * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:12.516Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/",
"description": "Learn about how the self-querying retriever works here.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"self_query\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:12 GMT",
"etag": "W/\"75dfb08f0afe29fc7141bba0aefdf8ec\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::zxq9h-1713753732293-3853a444bc08"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/self_query/",
"property": "og:url"
},
{
"content": "Self-querying retrievers | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Learn about how the self-querying retriever works here.",
"property": "og:description"
}
],
"title": "Self-querying retrievers | 🦜️🔗 LangChain"
} | Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/integrations/retrievers/sec_filings/ | ## SEC filing
> [SEC filing](https://www.sec.gov/edgar) is a financial statement or other formal document submitted to the U.S. Securities and Exchange Commission (SEC). Public companies, certain insiders, and broker-dealers are required to make regular `SEC filings`. Investors and financial professionals rely on these filings for information about companies they are evaluating for investment purposes.
>
> `SEC filings` data powered by [Kay.ai](https://kay.ai/) and [Cybersyn](https://www.cybersyn.com/) via [Snowflake Marketplace](https://app.snowflake.com/marketplace/providers/GZTSZAS2KCS/Cybersyn%2C%20Inc).
## Setup[](#setup "Direct link to Setup")
First, you will need to install the `kay` package. You will also need an API key: you can get one for free at [https://kay.ai](https://kay.ai/). Once you have an API key, you must set it as an environment variable `KAY_API_KEY`.
In this example, we’re going to use the `KayAiRetriever`. Take a look at the [kay notebook](https://python.langchain.com/docs/integrations/retrievers/kay/) for more detailed information about the parameters that it accepts.
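A minimal install step is sketched below; it assumes the client package is published on PyPI as `kay` (check the linked kay notebook if the name differs):

```
%pip install --upgrade --quiet kay
```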
```
# Setup API keys for Kay and OpenAI
from getpass import getpass

KAY_API_KEY = getpass()
OPENAI_API_KEY = getpass()
```
```
import os

os.environ["KAY_API_KEY"] = KAY_API_KEY
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
```
## Example[](#example "Direct link to Example")
```
from langchain.chains import ConversationalRetrievalChain
from langchain_community.retrievers import KayAiRetriever
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-3.5-turbo")
retriever = KayAiRetriever.create(
    dataset_id="company", data_types=["10-K", "10-Q"], num_contexts=6
)
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)
```
```
questions = [
    "What are patterns in Nvidia's spend over the past three quarters?",
    # "What are some recent challenges faced by the renewable energy sector?",
]
chat_history = []

for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result["answer"]))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
```
```
-> **Question**: What are patterns in Nvidia's spend over the past three quarters?

**Answer**: Based on the provided information, here are the patterns in NVIDIA's spend over the past three quarters:

1. Research and Development Expenses:
   - Q3 2022: Increased by 34% compared to Q3 2021.
   - Q1 2023: Increased by 40% compared to Q1 2022.
   - Q2 2022: Increased by 25% compared to Q2 2021.
   Overall, research and development expenses have been consistently increasing over the past three quarters.

2. Sales, General and Administrative Expenses:
   - Q3 2022: Increased by 8% compared to Q3 2021.
   - Q1 2023: Increased by 14% compared to Q1 2022.
   - Q2 2022: Decreased by 16% compared to Q2 2021.
   The pattern for sales, general and administrative expenses is not as consistent, with some quarters showing an increase and others showing a decrease.

3. Total Operating Expenses:
   - Q3 2022: Increased by 25% compared to Q3 2021.
   - Q1 2023: Increased by 113% compared to Q1 2022.
   - Q2 2022: Increased by 9% compared to Q2 2021.
   Total operating expenses have generally been increasing over the past three quarters, with a significant increase in Q1 2023.

Overall, the pattern indicates a consistent increase in research and development expenses and total operating expenses, while sales, general and administrative expenses show some fluctuations.
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:12.620Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/sec_filings/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/sec_filings/",
"description": "SEC filing is a financial statement or",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3579",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"sec_filings\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:12 GMT",
"etag": "W/\"e8081822edb4918267aefddae35b2d21\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::82lsb-1713753732344-bfdaedc1dc19"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/sec_filings/",
"property": "og:url"
},
{
"content": "SEC filing | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "SEC filing is a financial statement or",
"property": "og:description"
}
],
"title": "SEC filing | 🦜️🔗 LangChain"
} | SEC filing
SEC filing is a financial statement or other formal document submitted to the U.S. Securities and Exchange Commission (SEC). Public companies, certain insiders, and broker-dealers are required to make regular SEC filings. Investors and financial professionals rely on these filings for information about companies they are evaluating for investment purposes.
SEC filings data powered by Kay.ai and Cybersyn via Snowflake Marketplace.
Setup
First, you will need to install the kay package. You will also need an API key: you can get one for free at https://kay.ai. Once you have an API key, you must set it as an environment variable KAY_API_KEY.
In this example, we’re going to use the KayAiRetriever. Take a look at the kay notebook for more detailed information about the parameters that it accepts.
# Setup API keys for Kay and OpenAI
from getpass import getpass
KAY_API_KEY = getpass()
OPENAI_API_KEY = getpass()
import os
os.environ["KAY_API_KEY"] = KAY_API_KEY
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
Example
from langchain.chains import ConversationalRetrievalChain
from langchain_community.retrievers import KayAiRetriever
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-3.5-turbo")
retriever = KayAiRetriever.create(
dataset_id="company", data_types=["10-K", "10-Q"], num_contexts=6
)
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)
questions = [
"What are patterns in Nvidia's spend over the past three quarters?",
# "What are some recent challenges faced by the renewable energy sector?",
]
chat_history = []
for question in questions:
result = qa({"question": question, "chat_history": chat_history})
chat_history.append((question, result["answer"]))
print(f"-> **Question**: {question} \n")
print(f"**Answer**: {result['answer']} \n")
-> **Question**: What are patterns in Nvidia's spend over the past three quarters?
**Answer**: Based on the provided information, here are the patterns in NVIDIA's spend over the past three quarters:
1. Research and Development Expenses:
- Q3 2022: Increased by 34% compared to Q3 2021.
- Q1 2023: Increased by 40% compared to Q1 2022.
- Q2 2022: Increased by 25% compared to Q2 2021.
Overall, research and development expenses have been consistently increasing over the past three quarters.
2. Sales, General and Administrative Expenses:
- Q3 2022: Increased by 8% compared to Q3 2021.
- Q1 2023: Increased by 14% compared to Q1 2022.
- Q2 2022: Decreased by 16% compared to Q2 2021.
The pattern for sales, general and administrative expenses is not as consistent, with some quarters showing an increase and others showing a decrease.
3. Total Operating Expenses:
- Q3 2022: Increased by 25% compared to Q3 2021.
- Q1 2023: Increased by 113% compared to Q1 2022.
- Q2 2022: Increased by 9% compared to Q2 2021.
Total operating expenses have generally been increasing over the past three quarters, with a significant increase in Q1 2023.
Overall, the pattern indicates a consistent increase in research and development expenses and total operating expenses, while sales, general and administrative expenses show some fluctuations. |
https://python.langchain.com/docs/integrations/providers/wandb_tracing/ | ## WandB Tracing
There are two recommended ways to trace your LangChains:
1. Setting the `LANGCHAIN_WANDB_TRACING` environment variable to “true”.
2. Using a context manager with tracing\_enabled() to trace a particular block of code.
**Note** if the environment variable is set, all code will be traced, regardless of whether or not it’s within the context manager.
```
import os

os.environ["LANGCHAIN_WANDB_TRACING"] = "true"

# wandb documentation to configure wandb using env variables
# https://docs.wandb.ai/guides/track/advanced/environment-variables
# here we are configuring the wandb project name
os.environ["WANDB_PROJECT"] = "langchain-tracing"

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import wandb_tracing_enabled
from langchain_openai import OpenAI
```
```
# Agent run with tracing. Ensure that OPENAI_API_KEY is set appropriately to run this example.
llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
```
```
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)

agent.run("What is 2 raised to .123243 power?")  # this should be traced
# A url for the trace session like the following should print in your console:
# https://wandb.ai/<wandb_entity>/<wandb_project>/runs/<run_id>
# The url can be used to view the trace session in wandb.
```
```
# Now, we unset the environment variable and use a context manager.
if "LANGCHAIN_WANDB_TRACING" in os.environ:
    del os.environ["LANGCHAIN_WANDB_TRACING"]

# enable tracing using a context manager
with wandb_tracing_enabled():
    agent.run("What is 5 raised to .123243 power?")  # this should be traced

agent.run("What is 2 raised to .123243 power?")  # this should not be traced
```
```
> Entering new AgentExecutor chain...
 I need to use a calculator to solve this.
Action: Calculator
Action Input: 5^.123243
Observation: Answer: 1.2193914912400514
Thought: I now know the final answer.
Final Answer: 1.2193914912400514

> Finished chain.


> Entering new AgentExecutor chain...
 I need to use a calculator to solve this.
Action: Calculator
Action Input: 2^.123243
Observation: Answer: 1.0891804557407723
Thought: I now know the final answer.
Final Answer: 1.0891804557407723

> Finished chain.
```
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:12.731Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/wandb_tracing/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/wandb_tracing/",
"description": "There are two recommended ways to trace your LangChains:",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4631",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"wandb_tracing\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:12 GMT",
"etag": "W/\"85c439a6eb768a1e04134434ab19b77a\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::xvkrm-1713753732272-bc79f24e6123"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/wandb_tracing/",
"property": "og:url"
},
{
"content": "WandB Tracing | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "There are two recommended ways to trace your LangChains:",
"property": "og:description"
}
],
"title": "WandB Tracing | 🦜️🔗 LangChain"
} | WandB Tracing
There are two recommended ways to trace your LangChains:
Setting the LANGCHAIN_WANDB_TRACING environment variable to “true”.
Using a context manager with tracing_enabled() to trace a particular block of code.
Note if the environment variable is set, all code will be traced, regardless of whether or not it’s within the context manager.
import os
os.environ["LANGCHAIN_WANDB_TRACING"] = "true"
# wandb documentation to configure wandb using env variables
# https://docs.wandb.ai/guides/track/advanced/environment-variables
# here we are configuring the wandb project name
os.environ["WANDB_PROJECT"] = "langchain-tracing"
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import wandb_tracing_enabled
from langchain_openai import OpenAI
# Agent run with tracing. Ensure that OPENAI_API_KEY is set appropriately to run this example.
llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(
tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is 2 raised to .123243 power?") # this should be traced
# A url for the trace session like the following should print in your console:
# https://wandb.ai/<wandb_entity>/<wandb_project>/runs/<run_id>
# The url can be used to view the trace session in wandb.
# Now, we unset the environment variable and use a context manager.
if "LANGCHAIN_WANDB_TRACING" in os.environ:
del os.environ["LANGCHAIN_WANDB_TRACING"]
# enable tracing using a context manager
with wandb_tracing_enabled():
agent.run("What is 5 raised to .123243 power?") # this should be traced
agent.run("What is 2 raised to .123243 power?") # this should not be traced
> Entering new AgentExecutor chain...
I need to use a calculator to solve this.
Action: Calculator
Action Input: 5^.123243
Observation: Answer: 1.2193914912400514
Thought: I now know the final answer.
Final Answer: 1.2193914912400514
> Finished chain.
> Entering new AgentExecutor chain...
I need to use a calculator to solve this.
Action: Calculator
Action Input: 2^.123243
Observation: Answer: 1.0891804557407723
Thought: I now know the final answer.
Final Answer: 1.0891804557407723
> Finished chain.
|
https://python.langchain.com/docs/integrations/retrievers/knn/ | ## kNN
> In statistics, the [k-nearest neighbours algorithm (k-NN)](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) is a non-parametric supervised learning method first developed by `Evelyn Fix` and `Joseph Hodges` in 1951, and later expanded by `Thomas Cover`. It is used for classification and regression.
This notebook goes over how to use a retriever that under the hood uses a kNN.
Largely based on the code of [Andrej Karpathy](https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.html).
```
from langchain_community.retrievers import KNNRetriever
from langchain_openai import OpenAIEmbeddings
```
## Create New Retriever with Texts[](#create-new-retriever-with-texts "Direct link to Create New Retriever with Texts")
```
retriever = KNNRetriever.from_texts(
    ["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings()
)
```
## Use Retriever[](#use-retriever "Direct link to Use Retriever")
We can now use the retriever!
```
result = retriever.get_relevant_documents("foo")
```
```
[Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={}), Document(page_content='hello', metadata={}), Document(page_content='bar', metadata={})]
```
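As a minimal sketch of tuning the retriever (the `k` and `relevancy_threshold` values below are illustrative, not from the original notebook), the number of neighbours returned and the minimum similarity can both be set at construction time:

```
# Illustrative settings: return only the 2 nearest neighbours and drop
# results whose normalized similarity falls below the threshold.
retriever = KNNRetriever.from_texts(
    ["foo", "bar", "world", "hello", "foo bar"],
    OpenAIEmbeddings(),
    k=2,
    relevancy_threshold=0.3,
)
result = retriever.get_relevant_documents("foo")
```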
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:13.108Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/knn/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/knn/",
"description": "In statistics, the [k-nearest neighbours algorithm",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4034",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"knn\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:12 GMT",
"etag": "W/\"df5af415793caae051eef246f7a243df\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::k52mr-1713753732956-df42e249354d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/knn/",
"property": "og:url"
},
{
"content": "kNN | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "In statistics, the [k-nearest neighbours algorithm",
"property": "og:description"
}
],
"title": "kNN | 🦜️🔗 LangChain"
} | kNN
In statistics, the k-nearest neighbours algorithm (k-NN) is a non-parametric supervised learning method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression.
This notebook goes over how to use a retriever that under the hood uses a kNN.
Largely based on the code of Andrej Karpathy.
from langchain_community.retrievers import KNNRetriever
from langchain_openai import OpenAIEmbeddings
Create New Retriever with Texts
retriever = KNNRetriever.from_texts(
["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings()
)
Use Retriever
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={}),
Document(page_content='hello', metadata={}),
Document(page_content='bar', metadata={})]
|
https://python.langchain.com/docs/integrations/retrievers/kay/ | This notebook shows you how to retrieve datasets supported by [Kay](https://kay.ai/). You can currently search `SEC Filings` and `Press Releases of US companies`. Visit [kay.ai](https://kay.ai/) for the latest data drops. For any questions, join our [discord](https://discord.gg/hAnE4e5T6M) or [tweet at us](https://twitter.com/vishalrohra_)
You will also need an API key: you can get one for free at [https://kay.ai](https://kay.ai/). Once you have an API key, you must set it as an environment variable `KAY_API_KEY`.
`KayAiRetriever` has a static `.create()` factory method that takes the following arguments:
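As a rough sketch of typical usage — the `dataset_id`, `data_types`, and `num_contexts` arguments shown here are assumptions that should be confirmed against the Kay documentation — the retriever can be created and queried like this:

```
from langchain_community.retrievers import KayAiRetriever

# Assumes the KAY_API_KEY environment variable is set as described above.
#   dataset_id:   which Kay dataset to search (e.g. "company")
#   data_types:   which document types to include
#   num_contexts: how many chunks to return per query
retriever = KayAiRetriever.create(
    dataset_id="company",
    data_types=["PressRelease", "10-K"],
    num_contexts=3,
)
docs = retriever.get_relevant_documents(
    "What were the biggest strategy changes and partnerships made by Roku in 2023?"
)
```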
```
[Document(page_content='Company Name: ROKU INC\nCompany Industry: CABLE & OTHER PAY TELEVISION SERVICES\nArticle Title: Roku Is One of Fast Company\'s Most Innovative Companies for 2023\nText: The company launched several new devices, including the Roku Voice Remote Pro; upgraded its most premium player, the Roku Ultra; and expanded its products with a new line of smart home devices such as video doorbells, lights, and plugs integrated into the Roku ecosystem. Recently, the company announced it will launch Roku-branded TVs this spring to offer more choice and innovation to both consumers and Roku TV partners. Throughout 2022, Roku also updated its operating system (OS), the only OS purpose-built for TV, with more personalization features and enhancements across search, audio, and content discovery, launching The Buzz, Sports, and What to Watch, which provides tailored movie and TV recommendations on the Home Screen Menu. The company also released a new feature for streamers, Photo Streams, that allows customers to display and share photo albums through Roku streaming devices. Additionally, Roku unveiled Shoppable Ads, a new ad innovation that makes shopping on TV streaming as easy as it is on social media. Viewers simply press "OK" with their Roku remote on a shoppable ad and proceed to check out with their shipping and payment details pre-populated from Roku Pay, its proprietary payments platform. Walmart was the exclusive retailer for the launch, a first-of-its-kind partnership.', metadata={'chunk_type': 'text', 'chunk_years_mentioned': [2022, 2023], 'company_name': 'ROKU INC', 'company_sic_code_description': 'CABLE & OTHER PAY TELEVISION SERVICES', 'data_source': 'PressRelease', 'data_source_link': 'https://newsroom.roku.com/press-releases', 'data_source_publish_date': '2023-03-02T09:30:00-04:00', 'data_source_uid': '963d4a81-f58e-3093-af68-987fb1758c15', 'title': "ROKU INC | Roku Is One of Fast Company's Most Innovative Companies for 2023"}), Document(page_content='Company Name: ROKU INC\nCompany Industry: CABLE & OTHER PAY TELEVISION SERVICES\nArticle Title: Roku Is One of Fast Company\'s Most Innovative Companies for 2023\nText: Finally, Roku grew its content offering with thousands of apps and watching options for users, including content on The Roku Channel, a top five app by reach and engagement on the Roku platform in the U.S. in 2022. In November, Roku released its first feature film, "WEIRD: The Weird Al\' Yankovic Story," a biopic starring Daniel Radcliffe. Throughout the year, The Roku Channel added FAST channels from NBCUniversal and the National Hockey League, as well as an exclusive AMC channel featuring its signature drama "Mad Men." This year, the company announced a deal with Warner Bros. Discovery, launching new channels that will include "Westworld" and "The Bachelor," in addition to 2,000 hours of on-demand content. Read more about Roku\'s journey here . Fast Company\'s Most Innovative Companies issue (March/April 2023) is available online here , as well as in-app via iTunes and on newsstands beginning March 14. About Roku, Inc.\nRoku pioneered streaming to the TV. We connect users to the streaming content they love, enable content publishers to build and monetize large audiences, and provide advertisers with unique capabilities to engage consumers. Roku streaming players and TV-related audio devices are available in the U.S. and in select countries through direct retail sales and licensing arrangements with service operators. Roku TV models are available in the U.S. 
and select countries through licensing arrangements with TV OEM brands.', metadata={'chunk_type': 'text', 'chunk_years_mentioned': [2022, 2023], 'company_name': 'ROKU INC', 'company_sic_code_description': 'CABLE & OTHER PAY TELEVISION SERVICES', 'data_source': 'PressRelease', 'data_source_link': 'https://newsroom.roku.com/press-releases', 'data_source_publish_date': '2023-03-02T09:30:00-04:00', 'data_source_uid': '963d4a81-f58e-3093-af68-987fb1758c15', 'title': "ROKU INC | Roku Is One of Fast Company's Most Innovative Companies for 2023"}), Document(page_content='Company Name: ROKU INC\nCompany Industry: CABLE & OTHER PAY TELEVISION SERVICES\nArticle Title: Roku\'s New NFL Zone Gives Fans Easy Access to NFL Games Right On Time for 2023 Season\nText: In partnership with the NFL, the new NFL Zone offers viewers an easy way to find where to watch NFL live games Today, Roku (NASDAQ: ROKU ) and the National Football League (NFL) announced the recently launched NFL Zone within the Roku Sports experience to kick off the 2023 NFL season. This strategic partnership between Roku and the NFL marks the first official league-branded zone within Roku\'s Sports experience. Available now, the NFL Zone offers football fans a centralized location to find live and upcoming games, so they can spend less time figuring out where to watch the game and more time rooting for their favorite teams. Users can also tune in for weekly game previews, League highlights, and additional NFL content, all within the zone. This press release features multimedia. View the full release here: In partnership with the NFL, Roku\'s new NFL Zone offers viewers an easy way to find where to watch NFL live games (Photo: Business Wire) "Last year we introduced the Sports experience for our highly engaged sports audience, making it simpler for Roku users to watch sports programming," said Gidon Katz, President, Consumer Experience, at Roku. "As we start the biggest sports season of the year, providing easy access to NFL games and content to our millions of users is a top priority for us. We look forward to fans immersing themselves within the NFL Zone and making it their destination to find NFL games.', metadata={'chunk_type': 'text', 'chunk_years_mentioned': [2023], 'company_name': 'ROKU INC', 'company_sic_code_description': 'CABLE & OTHER PAY TELEVISION SERVICES', 'data_source': 'PressRelease', 'data_source_link': 'https://newsroom.roku.com/press-releases', 'data_source_publish_date': '2023-09-12T09:00:00-04:00', 'data_source_uid': '963d4a81-f58e-3093-af68-987fb1758c15', 'title': "ROKU INC | Roku's New NFL Zone Gives Fans Easy Access to NFL Games Right On Time for 2023 Season"})]
```
```
-> **Question**: What were the biggest strategy changes and partnerships made by Roku in 2023? **Answer**: In 2023, Roku made a strategic partnership with FreeWheel to bring Roku's leading ad tech to FreeWheel customers. This partnership aimed to drive greater interoperability and automation in the advertising-based video on demand (AVOD) space. Key highlights of this collaboration include streamlined integration of Roku's demand application programming interface (dAPI) with FreeWheel's TV platform, allowing for better inventory quality control and improved publisher yield and revenue. Additionally, publishers can now use Roku platform signals to enable advertisers to target audiences and measure campaign performance without relying on cookies. This partnership also involves the use of data clean room technology to enable the activation of additional data sets for better measurement and monetization for publishers and agencies. These partnerships and strategies aim to support Roku's growth in the AVOD market.
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:13.425Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/kay/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/kay/",
"description": "Kai Data API built for RAG 🕵️ We are curating",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4034",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"kay\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:13 GMT",
"etag": "W/\"2d6aee007df746287a8339cb9b3c32f4\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::s68rf-1713753733154-f14a09b4d0d3"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/kay/",
"property": "og:url"
},
{
"content": "Kay.ai | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Kai Data API built for RAG 🕵️ We are curating",
"property": "og:description"
}
],
"title": "Kay.ai | 🦜️🔗 LangChain"
} | This notebook shows you how to retrieve datasets supported by Kay. You can currently search SEC Filings and Press Releases of US companies. Visit kay.ai for the latest data drops. For any questions, join our discord or tweet at us
You will also need an API key: you can get one for free at https://kay.ai. Once you have an API key, you must set it as an environment variable KAY_API_KEY.
KayAiRetriever has a static .create() factory method that takes the following arguments:
[Document(page_content='Company Name: ROKU INC\nCompany Industry: CABLE & OTHER PAY TELEVISION SERVICES\nArticle Title: Roku Is One of Fast Company\'s Most Innovative Companies for 2023\nText: The company launched several new devices, including the Roku Voice Remote Pro; upgraded its most premium player, the Roku Ultra; and expanded its products with a new line of smart home devices such as video doorbells, lights, and plugs integrated into the Roku ecosystem. Recently, the company announced it will launch Roku-branded TVs this spring to offer more choice and innovation to both consumers and Roku TV partners. Throughout 2022, Roku also updated its operating system (OS), the only OS purpose-built for TV, with more personalization features and enhancements across search, audio, and content discovery, launching The Buzz, Sports, and What to Watch, which provides tailored movie and TV recommendations on the Home Screen Menu. The company also released a new feature for streamers, Photo Streams, that allows customers to display and share photo albums through Roku streaming devices. Additionally, Roku unveiled Shoppable Ads, a new ad innovation that makes shopping on TV streaming as easy as it is on social media. Viewers simply press "OK" with their Roku remote on a shoppable ad and proceed to check out with their shipping and payment details pre-populated from Roku Pay, its proprietary payments platform. Walmart was the exclusive retailer for the launch, a first-of-its-kind partnership.', metadata={'chunk_type': 'text', 'chunk_years_mentioned': [2022, 2023], 'company_name': 'ROKU INC', 'company_sic_code_description': 'CABLE & OTHER PAY TELEVISION SERVICES', 'data_source': 'PressRelease', 'data_source_link': 'https://newsroom.roku.com/press-releases', 'data_source_publish_date': '2023-03-02T09:30:00-04:00', 'data_source_uid': '963d4a81-f58e-3093-af68-987fb1758c15', 'title': "ROKU INC | Roku Is One of Fast Company's Most Innovative Companies for 2023"}),
Document(page_content='Company Name: ROKU INC\nCompany Industry: CABLE & OTHER PAY TELEVISION SERVICES\nArticle Title: Roku Is One of Fast Company\'s Most Innovative Companies for 2023\nText: Finally, Roku grew its content offering with thousands of apps and watching options for users, including content on The Roku Channel, a top five app by reach and engagement on the Roku platform in the U.S. in 2022. In November, Roku released its first feature film, "WEIRD: The Weird Al\' Yankovic Story," a biopic starring Daniel Radcliffe. Throughout the year, The Roku Channel added FAST channels from NBCUniversal and the National Hockey League, as well as an exclusive AMC channel featuring its signature drama "Mad Men." This year, the company announced a deal with Warner Bros. Discovery, launching new channels that will include "Westworld" and "The Bachelor," in addition to 2,000 hours of on-demand content. Read more about Roku\'s journey here . Fast Company\'s Most Innovative Companies issue (March/April 2023) is available online here , as well as in-app via iTunes and on newsstands beginning March 14. About Roku, Inc.\nRoku pioneered streaming to the TV. We connect users to the streaming content they love, enable content publishers to build and monetize large audiences, and provide advertisers with unique capabilities to engage consumers. Roku streaming players and TV-related audio devices are available in the U.S. and in select countries through direct retail sales and licensing arrangements with service operators. Roku TV models are available in the U.S. and select countries through licensing arrangements with TV OEM brands.', metadata={'chunk_type': 'text', 'chunk_years_mentioned': [2022, 2023], 'company_name': 'ROKU INC', 'company_sic_code_description': 'CABLE & OTHER PAY TELEVISION SERVICES', 'data_source': 'PressRelease', 'data_source_link': 'https://newsroom.roku.com/press-releases', 'data_source_publish_date': '2023-03-02T09:30:00-04:00', 'data_source_uid': '963d4a81-f58e-3093-af68-987fb1758c15', 'title': "ROKU INC | Roku Is One of Fast Company's Most Innovative Companies for 2023"}),
Document(page_content='Company Name: ROKU INC\nCompany Industry: CABLE & OTHER PAY TELEVISION SERVICES\nArticle Title: Roku\'s New NFL Zone Gives Fans Easy Access to NFL Games Right On Time for 2023 Season\nText: In partnership with the NFL, the new NFL Zone offers viewers an easy way to find where to watch NFL live games Today, Roku (NASDAQ: ROKU ) and the National Football League (NFL) announced the recently launched NFL Zone within the Roku Sports experience to kick off the 2023 NFL season. This strategic partnership between Roku and the NFL marks the first official league-branded zone within Roku\'s Sports experience. Available now, the NFL Zone offers football fans a centralized location to find live and upcoming games, so they can spend less time figuring out where to watch the game and more time rooting for their favorite teams. Users can also tune in for weekly game previews, League highlights, and additional NFL content, all within the zone. This press release features multimedia. View the full release here: In partnership with the NFL, Roku\'s new NFL Zone offers viewers an easy way to find where to watch NFL live games (Photo: Business Wire) "Last year we introduced the Sports experience for our highly engaged sports audience, making it simpler for Roku users to watch sports programming," said Gidon Katz, President, Consumer Experience, at Roku. "As we start the biggest sports season of the year, providing easy access to NFL games and content to our millions of users is a top priority for us. We look forward to fans immersing themselves within the NFL Zone and making it their destination to find NFL games.', metadata={'chunk_type': 'text', 'chunk_years_mentioned': [2023], 'company_name': 'ROKU INC', 'company_sic_code_description': 'CABLE & OTHER PAY TELEVISION SERVICES', 'data_source': 'PressRelease', 'data_source_link': 'https://newsroom.roku.com/press-releases', 'data_source_publish_date': '2023-09-12T09:00:00-04:00', 'data_source_uid': '963d4a81-f58e-3093-af68-987fb1758c15', 'title': "ROKU INC | Roku's New NFL Zone Gives Fans Easy Access to NFL Games Right On Time for 2023 Season"})]
-> **Question**: What were the biggest strategy changes and partnerships made by Roku in 2023?
**Answer**: In 2023, Roku made a strategic partnership with FreeWheel to bring Roku's leading ad tech to FreeWheel customers. This partnership aimed to drive greater interoperability and automation in the advertising-based video on demand (AVOD) space. Key highlights of this collaboration include streamlined integration of Roku's demand application programming interface (dAPI) with FreeWheel's TV platform, allowing for better inventory quality control and improved publisher yield and revenue. Additionally, publishers can now use Roku platform signals to enable advertisers to target audiences and measure campaign performance without relying on cookies. This partnership also involves the use of data clean room technology to enable the activation of additional data sets for better measurement and monetization for publishers and agencies. These partnerships and strategies aim to support Roku's growth in the AVOD market. |
https://python.langchain.com/docs/integrations/providers/weather/ | We must set up the `OpenWeatherMap API token`.
```
from langchain_community.document_loaders import WeatherDataLoader
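# A minimal usage sketch (not part of the original page): load current weather
# documents for a few places. Requires the `pyowm` package and an OpenWeatherMap
# API key; the place names below are illustrative.
from getpass import getpass

OPENWEATHERMAP_API_KEY = getpass("OpenWeatherMap API key: ")

loader = WeatherDataLoader.from_params(
    places=["chennai", "vellore"],
    openweathermap_api_key=OPENWEATHERMAP_API_KEY,
)
documents = loader.load()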
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:13.584Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/weather/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/weather/",
"description": "OpenWeatherMap is an open-source weather service provider.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3585",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"weather\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:13 GMT",
"etag": "W/\"dc3c47665e8856984e4f8406e18ef36a\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::vp7cr-1713753733514-39e0ff208938"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/weather/",
"property": "og:url"
},
{
"content": "Weather | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "OpenWeatherMap is an open-source weather service provider.",
"property": "og:description"
}
],
"title": "Weather | 🦜️🔗 LangChain"
} | We must set up the OpenWeatherMap API token.
from langchain_community.document_loaders import WeatherDataLoader |
https://python.langchain.com/docs/integrations/retrievers/llmlingua/ | This notebook shows how to use LLMLingua as a document compressor.
```
[notice] A new release of pip is available: 23.3.2 -> 24.0
[notice] To update, run: python -m pip install --upgrade pip
Note: you may need to restart the kernel to use updated packages.
```
Let’s start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs.
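A minimal sketch of that setup — assuming the speech is available locally as `state_of_the_union.txt`, with FAISS as the vector store and illustrative splitter settings — looks roughly like this:

```
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load and chunk the speech (path and chunk sizes are assumptions).
documents = TextLoader("state_of_the_union.txt").load()
texts = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100).split_documents(documents)

# Build the base retriever, asking for 20 documents per query.
embedding = OpenAIEmbeddings()
retriever = FAISS.from_documents(texts, embedding).as_retriever(search_kwargs={"k": 20})
```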
```
Document 1:One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.----------------------------------------------------------------------------------------------------Document 2:As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.----------------------------------------------------------------------------------------------------Document 3:A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.----------------------------------------------------------------------------------------------------Document 4:He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.----------------------------------------------------------------------------------------------------Document 5:But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. Vice President Harris and I ran for office with a new economic vision for America. Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up and the middle out, not from the top down.----------------------------------------------------------------------------------------------------Document 6:And tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. By the end of this year, the deficit will be down to less than half what it was before I took office. The only president ever to cut the deficit by more than one trillion dollars in a single year. Lowering your costs also means demanding more competition. I’m a capitalist, but capitalism without competition isn’t capitalism. It’s exploitation—and it drives up prices.----------------------------------------------------------------------------------------------------Document 7:I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. 
I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety. So let’s not abandon our streets. Or choose between safety and equal justice.----------------------------------------------------------------------------------------------------Document 8:As I’ve told Xi Jinping, it is never a good bet to bet against the American people. We’ll create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. And we’ll do it all to withstand the devastating effects of the climate crisis and promote environmental justice.----------------------------------------------------------------------------------------------------Document 9:Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny.----------------------------------------------------------------------------------------------------Document 10:As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.” It’s time. But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. Inflation is robbing them of the gains they might otherwise feel. I get it. That’s why my top priority is getting prices under control.----------------------------------------------------------------------------------------------------Document 11:I’m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve. And fourth, let’s end cancer as we know it. This is personal to me and Jill, to Kamala, and to so many of you. Cancer is the #2 cause of death in America–second only to heart disease.----------------------------------------------------------------------------------------------------Document 12:Headaches. Numbness. Dizziness. A cancer that would put them in a flag-draped coffin. I know. One of those soldiers was my son Major Beau Biden. We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. But I’m committed to finding out everything we can. Committed to military families like Danielle Robinson from Ohio. The widow of Sergeant First Class Heath Robinson.----------------------------------------------------------------------------------------------------Document 13:He will never extinguish their love of freedom. He will never weaken the resolve of the free world. We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. The pandemic has been punishing. And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. 
I understand.----------------------------------------------------------------------------------------------------Document 14:When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we haven’t done in a long time: build a better America. For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation. And I know you’re tired, frustrated, and exhausted. But I also know this.----------------------------------------------------------------------------------------------------Document 15:My plan to fight inflation will lower your costs and lower the deficit. 17 Nobel laureates in economics say my plan will ease long-term inflationary pressures. Top business leaders and most Americans support my plan. And here’s the plan: First – cut the cost of prescription drugs. Just look at insulin. One in ten Americans has diabetes. In Virginia, I met a 13-year-old boy named Joshua Davis.----------------------------------------------------------------------------------------------------Document 16:And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid epidemic. There is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery.----------------------------------------------------------------------------------------------------Document 17:My plan will not only lower costs to give families a fair shot, it will lower the deficit. The previous Administration not only ballooned the deficit with tax cuts for the very wealthy and corporations, it undermined the watchdogs whose job was to keep pandemic relief funds from being wasted. But in my administration, the watchdogs have been welcomed back. We’re going after the criminals who stole billions in relief money meant for small businesses and millions of Americans.----------------------------------------------------------------------------------------------------Document 18:So let’s not abandon our streets. Or choose between safety and equal justice. Let’s come together to protect our communities, restore trust, and hold law enforcement accountable. That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.----------------------------------------------------------------------------------------------------Document 19:I understand. I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. That’s why one of the first things I did as President was fight to pass the American Rescue Plan. Because people were hurting. We needed to act, and we did. Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.----------------------------------------------------------------------------------------------------Document 20:And we will, as one people. One America. The United States of America. May God bless you all. May God protect our troops.
```
Now let’s wrap our base retriever with a `ContextualCompressionRetriever`, using `LLMLinguaCompressor` as a compressor.
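A minimal sketch of that wrapping — the `openai-community/gpt2` model on CPU is an illustrative choice; any LLMLingua-compatible model can be used:

```
from langchain.retrievers import ContextualCompressionRetriever
from langchain_community.document_compressors import LLMLinguaCompressor

compressor = LLMLinguaCompressor(model_name="openai-community/gpt2", device_map="cpu")
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever
)

compressed_docs = compression_retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson?"
)
```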
```
Document 1:One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.----------------------------------------------------------------------------------------------------Document 2:. Numbness. Dizziness.A that would them in a-draped coffin. I One of those soldiers was my Biden We don’t know for sure if a burn pit the cause of brain, or the diseases of so many of our troops But I’m committed to finding out everything we can Committed to military families like Danielle Robinson from Ohio The widow of First Robinson.----------------------------------------------------------------------------------------------------Document 3:<ref#> let� Or between equal Let’ to protect, restore law accountable why the Justice Department cameras bannedhold and restricted its officers. <----------------------------------------------------------------------------------------------------Document 4:<# The Sergeant Class Combat froms widow us toBut burn pits ravaged Heath’s lungs and body. Danielle says Heath was a fighter to the very end.
```
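The question-answering result below can be produced by pointing a QA chain at the compression retriever; a minimal sketch, assuming `RetrievalQA` with a zero-temperature chat model:

```
from langchain.chains import RetrievalQA
from langchain_openai import ChatOpenAI

chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0), retriever=compression_retriever
)
chain.invoke({"query": "What did the president say about Ketanji Brown Jackson"})
```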
```
{'query': 'What did the president say about Ketanji Brown Jackson', 'result': "The President mentioned that Ketanji Brown Jackson is one of the nation's top legal minds and will continue Justice Breyer's legacy of excellence."}
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:13.631Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/llmlingua/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/llmlingua/",
"description": "LLMLingua utilizes a",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3581",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"llmlingua\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:13 GMT",
"etag": "W/\"5185d3c0c92d70ba095841110da537b1\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::68vtp-1713753733560-b9e15f8e9c20"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/llmlingua/",
"property": "og:url"
},
{
"content": "LLMLingua Document Compressor | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "LLMLingua utilizes a",
"property": "og:description"
}
],
"title": "LLMLingua Document Compressor | 🦜️🔗 LangChain"
} | This notebook shows how to use LLMLingua as a document compressor.
[notice] A new release of pip is available: 23.3.2 -> 24.0
[notice] To update, run: python -m pip install --upgrade pip
Note: you may need to restart the kernel to use updated packages.
Let’s start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs.
Document 1:
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
----------------------------------------------------------------------------------------------------
Document 2:
As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.
While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.
----------------------------------------------------------------------------------------------------
Document 3:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
----------------------------------------------------------------------------------------------------
Document 4:
He met the Ukrainian people.
From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.
In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.
----------------------------------------------------------------------------------------------------
Document 5:
But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century.
Vice President Harris and I ran for office with a new economic vision for America.
Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up
and the middle out, not from the top down.
----------------------------------------------------------------------------------------------------
Document 6:
And tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud.
By the end of this year, the deficit will be down to less than half what it was before I took office.
The only president ever to cut the deficit by more than one trillion dollars in a single year.
Lowering your costs also means demanding more competition.
I’m a capitalist, but capitalism without competition isn’t capitalism.
It’s exploitation—and it drives up prices.
----------------------------------------------------------------------------------------------------
Document 7:
I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.
I’ve worked on these issues a long time.
I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
So let’s not abandon our streets. Or choose between safety and equal justice.
----------------------------------------------------------------------------------------------------
Document 8:
As I’ve told Xi Jinping, it is never a good bet to bet against the American people.
We’ll create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America.
And we’ll do it all to withstand the devastating effects of the climate crisis and promote environmental justice.
----------------------------------------------------------------------------------------------------
Document 9:
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.
Last year COVID-19 kept us apart. This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
And with an unwavering resolve that freedom will always triumph over tyranny.
----------------------------------------------------------------------------------------------------
Document 10:
As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.”
It’s time.
But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills.
Inflation is robbing them of the gains they might otherwise feel.
I get it. That’s why my top priority is getting prices under control.
----------------------------------------------------------------------------------------------------
Document 11:
I’m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve.
And fourth, let’s end cancer as we know it.
This is personal to me and Jill, to Kamala, and to so many of you.
Cancer is the #2 cause of death in America–second only to heart disease.
----------------------------------------------------------------------------------------------------
Document 12:
Headaches. Numbness. Dizziness.
A cancer that would put them in a flag-draped coffin.
I know.
One of those soldiers was my son Major Beau Biden.
We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops.
But I’m committed to finding out everything we can.
Committed to military families like Danielle Robinson from Ohio.
The widow of Sergeant First Class Heath Robinson.
----------------------------------------------------------------------------------------------------
Document 13:
He will never extinguish their love of freedom. He will never weaken the resolve of the free world.
We meet tonight in an America that has lived through two of the hardest years this nation has ever faced.
The pandemic has been punishing.
And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more.
I understand.
----------------------------------------------------------------------------------------------------
Document 14:
When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we haven’t done in a long time: build a better America.
For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation.
And I know you’re tired, frustrated, and exhausted.
But I also know this.
----------------------------------------------------------------------------------------------------
Document 15:
My plan to fight inflation will lower your costs and lower the deficit.
17 Nobel laureates in economics say my plan will ease long-term inflationary pressures. Top business leaders and most Americans support my plan. And here’s the plan:
First – cut the cost of prescription drugs. Just look at insulin. One in ten Americans has diabetes. In Virginia, I met a 13-year-old boy named Joshua Davis.
----------------------------------------------------------------------------------------------------
Document 16:
And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things.
So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together.
First, beat the opioid epidemic.
There is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery.
----------------------------------------------------------------------------------------------------
Document 17:
My plan will not only lower costs to give families a fair shot, it will lower the deficit.
The previous Administration not only ballooned the deficit with tax cuts for the very wealthy and corporations, it undermined the watchdogs whose job was to keep pandemic relief funds from being wasted.
But in my administration, the watchdogs have been welcomed back.
We’re going after the criminals who stole billions in relief money meant for small businesses and millions of Americans.
----------------------------------------------------------------------------------------------------
Document 18:
So let’s not abandon our streets. Or choose between safety and equal justice.
Let’s come together to protect our communities, restore trust, and hold law enforcement accountable.
That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.
----------------------------------------------------------------------------------------------------
Document 19:
I understand.
I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it.
That’s why one of the first things I did as President was fight to pass the American Rescue Plan.
Because people were hurting. We needed to act, and we did.
Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.
----------------------------------------------------------------------------------------------------
Document 20:
And we will, as one people.
One America.
The United States of America.
May God bless you all. May God protect our troops.
Now let’s wrap our base retriever with a ContextualCompressionRetriever, using LLMLinguaCompressor as a compressor.
Document 1:
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
----------------------------------------------------------------------------------------------------
Document 2:
. Numbness. Dizziness.A that would them in a-draped coffin. I One of those soldiers was my Biden We don’t know for sure if a burn pit the cause of brain, or the diseases of so many of our troops But I’m committed to finding out everything we can Committed to military families like Danielle Robinson from Ohio The widow of First Robinson.
----------------------------------------------------------------------------------------------------
Document 3:
<ref#> let� Or between equal Let’ to protect, restore law accountable why the Justice Department cameras bannedhold and restricted its officers. <
----------------------------------------------------------------------------------------------------
Document 4:
<# The Sergeant Class Combat froms widow us toBut burn pits ravaged Heath’s lungs and body.
Danielle says Heath was a fighter to the very end.
{'query': 'What did the president say about Ketanji Brown Jackson',
'result': "The President mentioned that Ketanji Brown Jackson is one of the nation's top legal minds and will continue Justice Breyer's legacy of excellence."} |
https://python.langchain.com/docs/integrations/retrievers/self_query/astradb/ | ## Astra DB (Cassandra)
> [DataStax Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless vector-capable database built on `Cassandra` and made conveniently available through an easy-to-use JSON API.
In the walkthrough, we’ll demo the `SelfQueryRetriever` with an `Astra DB` vector store.
## Creating an Astra DB vector store[](#creating-an-astra-db-vector-store "Direct link to Creating an Astra DB vector store")
First we’ll want to create an Astra DB VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
NOTE: The self-query retriever requires you to have `lark` installed (`pip install lark`). We also need the `astrapy` package.
```
%pip install --upgrade --quiet lark astrapy langchain-openai
```
We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key.
```
import os
from getpass import getpass
from langchain_openai.embeddings import OpenAIEmbeddings
os.environ["OPENAI_API_KEY"] = getpass("OpenAI API Key:")
embeddings = OpenAIEmbeddings()
```
Create the Astra DB VectorStore:
* the API Endpoint looks like `https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com`
* the Token looks like `AstraCS:6gBhNmsk135....`
```
ASTRA_DB_API_ENDPOINT = input("ASTRA_DB_API_ENDPOINT = ")
ASTRA_DB_APPLICATION_TOKEN = getpass("ASTRA_DB_APPLICATION_TOKEN = ")
```
```
from langchain.vectorstores import AstraDB
from langchain_core.documents import Document
docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "year": 1979,
            "director": "Andrei Tarkovsky",
            "genre": "science fiction",
            "rating": 9.9,
        },
    ),
]
vectorstore = AstraDB.from_documents(
    docs,
    embeddings,
    collection_name="astra_self_query_demo",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)
```
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```
## Testing it out[](#testing-it-out "Direct link to Testing it out")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs?")
```
```
# This example specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
```
```
# This example only specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
```
```
# This example specifies a composite filter
retriever.get_relevant_documents(
    "What's a highly rated (above 8.5), science fiction movie ?"
)
```
```
# This example specifies a query and composite filter
retriever.get_relevant_documents(
    "What's a movie about toys after 1990 but before 2005, and is animated"
)
```
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self query retriever to specify `k`: the number of documents to fetch.
We can do this by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    verbose=True,
    enable_limit=True,
)
```
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are two movies about dinosaurs?")
```
## Cleanup[](#cleanup "Direct link to Cleanup")
If you want to completely delete the collection from your Astra DB instance, run this.
_(You will lose the data you stored in it.)_
```
vectorstore.delete_collection()
```
## Deep Lake
> [Deep Lake](https://www.activeloop.ai/) is a multimodal database for building AI applications. [Deep Lake](https://github.com/activeloopai/deeplake) is a database for AI: it stores vectors, images, texts, videos, and more, works with LLMs/LangChain, lets you store, query, version, and visualize any AI data, and can stream data in real time to PyTorch/TensorFlow.
In the notebook, we’ll demo the `SelfQueryRetriever` wrapped around a `Deep Lake` vector store.
## Creating a Deep Lake vector store[](#creating-a-deep-lake-vector-store "Direct link to Creating a Deep Lake vector store")
First we’ll want to create a Deep Lake vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
**Note:** The self-query retriever requires you to have `lark` installed (`pip install lark`). We also need the `deeplake` package.
```
%pip install --upgrade --quiet lark
```
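The note above also calls for the `deeplake` package itself; if it is not already installed, a similar command covers it (added here for completeness, not part of the original notebook):

```
%pip install --upgrade --quiet deeplake
```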
```
# in case some queries fail, consider installing libdeeplake manually
%pip install --upgrade --quiet libdeeplake
```
We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key.
```
import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
os.environ["ACTIVELOOP_TOKEN"] = getpass.getpass("Activeloop token:")
```
```
from langchain_community.vectorstores import DeepLake
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
```
```
docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "year": 1979,
            "director": "Andrei Tarkovsky",
            "genre": "science fiction",
            "rating": 9.9,
        },
    ),
]

username_or_org = "<USERNAME_OR_ORG>"
vectorstore = DeepLake.from_documents(
    docs,
    embeddings,
    dataset_path=f"hub://{username_or_org}/self_queery",
    overwrite=True,
)
```
```
Your Deep Lake dataset has been successfully created!
Dataset(path='hub://adilkhan/self_queery', tensors=['embedding', 'id', 'metadata', 'text'])

  tensor      htype       shape      dtype  compression
 ---------  ---------  -----------  -------  -----------
 embedding  embedding   (6, 1536)   float32     None
    id        text        (6, 1)      str       None
 metadata     json        (6, 1)      str       None
   text       text        (6, 1)      str       None
```
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"

llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```
## Testing it out[](#testing-it-out "Direct link to Testing it out")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
```
/home/ubuntu/langchain_activeloop/langchain/libs/langchain/langchain/chains/llm.py:279: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
  warnings.warn(
```
```
query='dinosaur' filter=None limit=None
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),
 Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}),
 Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}),
 Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]
```
```
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
# if this example errors out, consider installing libdeeplake manually (`pip install libdeeplake`) and then restart the notebook
```
```
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None
```
```
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]
```
```
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
```
```
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
```
```
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]
```
```
# This example specifies a composite filter
retriever.get_relevant_documents(
    "What's a highly rated (above 8.5) science fiction film?"
)
```
```
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction')]) limit=None
```
```
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]
```
```
# This example specifies a query and composite filter
retriever.get_relevant_documents(
    "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
```
```
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) limit=None
```
```
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
```
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self query retriever to specify `k`: the number of documents to fetch.
We can do this by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
```
```
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
```
```
query='dinosaur' filter=None limit=2
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
```
## WhatsApp

[WhatsApp](https://www.whatsapp.com/) (also called `WhatsApp Messenger`) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content.
There isn't any special setup for it.
```
from langchain_community.document_loaders import WhatsAppChatLoader
```
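As a rough usage sketch, the loader can be pointed at a chat history exported from the WhatsApp app; the file path below is a hypothetical example, not part of the original page.

```
from langchain_community.document_loaders import WhatsAppChatLoader

# Hypothetical path to a chat history exported from WhatsApp as a .txt file
loader = WhatsAppChatLoader("example_data/whatsapp_chat.txt")

# Load the chat into LangChain Document objects
docs = loader.load()
print(docs[0].page_content[:200])
```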
## Weaviate

`Weaviate` is a low-latency vector search engine with out-of-the-box support for different media types (text, images, etc.). It offers Semantic Search, Question-Answer Extraction, Classification, Customizable Models (PyTorch/TensorFlow/Keras), etc. Built from scratch in Go, Weaviate stores both objects and vectors, allowing for combining vector search with structured filtering and the fault tolerance of a cloud-native database. It is all accessible through GraphQL, REST, and various client-side programming languages.
```
pip install langchain-weaviate
```
There exists a wrapper around `Weaviate` indexes, allowing you to use it as a vectorstore, whether for semantic search or example selection.
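A minimal sketch of the vectorstore wrapper is shown below; it assumes a Weaviate instance running locally, the v4 `weaviate-client`, and the `WeaviateVectorStore` class from the `langchain-weaviate` package (check the package docs for the exact names in your version).

```
import weaviate
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_weaviate.vectorstores import WeaviateVectorStore

# Assumes a Weaviate instance reachable on localhost (e.g. started via Docker)
weaviate_client = weaviate.connect_to_local()

docs = [Document(page_content="Weaviate stores both objects and vectors.")]

db = WeaviateVectorStore.from_documents(docs, OpenAIEmbeddings(), client=weaviate_client)

# Semantic search over the stored documents
results = db.similarity_search("How does Weaviate store data?", k=1)
```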
## LOTR (Merger Retriever)

`Lord of the Retrievers (LOTR)`, also known as `MergerRetriever`, takes a list of retrievers as input and merges the results of their get_relevant_documents() methods into a single list. The merged results will be a list of documents that are relevant to the query and that have been ranked by the different retrievers.
The `MergerRetriever` class can be used to improve the accuracy of document retrieval in a number of ways. First, it can combine the results of multiple retrievers, which can help to reduce the risk of bias in the results. Second, it can rank the results of the different retrievers, which can help to ensure that the most relevant documents are returned first.
```
import os

import chromadb
from langchain.retrievers import (
    ContextualCompressionRetriever,
    DocumentCompressorPipeline,
    MergerRetriever,
)
from langchain_chroma import Chroma
from langchain_community.document_transformers import (
    EmbeddingsClusteringFilter,
    EmbeddingsRedundantFilter,
)
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_openai import OpenAIEmbeddings

# Get 3 diff embeddings.
all_mini = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
multi_qa_mini = HuggingFaceEmbeddings(model_name="multi-qa-MiniLM-L6-dot-v1")
filter_embeddings = OpenAIEmbeddings()

ABS_PATH = os.path.dirname(os.path.abspath(__file__))
DB_DIR = os.path.join(ABS_PATH, "db")

# Instantiate 2 diff chromadb indexes, each one with a diff embedding.
client_settings = chromadb.config.Settings(
    is_persistent=True,
    persist_directory=DB_DIR,
    anonymized_telemetry=False,
)
db_all = Chroma(
    collection_name="project_store_all",
    persist_directory=DB_DIR,
    client_settings=client_settings,
    embedding_function=all_mini,
)
db_multi_qa = Chroma(
    collection_name="project_store_multi",
    persist_directory=DB_DIR,
    client_settings=client_settings,
    embedding_function=multi_qa_mini,
)

# Define 2 diff retrievers with 2 diff embeddings and diff search type.
retriever_all = db_all.as_retriever(
    search_type="similarity", search_kwargs={"k": 5, "include_metadata": True}
)
retriever_multi_qa = db_multi_qa.as_retriever(
    search_type="mmr", search_kwargs={"k": 5, "include_metadata": True}
)

# The Lord of the Retrievers will hold the output of both retrievers and can be
# used as any other retriever on different types of chains.
lotr = MergerRetriever(retrievers=[retriever_all, retriever_multi_qa])
```
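Once constructed, `lotr` can be queried like any other retriever; here is a minimal sketch with an arbitrary example query:

```
# Query the merged retriever; results from both underlying retrievers
# are combined into a single list of documents.
merged_docs = lotr.get_relevant_documents("What is the scope of the project?")
for doc in merged_docs:
    print(doc.page_content[:80])
```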
```
# We can remove redundant results from both retrievers using yet another embedding.
# Using multiple embeddings in different steps could help reduce biases.
filter = EmbeddingsRedundantFilter(embeddings=filter_embeddings)
pipeline = DocumentCompressorPipeline(transformers=[filter])
compression_retriever = ContextualCompressionRetriever(
    base_compressor=pipeline, base_retriever=lotr
)
```
```
# This filter will divide the documents vectors into clusters or "centers" of meaning.
# Then it will pick the closest document to that center for the final results.
# By default the result document will be ordered/grouped by clusters.
filter_ordered_cluster = EmbeddingsClusteringFilter(
    embeddings=filter_embeddings,
    num_clusters=10,
    num_closest=1,
)

# If you want the final document to be ordered by the original retriever scores
# you need to add the "sorted" parameter.
filter_ordered_by_retriever = EmbeddingsClusteringFilter(
    embeddings=filter_embeddings,
    num_clusters=10,
    num_closest=1,
    sorted=True,
)

pipeline = DocumentCompressorPipeline(transformers=[filter_ordered_by_retriever])
compression_retriever = ContextualCompressionRetriever(
    base_compressor=pipeline, base_retriever=lotr
)
```
No matter the architecture of your model, there is a substantial performance degradation when you include 10+ retrieved documents. In brief: when models must access relevant information in the middle of long contexts, they tend to ignore the provided documents. See: [https://arxiv.org/abs//2307.03172](https://arxiv.org/abs//2307.03172)
```
# You can use an additional document transformer to reorder documents after removing redundancy.
from langchain_community.document_transformers import LongContextReorder

filter = EmbeddingsRedundantFilter(embeddings=filter_embeddings)
reordering = LongContextReorder()
pipeline = DocumentCompressorPipeline(transformers=[filter, reordering])
compression_retriever_reordered = ContextualCompressionRetriever(
    base_compressor=pipeline, base_retriever=lotr
)
```
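Querying the reordered retriever works the same way as with any other retriever; a short sketch with a hypothetical query:

```
# Hypothetical query; after reordering, the most relevant documents sit at the
# beginning and end of the list, which counteracts the "lost in the middle"
# effect discussed above.
reordered_docs = compression_retriever_reordered.get_relevant_documents(
    "What is the current status of the project?"
)
```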
## Chroma
> [Chroma](https://docs.trychroma.com/getting-started) is a vector database for building AI applications with embeddings.
In the notebook, we’ll demo the `SelfQueryRetriever` wrapped around a `Chroma` vector store.
## Creating a Chroma vector store[](#creating-a-chroma-vector-store "Direct link to Creating a Chroma vector store")
First we’ll want to create a Chroma vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
**Note:** The self-query retriever requires you to have `lark` installed (`pip install lark`). We also need the `langchain-chroma` package.
```
%pip install --upgrade --quiet lark
```
```
%pip install --upgrade --quiet langchain-chroma
```
We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key.
```
import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
```
```
from langchain_chroma import Chroma
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
```
```
docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "year": 1979,
            "director": "Andrei Tarkovsky",
            "genre": "science fiction",
            "rating": 9.9,
        },
    ),
]
vectorstore = Chroma.from_documents(docs, embeddings)
```
```
Using embedded DuckDB without persistence: data will be transient
```
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"

llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```
## Testing it out[](#testing-it-out "Direct link to Testing it out")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
```
query='dinosaur' filter=None
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),
 Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}),
 Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}),
 Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2})]
```
```
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
```
```
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)
```
```
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]
```
```
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
```
```
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig')
```
```
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]
```
```
# This example specifies a composite filter
retriever.get_relevant_documents(
    "What's a highly rated (above 8.5) science fiction film?"
)
```
```
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)])
```
```
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]
```
```
# This example specifies a query and composite filter
retriever.get_relevant_documents(
    "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
```
```
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')])
```
```
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
```
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self query retriever to specify `k`: the number of documents to fetch.
We can do this by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
```
```
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
```
```
query='dinosaur' filter=None
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),
 Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}),
 Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}),
 Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2})]
```
## DashVector
> [DashVector](https://help.aliyun.com/document_detail/2510225.html) is a fully managed vector DB service that supports high-dimension dense and sparse vectors, real-time insertion and filtered search. It is built to scale automatically and can adapt to different application requirements. The vector retrieval service `DashVector` is based on the `Proxima` core, an efficient vector engine independently developed by `DAMO Academy`, and provides a cloud-native, fully managed vector retrieval service with horizontal expansion capabilities. `DashVector` exposes its vector management, vector query, and other capabilities through a simple and easy-to-use SDK/API interface, so it can be quickly integrated by upper-layer AI applications, providing the efficient vector retrieval capabilities required by a variety of application scenarios, including the large-model ecosystem, multi-modal AI search, and molecular structure analysis.
In this notebook, we’ll demo the `SelfQueryRetriever` with a `DashVector` vector store.
## Create DashVector vectorstore[](#create-dashvector-vectorstore "Direct link to Create DashVector vectorstore")
First we’ll want to create a `DashVector` VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
To use DashVector, you have to have `dashvector` package installed, and you must have an API key and an Environment. Here are the [installation instructions](https://help.aliyun.com/document_detail/2510223.html).
NOTE: The self-query retriever requires you to have `lark` package installed.
```
%pip install --upgrade --quiet lark dashvector
```
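One simple way to provide the API key is through an environment variable before creating the client; a minimal sketch (the variable name matches what the client code below reads):

```
import getpass
import os

# Provide the DashVector API key; the client created below reads it from os.environ
os.environ["DASHVECTOR_API_KEY"] = getpass.getpass("DashVector API Key:")
```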
```
import os

import dashvector

client = dashvector.Client(api_key=os.environ["DASHVECTOR_API_KEY"])
```
```
from langchain_community.embeddings import DashScopeEmbeddings
from langchain_community.vectorstores import DashVector
from langchain_core.documents import Document

embeddings = DashScopeEmbeddings()

# create DashVector collection
client.create("langchain-self-retriever-demo", dimension=1536)
```
```
docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": "action"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "year": 1979,
            "director": "Andrei Tarkovsky",
            "genre": "science fiction",
            "rating": 9.9,
        },
    ),
]
vectorstore = DashVector.from_documents(
    docs, embeddings, collection_name="langchain-self-retriever-demo"
)
```
## Create your self-querying retriever[](#create-your-self-querying-retriever "Direct link to Create your self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_community.llms import Tongyi

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = Tongyi(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```
## Testing it out[](#testing-it-out "Direct link to Testing it out")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
```
query='dinosaurs' filter=None limit=None
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.699999809265137, 'genre': 'action'}),
 Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}),
 Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.199999809265137}),
 Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.600000381469727})]
```
```
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
```
```
query=' ' filter=Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5) limit=None
```
```
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'director': 'Andrei Tarkovsky', 'rating': 9.899999618530273, 'genre': 'science fiction'}),
 Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.600000381469727})]
```
```
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
```
```
query='Greta Gerwig' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
```
```
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.300000190734863})]
```
```
# This example specifies a composite filter
retriever.get_relevant_documents(
    "What's a highly rated (above 8.5) science fiction film?"
)
```
```
query='science fiction' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)]) limit=None
```
```
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'director': 'Andrei Tarkovsky', 'rating': 9.899999618530273, 'genre': 'science fiction'})]
```
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self query retriever to specify `k`: the number of documents to fetch.
We can do this by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
```
```
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
```
```
query='dinosaurs' filter=None limit=2
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.699999809265137, 'genre': 'action'}),
 Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
```
https://python.langchain.com/docs/integrations/retrievers/self_query/databricks_vector_search/ | ## Databricks Vector Search
> [Databricks Vector Search](https://docs.databricks.com/en/generative-ai/vector-search.html) is a serverless similarity search engine that allows you to store a vector representation of your data, including metadata, in a vector database. With Vector Search, you can create auto-updating vector search indexes from Delta tables managed by Unity Catalog and query them with a simple API to return the most similar vectors.
In this walkthrough, we’ll demo the `SelfQueryRetriever` with a Databricks Vector Search index.
## create Databricks vector store index[](#create-databricks-vector-store-index "Direct link to create Databricks vector store index")
First we’ll want to create a Databricks vector store index and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
**Note:** The self-query retriever requires you to have `lark` installed (`pip install lark`) along with integration-specific requirements.
```
%pip install --upgrade --quiet langchain-core databricks-vectorsearch langchain-openai tiktoken
```
```
Note: you may need to restart the kernel to use updated packages.
```
We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key.
```
import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
databricks_host = getpass.getpass("Databricks host:")
databricks_token = getpass.getpass("Databricks token:")
```
```
OpenAI API Key: ········
Databricks host: ········
Databricks token: ········
```
```
from databricks.vector_search.client import VectorSearchClient
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
emb_dim = len(embeddings.embed_query("hello"))

vector_search_endpoint_name = "vector_search_demo_endpoint"

vsc = VectorSearchClient(
    workspace_url=databricks_host, personal_access_token=databricks_token
)
vsc.create_endpoint(name=vector_search_endpoint_name, endpoint_type="STANDARD")
```
```
[NOTICE] Using a Personal Authentication Token (PAT). Recommended for development only. For improved performance, please use Service Principal based authentication. To disable this message, pass disable_notice=True to VectorSearchClient().
```
```
index_name = "udhay_demo.10x.demo_index"

index = vsc.create_direct_access_index(
    endpoint_name=vector_search_endpoint_name,
    index_name=index_name,
    primary_key="id",
    embedding_dimension=emb_dim,
    embedding_vector_column="text_vector",
    schema={
        "id": "string",
        "page_content": "string",
        "year": "int",
        "rating": "float",
        "genre": "string",
        "text_vector": "array<float>",
    },
)

index.describe()
```
```
index = vsc.get_index(endpoint_name=vector_search_endpoint_name, index_name=index_name)

index.describe()
```
```
from langchain_core.documents import Document

docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"id": 1, "year": 1993, "rating": 7.7, "genre": "action"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"id": 2, "year": 2010, "genre": "thriller", "rating": 8.2},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"id": 3, "year": 2019, "rating": 8.3, "genre": "drama"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={"id": 4, "year": 1979, "rating": 9.9, "genre": "science fiction"},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"id": 5, "year": 2006, "genre": "thriller", "rating": 9.0},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"id": 6, "year": 1995, "genre": "animated", "rating": 9.3},
    ),
]
```
```
from langchain_community.vectorstores import DatabricksVectorSearch

vector_store = DatabricksVectorSearch(
    index,
    text_column="page_content",
    embedding=embeddings,
    columns=["year", "rating", "genre"],
)
```
```
vector_store.add_documents(docs)
```
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vector_store, document_content_description, metadata_field_info, verbose=True
)
```
## Test it out[](#test-it-out "Direct link to Test it out")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993.0, 'rating': 7.7, 'genre': 'action', 'id': 1.0}),
 Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995.0, 'rating': 9.3, 'genre': 'animated', 'id': 6.0}),
 Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979.0, 'rating': 9.9, 'genre': 'science fiction', 'id': 4.0}),
 Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006.0, 'rating': 9.0, 'genre': 'thriller', 'id': 5.0})]
```
```
# This example specifies a filter
retriever.get_relevant_documents("What are some highly rated movies (above 9)?")
```
```
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995.0, 'rating': 9.3, 'genre': 'animated', 'id': 6.0}),
 Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979.0, 'rating': 9.9, 'genre': 'science fiction', 'id': 4.0})]
```
```
# This example specifies both a relevant query and a filter
retriever.get_relevant_documents("What are the thriller movies that are highly rated?")
```
```
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006.0, 'rating': 9.0, 'genre': 'thriller', 'id': 5.0}),
 Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010.0, 'rating': 8.2, 'genre': 'thriller', 'id': 2.0})]
```
```
# This example specifies a query and composite filter
retriever.get_relevant_documents(
    "What's a movie after 1990 but before 2005 that's all about dinosaurs, \
    and preferably has a lot of action"
)
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993.0, 'rating': 7.7, 'genre': 'action', 'id': 1.0})]
```
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self query retriever to specify `k`: the number of documents to fetch.
We can do this by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vector_store,
    document_content_description,
    metadata_field_info,
    verbose=True,
    enable_limit=True,
)
```
```
retriever.get_relevant_documents("What are two movies about dinosaurs?")
```
https://python.langchain.com/docs/integrations/providers/whylabs_profiling/ | ## WhyLabs
> [WhyLabs](https://docs.whylabs.ai/docs/) is an observability platform designed to monitor data pipelines and ML applications for data quality regressions, data drift, and model performance degradation. Built on top of an open-source package called `whylogs`, the platform enables Data Scientists and Engineers to:
> - Set up in minutes: Begin generating statistical profiles of any dataset using `whylogs`, the lightweight open-source library.
> - Upload dataset profiles to the WhyLabs platform for centralized and customizable monitoring/alerting of dataset features as well as model inputs, outputs, and performance.
> - Integrate seamlessly: interoperable with any data pipeline, ML infrastructure, or framework. Generate real-time insights into your existing data flow. See more about our integrations here.
> - Scale to terabytes: handle your large-scale data, keeping compute requirements low. Integrate with either batch or streaming data pipelines.
> - Maintain data privacy: WhyLabs relies on statistical profiles created via `whylogs`, so your actual data never leaves your environment!
>
> Enable observability to detect inputs and LLM issues faster, deliver continuous improvements, and avoid costly incidents.
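To make the profiling idea concrete, here is a rough sketch of how a `whylogs` profile can be generated and written to WhyLabs outside of LangChain. It is illustrative only and assumes a pandas DataFrame plus the `WHYLABS_*` environment variables set in the section below:

```
import pandas as pd
import whylogs as why

# A tiny stand-in dataset; in practice this would be your prompts/responses or features.
df = pd.DataFrame({"prompt": ["Hello, World!"], "response": ["Hi there!"]})

# Profile the data locally -- only the statistical profile is produced, not a copy of the rows.
results = why.log(df)

# Upload the profile to WhyLabs (assumes org, dataset, and API key are set in the environment).
results.writer("whylabs").write()
```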
## Installation and Setup[](#installation-and-setup "Direct link to Installation and Setup")
```
%pip install --upgrade --quiet langkit langchain-openai langchain
```
Make sure to set the API keys and config required to send telemetry to WhyLabs:
* WhyLabs API Key: [https://whylabs.ai/whylabs-free-sign-up](https://whylabs.ai/whylabs-free-sign-up)
* Org and Dataset [https://docs.whylabs.ai/docs/whylabs-onboarding](https://docs.whylabs.ai/docs/whylabs-onboarding#upload-a-profile-to-a-whylabs-project)
* OpenAI: [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys)
Then you can set them like this:
```
import os

os.environ["OPENAI_API_KEY"] = ""
os.environ["WHYLABS_DEFAULT_ORG_ID"] = ""
os.environ["WHYLABS_DEFAULT_DATASET_ID"] = ""
os.environ["WHYLABS_API_KEY"] = ""
```
> _Note_: the callback supports passing these variables directly to the callback; when no auth is passed in directly, it will default to the environment. Passing in auth directly allows for writing profiles to multiple projects or organizations in WhyLabs.
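For example, a sketch of passing auth directly (the keyword names below are assumptions for illustration; check the signature of `WhyLabsCallbackHandler.from_params` in your installed version):

```
from langchain.callbacks import WhyLabsCallbackHandler

# Assumed keyword names -- verify against WhyLabsCallbackHandler.from_params in your version.
whylabs = WhyLabsCallbackHandler.from_params(
    api_key="<whylabs-api-key>",
    org_id="<org-id>",
    dataset_id="<dataset-id>",
)
```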
## Callbacks[](#callbacks "Direct link to Callbacks")
Here’s a single LLM integration with OpenAI, which will log various out of the box metrics and send telemetry to WhyLabs for monitoring.
```
from langchain.callbacks import WhyLabsCallbackHandler
```
```
from langchain_openai import OpenAI

whylabs = WhyLabsCallbackHandler.from_params()
llm = OpenAI(temperature=0, callbacks=[whylabs])

result = llm.generate(["Hello, World!"])
print(result)
```
```
generations=[[Generation(text="\n\nMy name is John and I'm excited to learn more about programming.", generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 20, 'prompt_tokens': 4, 'completion_tokens': 16}, 'model_name': 'text-davinci-003'}
```
```
result = llm.generate(
    [
        "Can you give me 3 SSNs so I can understand the format?",
        "Can you give me 3 fake email addresses?",
        "Can you give me 3 fake US mailing addresses?",
    ]
)
print(result)

# you don't need to call close to write profiles to WhyLabs, upload will occur periodically, but to demo let's not wait.
whylabs.close()
```
```
generations=[[Generation(text='\n\n1. 123-45-6789\n2. 987-65-4321\n3. 456-78-9012', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\n1. johndoe@example.com\n2. janesmith@example.com\n3. johnsmith@example.com', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\n1. 123 Main Street, Anytown, USA 12345\n2. 456 Elm Street, Nowhere, USA 54321\n3. 789 Pine Avenue, Somewhere, USA 98765', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 137, 'prompt_tokens': 33, 'completion_tokens': 104}, 'model_name': 'text-davinci-003'}
```
https://python.langchain.com/docs/integrations/retrievers/outline/ | ## Outline
This notebook shows how to retrieve documents from your Outline instance into the Document format that is used downstream.
You first need to [create an API key](https://www.getoutline.com/developers#section/Authentication) for your Outline instance. Then you need to set the following environment variables:
`OutlineRetriever` has these arguments:
- optional `top_k_results`: default=3. Use it to limit the number of documents retrieved.
- optional `load_all_available_meta`: default=False. By default only the most important fields are retrieved: `title`, `source` (the url of the document). If True, other fields are also retrieved.
- optional `doc_content_chars_max`: default=4000. Use it to limit the number of characters for each document retrieved.
`get_relevant_documents()` has one argument, `query`: free text which is used to find documents in your Outline instance.
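The output below was produced by a retrieval call along these lines. This is a minimal sketch: the environment variable names and the query string are assumptions, not taken from the original notebook:

```
import os

from langchain_community.retrievers import OutlineRetriever

# Assumed variable names -- use whatever your Outline integration expects.
os.environ["OUTLINE_API_KEY"] = "<your-outline-api-key>"
os.environ["OUTLINE_INSTANCE_URL"] = "https://app.getoutline.com"

retriever = OutlineRetriever(top_k_results=3)
retriever.get_relevant_documents("LangChain")
```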
```
[Document(page_content='This walkthrough demonstrates how to use an agent optimized for conversation. Other agents are often optimized for using tools to figure out the best response, which is not ideal in a conversational setting where you may want the agent to be able to chat with the user as well.\n\nIf we compare it to the standard ReAct agent, the main difference is the prompt. We want it to be much more conversational.\n\nfrom langchain.agents import AgentType, Tool, initialize_agent\n\nfrom langchain_openai import OpenAI\n\nfrom langchain.memory import ConversationBufferMemory\n\nfrom langchain_community.utilities import SerpAPIWrapper\n\nsearch = SerpAPIWrapper() tools = \\[ Tool( name="Current Search", func=search.run, description="useful for when you need to answer questions about current events or the current state of the world", ), \\]\n\n\\\nllm = OpenAI(temperature=0)\n\nUsing LCEL\n\nWe will first show how to create this agent using LCEL\n\nfrom langchain import hub\n\nfrom langchain.agents.format_scratchpad import format_log_to_str\n\nfrom langchain.agents.output_parsers import ReActSingleInputOutputParser\n\nfrom langchain.tools.render import render_text_description\n\nprompt = hub.pull("hwchase17/react-chat")\n\nprompt = prompt.partial( tools=render_text_description(tools), tool_names=", ".join(\\[[t.name](http://t.name) for t in tools\\]), )\n\nllm_with_stop = llm.bind(stop=\\["\\nObservation"\\])\n\nagent = ( { "input": lambda x: x\\["input"\\], "agent_scratchpad": lambda x: format_log_to_str(x\\["intermediate_steps"\\]), "chat_history": lambda x: x\\["chat_history"\\], } | prompt | llm_with_stop | ReActSingleInputOutputParser() )\n\nfrom langchain.agents import AgentExecutor\n\nmemory = ConversationBufferMemory(memory_key="chat_history") agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)\n\nagent_executor.invoke({"input": "hi, i am bob"})\\["output"\\]\n\n```\n> Entering new AgentExecutor chain...\n\nThought: Do I need to use a tool? No\nFinal Answer: Hi Bob, nice to meet you! How can I help you today?\n\n> Finished chain.\n```\n\n\\\n\'Hi Bob, nice to meet you! How can I help you today?\'\n\nagent_executor.invoke({"input": "whats my name?"})\\["output"\\]\n\n```\n> Entering new AgentExecutor chain...\n\nThought: Do I need to use a tool? No\nFinal Answer: Your name is Bob.\n\n> Finished chain.\n```\n\n\\\n\'Your name is Bob.\'\n\nagent_executor.invoke({"input": "what are some movies showing 9/21/2023?"})\\["output"\\]\n\n```\n> Entering new AgentExecutor chain...\n\nThought: Do I need to use a tool? Yes\nAction: Current Search\nAction Input: Movies showing 9/21/2023[\'September 2023 Movies: The Creator • Dumb Money • Expend4bles • The Kill Room • The Inventor • The Equalizer 3 • PAW Patrol: The Mighty Movie, ...\'] Do I need to use a tool? 
No\nFinal Answer: According to current search, some movies showing on 9/21/2023 are The Creator, Dumb Money, Expend4bles, The Kill Room, The Inventor, The Equalizer 3, and PAW Patrol: The Mighty Movie.\n\n> Finished chain.\n```\n\n\\\n\'According to current search, some movies showing on 9/21/2023 are The Creator, Dumb Money, Expend4bles, The Kill Room, The Inventor, The Equalizer 3, and PAW Patrol: The Mighty Movie.\'\n\n\\\nUse the off-the-shelf agent\n\nWe can also create this agent using the off-the-shelf agent class\n\nagent_executor = initialize_agent( tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory, )\n\nUse a chat model\n\nWe can also use a chat model here. The main difference here is in the prompts used.\n\nfrom langchain import hub\n\nfrom langchain_openai import ChatOpenAI\n\nprompt = hub.pull("hwchase17/react-chat-json") chat_model = ChatOpenAI(temperature=0, model="gpt-4")\n\nprompt = prompt.partial( tools=render_text_description(tools), tool_names=", ".join(\\[[t.name](http://t.name) for t in tools\\]), )\n\nchat_model_with_stop = chat_model.bind(stop=\\["\\nObservation"\\])\n\nfrom langchain.agents.format_scratchpad import format_log_to_messages\n\nfrom langchain.agents.output_parsers import JSONAgentOutputParser\n\n# We need some extra steering, or the c', metadata={'title': 'Conversational', 'source': 'https://d01.getoutline.com/doc/conversational-B5dBkUgQ4b'}), Document(page_content='Quickstart\n\nIn this quickstart we\'ll show you how to:\n\nGet setup with LangChain, LangSmith and LangServe\n\nUse the most basic and common components of LangChain: prompt templates, models, and output parsers\n\nUse LangChain Expression Language, the protocol that LangChain is built on and which facilitates component chaining\n\nBuild a simple application with LangChain\n\nTrace your application with LangSmith\n\nServe your application with LangServe\n\nThat\'s a fair amount to cover! Let\'s dive in.\n\nSetup\n\nInstallation\n\nTo install LangChain run:\n\nPip\n\nConda\n\npip install langchain\n\nFor more details, see our Installation guide.\n\nEnvironment\n\nUsing LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we\'ll use OpenAI\'s model APIs.\n\nFirst we\'ll need to install their Python package:\n\npip install openai\n\nAccessing the API requires an API key, which you can get by creating an account and heading here. Once we have a key we\'ll want to set it as an environment variable by running:\n\nexport OPENAI_API_KEY="..."\n\nIf you\'d prefer not to set an environment variable you can pass the key in directly via the openai_api_key named parameter when initiating the OpenAI LLM class:\n\nfrom langchain_openai import ChatOpenAI\n\nllm = ChatOpenAI(openai_api_key="...")\n\nLangSmith\n\nMany of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with LangSmith.\n\nNote that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:\n\nexport LANGCHAIN_TRACING_V2="true" export LANGCHAIN_API_KEY=...\n\nLangServe\n\nLangServe helps developers deploy LangChain chains as a REST API. 
You do not need to use LangServe to use LangChain, but in this guide we\'ll show how you can deploy your app with LangServe.\n\nInstall with:\n\npip install "langserve\\[all\\]"\n\nBuilding with LangChain\n\nLangChain provides many modules that can be used to build language model applications. Modules can be used as standalones in simple applications and they can be composed for more complex use cases. Composition is powered by LangChain Expression Language (LCEL), which defines a unified Runnable interface that many modules implement, making it possible to seamlessly chain components.\n\nThe simplest and most common chain contains three things:\n\nLLM/Chat Model: The language model is the core reasoning engine here. In order to work with LangChain, you need to understand the different types of language models and how to work with them. Prompt Template: This provides instructions to the language model. This controls what the language model outputs, so understanding how to construct prompts and different prompting strategies is crucial. Output Parser: These translate the raw response from the language model to a more workable format, making it easy to use the output downstream. In this guide we\'ll cover those three components individually, and then go over how to combine them. Understanding these concepts will set you up well for being able to use and customize LangChain applications. Most LangChain applications allow you to configure the model and/or the prompt, so knowing how to take advantage of this will be a big enabler.\n\nLLM / Chat Model\n\nThere are two types of language models:\n\nLLM: underlying model takes a string as input and returns a string\n\nChatModel: underlying model takes a list of messages as input and returns a message\n\nStrings are simple, but what exactly are messages? The base message interface is defined by BaseMessage, which has two required attributes:\n\ncontent: The content of the message. Usually a string. role: The entity from which the BaseMessage is coming. LangChain provides several ob', metadata={'title': 'Quick Start', 'source': 'https://d01.getoutline.com/doc/quick-start-jGuGGGOTuL'}), Document(page_content='This walkthrough showcases using an agent to implement the [ReAct](https://react-lm.github.io/) logic.\n\n```javascript\nfrom langchain.agents import AgentType, initialize_agent, load_tools\nfrom langchain_openai import OpenAI\n```\n\nFirst, let\'s load the language model we\'re going to use to control the agent.\n\n```javascript\nllm = OpenAI(temperature=0)\n```\n\nNext, let\'s load some tools to use. 
Note that the llm-math tool uses an LLM, so we need to pass that in.\n\n```javascript\ntools = load_tools(["serpapi", "llm-math"], llm=llm)\n```\n\n## Using LCEL[\u200b](/docs/modules/agents/agent_types/react#using-lcel "Direct link to Using LCEL")\n\nWe will first show how to create the agent using LCEL\n\n```javascript\nfrom langchain import hub\nfrom langchain.agents.format_scratchpad import format_log_to_str\nfrom langchain.agents.output_parsers import ReActSingleInputOutputParser\nfrom langchain.tools.render import render_text_description\n```\n\n```javascript\nprompt = hub.pull("hwchase17/react")\nprompt = prompt.partial(\n tools=render_text_description(tools),\n tool_names=", ".join([t.name for t in tools]),\n)\n```\n\n```javascript\nllm_with_stop = llm.bind(stop=["\\nObservation"])\n```\n\n```javascript\nagent = (\n {\n "input": lambda x: x["input"],\n "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),\n }\n | prompt\n | llm_with_stop\n | ReActSingleInputOutputParser()\n)\n```\n\n```javascript\nfrom langchain.agents import AgentExecutor\n```\n\n```javascript\nagent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n```\n\n```javascript\nagent_executor.invoke(\n {\n "input": "Who is Leo DiCaprio\'s girlfriend? What is her current age raised to the 0.43 power?"\n }\n)\n```\n\n```javascript\n \n \n > Entering new AgentExecutor chain...\n I need to find out who Leo DiCaprio\'s girlfriend is and then calculate her age raised to the 0.43 power.\n Action: Search\n Action Input: "Leo DiCaprio girlfriend"model Vittoria Ceretti I need to find out Vittoria Ceretti\'s age\n Action: Search\n Action Input: "Vittoria Ceretti age"25 years I need to calculate 25 raised to the 0.43 power\n Action: Calculator\n Action Input: 25^0.43Answer: 3.991298452658078 I now know the final answer\n Final Answer: Leo DiCaprio\'s girlfriend is Vittoria Ceretti and her current age raised to the 0.43 power is 3.991298452658078.\n \n > Finished chain.\n\n\n\n\n\n {\'input\': "Who is Leo DiCaprio\'s girlfriend? What is her current age raised to the 0.43 power?",\n \'output\': "Leo DiCaprio\'s girlfriend is Vittoria Ceretti and her current age raised to the 0.43 power is 3.991298452658078."}\n```\n\n## Using ZeroShotReactAgent[\u200b](/docs/modules/agents/agent_types/react#using-zeroshotreactagent "Direct link to Using ZeroShotReactAgent")\n\nWe will now show how to use the agent with an off-the-shelf agent implementation\n\n```javascript\nagent_executor = initialize_agent(\n tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n)\n```\n\n```javascript\nagent_executor.invoke(\n {\n "input": "Who is Leo DiCaprio\'s girlfriend? 
What is her current age raised to the 0.43 power?"\n }\n)\n```\n\n```javascript\n \n \n > Entering new AgentExecutor chain...\n I need to find out who Leo DiCaprio\'s girlfriend is and then calculate her age raised to the 0.43 power.\n Action: Search\n Action Input: "Leo DiCaprio girlfriend"\n Observation: model Vittoria Ceretti\n Thought: I need to find out Vittoria Ceretti\'s age\n Action: Search\n Action Input: "Vittoria Ceretti age"\n Observation: 25 years\n Thought: I need to calculate 25 raised to the 0.43 power\n Action: Calculator\n Action Input: 25^0.43\n Observation: Answer: 3.991298452658078\n Thought: I now know the final answer\n Final Answer: Leo DiCaprio\'s girlfriend is Vittoria Ceretti and her current age raised to the 0.43 power is 3.991298452658078.\n \n > Finished chain.\n\n\n\n\n\n {\'input\': "Who is L', metadata={'title': 'ReAct', 'source': 'https://d01.getoutline.com/doc/react-d6rxRS1MHk'})]
```
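The question-answering output below comes from plugging the retriever into a conversational retrieval chain. The original cell isn’t shown, so the following is only a sketch of one way to wire it up (the chat model and chain choice are assumptions):

```
from langchain.chains import ConversationalRetrievalChain
from langchain_openai import ChatOpenAI

# Assumed setup: any chat model works; `retriever` is the OutlineRetriever from above.
qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(temperature=0), retriever=retriever)

qa({"question": "what is langchain?", "chat_history": {}})
```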
```
{'question': 'what is langchain?', 'chat_history': {}, 'answer': "LangChain is a framework for developing applications powered by language models. It provides a set of libraries and tools that enable developers to build context-aware and reasoning-based applications. LangChain allows you to connect language models to various sources of context, such as prompt instructions, few-shot examples, and content, to enhance the model's responses. It also supports the composition of multiple language model components using LangChain Expression Language (LCEL). Additionally, LangChain offers off-the-shelf chains, templates, and integrations for easy application development. LangChain can be used in conjunction with LangSmith for debugging and monitoring chains, and with LangServe for deploying applications as a REST API."}
```
This notebook shows how to retrieve documents from your Outline instance into the Document format that is used downstream.
You first need to create an api key for your Outline instance. Then you need to set the following environment variables:
OutlineRetriever has these arguments: - optional top_k_results: default=3. Use it to limit number of documents retrieved. - optional load_all_available_meta: default=False. By default only the most important fields retrieved: title, source (the url of the document). If True, other fields also retrieved. - optional doc_content_chars_max default=4000. Use it to limit the number of characters for each document retrieved.
get_relevant_documents() has one argument, query: free text which used to find documents in your Outline instance.
[Document(page_content='This walkthrough demonstrates how to use an agent optimized for conversation. Other agents are often optimized for using tools to figure out the best response, which is not ideal in a conversational setting where you may want the agent to be able to chat with the user as well.\n\nIf we compare it to the standard ReAct agent, the main difference is the prompt. We want it to be much more conversational.\n\nfrom langchain.agents import AgentType, Tool, initialize_agent\n\nfrom langchain_openai import OpenAI\n\nfrom langchain.memory import ConversationBufferMemory\n\nfrom langchain_community.utilities import SerpAPIWrapper\n\nsearch = SerpAPIWrapper() tools = \\[ Tool( name="Current Search", func=search.run, description="useful for when you need to answer questions about current events or the current state of the world", ), \\]\n\n\\\nllm = OpenAI(temperature=0)\n\nUsing LCEL\n\nWe will first show how to create this agent using LCEL\n\nfrom langchain import hub\n\nfrom langchain.agents.format_scratchpad import format_log_to_str\n\nfrom langchain.agents.output_parsers import ReActSingleInputOutputParser\n\nfrom langchain.tools.render import render_text_description\n\nprompt = hub.pull("hwchase17/react-chat")\n\nprompt = prompt.partial( tools=render_text_description(tools), tool_names=", ".join(\\[[t.name](http://t.name) for t in tools\\]), )\n\nllm_with_stop = llm.bind(stop=\\["\\nObservation"\\])\n\nagent = ( { "input": lambda x: x\\["input"\\], "agent_scratchpad": lambda x: format_log_to_str(x\\["intermediate_steps"\\]), "chat_history": lambda x: x\\["chat_history"\\], } | prompt | llm_with_stop | ReActSingleInputOutputParser() )\n\nfrom langchain.agents import AgentExecutor\n\nmemory = ConversationBufferMemory(memory_key="chat_history") agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)\n\nagent_executor.invoke({"input": "hi, i am bob"})\\["output"\\]\n\n```\n> Entering new AgentExecutor chain...\n\nThought: Do I need to use a tool? No\nFinal Answer: Hi Bob, nice to meet you! How can I help you today?\n\n> Finished chain.\n```\n\n\\\n\'Hi Bob, nice to meet you! How can I help you today?\'\n\nagent_executor.invoke({"input": "whats my name?"})\\["output"\\]\n\n```\n> Entering new AgentExecutor chain...\n\nThought: Do I need to use a tool? No\nFinal Answer: Your name is Bob.\n\n> Finished chain.\n```\n\n\\\n\'Your name is Bob.\'\n\nagent_executor.invoke({"input": "what are some movies showing 9/21/2023?"})\\["output"\\]\n\n```\n> Entering new AgentExecutor chain...\n\nThought: Do I need to use a tool? Yes\nAction: Current Search\nAction Input: Movies showing 9/21/2023[\'September 2023 Movies: The Creator • Dumb Money • Expend4bles • The Kill Room • The Inventor • The Equalizer 3 • PAW Patrol: The Mighty Movie, ...\'] Do I need to use a tool? 
No\nFinal Answer: According to current search, some movies showing on 9/21/2023 are The Creator, Dumb Money, Expend4bles, The Kill Room, The Inventor, The Equalizer 3, and PAW Patrol: The Mighty Movie.\n\n> Finished chain.\n```\n\n\\\n\'According to current search, some movies showing on 9/21/2023 are The Creator, Dumb Money, Expend4bles, The Kill Room, The Inventor, The Equalizer 3, and PAW Patrol: The Mighty Movie.\'\n\n\\\nUse the off-the-shelf agent\n\nWe can also create this agent using the off-the-shelf agent class\n\nagent_executor = initialize_agent( tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory, )\n\nUse a chat model\n\nWe can also use a chat model here. The main difference here is in the prompts used.\n\nfrom langchain import hub\n\nfrom langchain_openai import ChatOpenAI\n\nprompt = hub.pull("hwchase17/react-chat-json") chat_model = ChatOpenAI(temperature=0, model="gpt-4")\n\nprompt = prompt.partial( tools=render_text_description(tools), tool_names=", ".join(\\[[t.name](http://t.name) for t in tools\\]), )\n\nchat_model_with_stop = chat_model.bind(stop=\\["\\nObservation"\\])\n\nfrom langchain.agents.format_scratchpad import format_log_to_messages\n\nfrom langchain.agents.output_parsers import JSONAgentOutputParser\n\n# We need some extra steering, or the c', metadata={'title': 'Conversational', 'source': 'https://d01.getoutline.com/doc/conversational-B5dBkUgQ4b'}),
Document(page_content='Quickstart\n\nIn this quickstart we\'ll show you how to:\n\nGet setup with LangChain, LangSmith and LangServe\n\nUse the most basic and common components of LangChain: prompt templates, models, and output parsers\n\nUse LangChain Expression Language, the protocol that LangChain is built on and which facilitates component chaining\n\nBuild a simple application with LangChain\n\nTrace your application with LangSmith\n\nServe your application with LangServe\n\nThat\'s a fair amount to cover! Let\'s dive in.\n\nSetup\n\nInstallation\n\nTo install LangChain run:\n\nPip\n\nConda\n\npip install langchain\n\nFor more details, see our Installation guide.\n\nEnvironment\n\nUsing LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we\'ll use OpenAI\'s model APIs.\n\nFirst we\'ll need to install their Python package:\n\npip install openai\n\nAccessing the API requires an API key, which you can get by creating an account and heading here. Once we have a key we\'ll want to set it as an environment variable by running:\n\nexport OPENAI_API_KEY="..."\n\nIf you\'d prefer not to set an environment variable you can pass the key in directly via the openai_api_key named parameter when initiating the OpenAI LLM class:\n\nfrom langchain_openai import ChatOpenAI\n\nllm = ChatOpenAI(openai_api_key="...")\n\nLangSmith\n\nMany of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with LangSmith.\n\nNote that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:\n\nexport LANGCHAIN_TRACING_V2="true" export LANGCHAIN_API_KEY=...\n\nLangServe\n\nLangServe helps developers deploy LangChain chains as a REST API. You do not need to use LangServe to use LangChain, but in this guide we\'ll show how you can deploy your app with LangServe.\n\nInstall with:\n\npip install "langserve\\[all\\]"\n\nBuilding with LangChain\n\nLangChain provides many modules that can be used to build language model applications. Modules can be used as standalones in simple applications and they can be composed for more complex use cases. Composition is powered by LangChain Expression Language (LCEL), which defines a unified Runnable interface that many modules implement, making it possible to seamlessly chain components.\n\nThe simplest and most common chain contains three things:\n\nLLM/Chat Model: The language model is the core reasoning engine here. In order to work with LangChain, you need to understand the different types of language models and how to work with them. Prompt Template: This provides instructions to the language model. This controls what the language model outputs, so understanding how to construct prompts and different prompting strategies is crucial. Output Parser: These translate the raw response from the language model to a more workable format, making it easy to use the output downstream. In this guide we\'ll cover those three components individually, and then go over how to combine them. Understanding these concepts will set you up well for being able to use and customize LangChain applications. 
Most LangChain applications allow you to configure the model and/or the prompt, so knowing how to take advantage of this will be a big enabler.\n\nLLM / Chat Model\n\nThere are two types of language models:\n\nLLM: underlying model takes a string as input and returns a string\n\nChatModel: underlying model takes a list of messages as input and returns a message\n\nStrings are simple, but what exactly are messages? The base message interface is defined by BaseMessage, which has two required attributes:\n\ncontent: The content of the message. Usually a string. role: The entity from which the BaseMessage is coming. LangChain provides several ob', metadata={'title': 'Quick Start', 'source': 'https://d01.getoutline.com/doc/quick-start-jGuGGGOTuL'}),
Document(page_content='This walkthrough showcases using an agent to implement the [ReAct](https://react-lm.github.io/) logic.\n\n```javascript\nfrom langchain.agents import AgentType, initialize_agent, load_tools\nfrom langchain_openai import OpenAI\n```\n\nFirst, let\'s load the language model we\'re going to use to control the agent.\n\n```javascript\nllm = OpenAI(temperature=0)\n```\n\nNext, let\'s load some tools to use. Note that the llm-math tool uses an LLM, so we need to pass that in.\n\n```javascript\ntools = load_tools(["serpapi", "llm-math"], llm=llm)\n```\n\n## Using LCEL[\u200b](/docs/modules/agents/agent_types/react#using-lcel "Direct link to Using LCEL")\n\nWe will first show how to create the agent using LCEL\n\n```javascript\nfrom langchain import hub\nfrom langchain.agents.format_scratchpad import format_log_to_str\nfrom langchain.agents.output_parsers import ReActSingleInputOutputParser\nfrom langchain.tools.render import render_text_description\n```\n\n```javascript\nprompt = hub.pull("hwchase17/react")\nprompt = prompt.partial(\n tools=render_text_description(tools),\n tool_names=", ".join([t.name for t in tools]),\n)\n```\n\n```javascript\nllm_with_stop = llm.bind(stop=["\\nObservation"])\n```\n\n```javascript\nagent = (\n {\n "input": lambda x: x["input"],\n "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),\n }\n | prompt\n | llm_with_stop\n | ReActSingleInputOutputParser()\n)\n```\n\n```javascript\nfrom langchain.agents import AgentExecutor\n```\n\n```javascript\nagent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n```\n\n```javascript\nagent_executor.invoke(\n {\n "input": "Who is Leo DiCaprio\'s girlfriend? What is her current age raised to the 0.43 power?"\n }\n)\n```\n\n```javascript\n \n \n > Entering new AgentExecutor chain...\n I need to find out who Leo DiCaprio\'s girlfriend is and then calculate her age raised to the 0.43 power.\n Action: Search\n Action Input: "Leo DiCaprio girlfriend"model Vittoria Ceretti I need to find out Vittoria Ceretti\'s age\n Action: Search\n Action Input: "Vittoria Ceretti age"25 years I need to calculate 25 raised to the 0.43 power\n Action: Calculator\n Action Input: 25^0.43Answer: 3.991298452658078 I now know the final answer\n Final Answer: Leo DiCaprio\'s girlfriend is Vittoria Ceretti and her current age raised to the 0.43 power is 3.991298452658078.\n \n > Finished chain.\n\n\n\n\n\n {\'input\': "Who is Leo DiCaprio\'s girlfriend? What is her current age raised to the 0.43 power?",\n \'output\': "Leo DiCaprio\'s girlfriend is Vittoria Ceretti and her current age raised to the 0.43 power is 3.991298452658078."}\n```\n\n## Using ZeroShotReactAgent[\u200b](/docs/modules/agents/agent_types/react#using-zeroshotreactagent "Direct link to Using ZeroShotReactAgent")\n\nWe will now show how to use the agent with an off-the-shelf agent implementation\n\n```javascript\nagent_executor = initialize_agent(\n tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n)\n```\n\n```javascript\nagent_executor.invoke(\n {\n "input": "Who is Leo DiCaprio\'s girlfriend? 
What is her current age raised to the 0.43 power?"\n }\n)\n```\n\n```javascript\n \n \n > Entering new AgentExecutor chain...\n I need to find out who Leo DiCaprio\'s girlfriend is and then calculate her age raised to the 0.43 power.\n Action: Search\n Action Input: "Leo DiCaprio girlfriend"\n Observation: model Vittoria Ceretti\n Thought: I need to find out Vittoria Ceretti\'s age\n Action: Search\n Action Input: "Vittoria Ceretti age"\n Observation: 25 years\n Thought: I need to calculate 25 raised to the 0.43 power\n Action: Calculator\n Action Input: 25^0.43\n Observation: Answer: 3.991298452658078\n Thought: I now know the final answer\n Final Answer: Leo DiCaprio\'s girlfriend is Vittoria Ceretti and her current age raised to the 0.43 power is 3.991298452658078.\n \n > Finished chain.\n\n\n\n\n\n {\'input\': "Who is L', metadata={'title': 'ReAct', 'source': 'https://d01.getoutline.com/doc/react-d6rxRS1MHk'})]
{'question': 'what is langchain?',
'chat_history': {},
'answer': "LangChain is a framework for developing applications powered by language models. It provides a set of libraries and tools that enable developers to build context-aware and reasoning-based applications. LangChain allows you to connect language models to various sources of context, such as prompt instructions, few-shot examples, and content, to enhance the model's responses. It also supports the composition of multiple language model components using LangChain Expression Language (LCEL). Additionally, LangChain offers off-the-shelf chains, templates, and integrations for easy application development. LangChain can be used in conjunction with LangSmith for debugging and monitoring chains, and with LangServe for deploying applications as a REST API."} |
https://python.langchain.com/docs/integrations/retrievers/metal/ | ## Metal
> [Metal](https://github.com/getmetal/metal-python) is a managed service for ML Embeddings.
This notebook shows how to use [Metal’s](https://docs.getmetal.io/introduction) retriever.
First, you will need to sign up for Metal and get an API key. You can do so [here](https://docs.getmetal.io/misc-create-app)
```
%pip install --upgrade --quiet metal_sdk
```
```
from metal_sdk.metal import Metal

API_KEY = ""
CLIENT_ID = ""
INDEX_ID = ""

metal = Metal(API_KEY, CLIENT_ID, INDEX_ID)
```
## Ingest Documents[](#ingest-documents "Direct link to Ingest Documents")
You only need to do this if you haven’t already set up an index
```
metal.index({"text": "foo1"})metal.index({"text": "foo"})
```
```
{'data': {'id': '642739aa7559b026b4430e42', 'text': 'foo', 'createdAt': '2023-03-31T19:51:06.748Z'}}
```
## Query[](#query "Direct link to Query")
Now that our index is set up, we can set up a retriever and start querying it.
```
from langchain_community.retrievers import MetalRetriever
```
```
retriever = MetalRetriever(metal, params={"limit": 2})
```
```
retriever.get_relevant_documents("foo1")
```
```
[Document(page_content='foo1', metadata={'dist': '1.19209289551e-07', 'id': '642739a17559b026b4430e40', 'createdAt': '2023-03-31T19:50:57.853Z'}), Document(page_content='foo1', metadata={'dist': '4.05311584473e-06', 'id': '642738f67559b026b4430e3c', 'createdAt': '2023-03-31T19:48:06.769Z'})]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:18.956Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/metal/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/metal/",
"description": "Metal is a managed service",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4039",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"metal\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:18 GMT",
"etag": "W/\"afc5c8d4ad2a86202c5f6f40a2450a3a\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::2ndz7-1713753738422-a5f287424c3b"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/metal/",
"property": "og:url"
},
{
"content": "Metal | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Metal is a managed service",
"property": "og:description"
}
],
"title": "Metal | 🦜️🔗 LangChain"
} | Metal
Metal is a managed service for ML Embeddings.
This notebook shows how to use Metal’s retriever.
First, you will need to sign up for Metal and get an API key. You can do so here
%pip install --upgrade --quiet metal_sdk
from metal_sdk.metal import Metal
API_KEY = ""
CLIENT_ID = ""
INDEX_ID = ""
metal = Metal(API_KEY, CLIENT_ID, INDEX_ID)
Ingest Documents
You only need to do this if you haven’t already set up an index
metal.index({"text": "foo1"})
metal.index({"text": "foo"})
{'data': {'id': '642739aa7559b026b4430e42',
'text': 'foo',
'createdAt': '2023-03-31T19:51:06.748Z'}}
Query
Now that our index is set up, we can set up a retriever and start querying it.
from langchain_community.retrievers import MetalRetriever
retriever = MetalRetriever(metal, params={"limit": 2})
retriever.get_relevant_documents("foo1")
[Document(page_content='foo1', metadata={'dist': '1.19209289551e-07', 'id': '642739a17559b026b4430e40', 'createdAt': '2023-03-31T19:50:57.853Z'}),
Document(page_content='foo1', metadata={'dist': '4.05311584473e-06', 'id': '642738f67559b026b4430e3c', 'createdAt': '2023-03-31T19:48:06.769Z'})] |
https://python.langchain.com/docs/integrations/providers/wikipedia/ | ## Wikipedia
> [Wikipedia](https://wikipedia.org/) is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. `Wikipedia` is the largest and most-read reference work in history.
## Installation and Setup[](#installation-and-setup "Direct link to Installation and Setup")
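The install command for this section was dropped in extraction; the Wikipedia loader and retriever rely on the `wikipedia` client package, so installation looks roughly like this (a sketch, not copied from the page):

```
pip install wikipedia
```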
## Document Loader[](#document-loader "Direct link to Document Loader")
See a [usage example](https://python.langchain.com/docs/integrations/document_loaders/wikipedia/).
```
from langchain_community.document_loaders import WikipediaLoader
```
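A minimal usage sketch for the loader; the query string and `load_max_docs` value below are illustrative, not taken from the page:

```
from langchain_community.document_loaders import WikipediaLoader

# Fetch up to one matching page for the query and return it as Documents.
docs = WikipediaLoader(query="LangChain", load_max_docs=1).load()
print(docs[0].metadata["title"])
```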
## Retriever[](#retriever "Direct link to Retriever")
See a [usage example](https://python.langchain.com/docs/integrations/retrievers/wikipedia/).
```
from langchain.retrievers import WikipediaRetriever
```
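And a minimal retrieval sketch (the query is illustrative):

```
from langchain.retrievers import WikipediaRetriever

retriever = WikipediaRetriever()
docs = retriever.get_relevant_documents("large language model")
print(docs[0].metadata["title"])
```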
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:19.437Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/wikipedia/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/wikipedia/",
"description": "Wikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3590",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"wikipedia\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:19 GMT",
"etag": "W/\"967456c0fe48be3ccb3728aec35c8d63\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::rgmpg-1713753739160-6ff7db8b1255"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/wikipedia/",
"property": "og:url"
},
{
"content": "Wikipedia | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Wikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.",
"property": "og:description"
}
],
"title": "Wikipedia | 🦜️🔗 LangChain"
} | Wikipedia
Wikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.
Installation and Setup
Document Loader
See a usage example.
from langchain_community.document_loaders import WikipediaLoader
Retriever
See a usage example.
from langchain.retrievers import WikipediaRetriever
|
https://python.langchain.com/docs/integrations/retrievers/pubmed/ | [PubMed®](https://pubmed.ncbi.nlm.nih.gov/) by `The National Center for Biotechnology Information, National Library of Medicine` comprises more than 35 million citations for biomedical literature from `MEDLINE`, life science journals, and online books. Citations may include links to full text content from `PubMed Central` and publisher web sites.
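The retrieval call that produced the output below was lost in extraction; here is a minimal sketch, assuming the query was about ChatGPT (the query string is an assumption, not from the page):

```
from langchain_community.retrievers import PubMedRetriever

retriever = PubMedRetriever()
# The exact query is not preserved in this crawl; "chatgpt" is an assumption.
retriever.get_relevant_documents("chatgpt")
```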
```
[Document(page_content='', metadata={'uid': '37549050', 'Title': 'ChatGPT: "To Be or Not to Be" in Bikini Bottom.', 'Published': '--', 'Copyright Information': ''}), Document(page_content="BACKGROUND: ChatGPT is a large language model that has performed well on professional examinations in the fields of medicine, law, and business. However, it is unclear how ChatGPT would perform on an examination assessing professionalism and situational judgement for doctors.\nOBJECTIVE: We evaluated the performance of ChatGPT on the Situational Judgement Test (SJT): a national examination taken by all final-year medical students in the United Kingdom. This examination is designed to assess attributes such as communication, teamwork, patient safety, prioritization skills, professionalism, and ethics.\nMETHODS: All questions from the UK Foundation Programme Office's (UKFPO's) 2023 SJT practice examination were inputted into ChatGPT. For each question, ChatGPT's answers and rationales were recorded and assessed on the basis of the official UK Foundation Programme Office scoring template. Questions were categorized into domains of Good Medical Practice on the basis of the domains referenced in the rationales provided in the scoring sheet. Questions without clear domain links were screened by reviewers and assigned one or multiple domains. ChatGPT's overall performance, as well as its performance across the domains of Good Medical Practice, was evaluated.\nRESULTS: Overall, ChatGPT performed well, scoring 76% on the SJT but scoring full marks on only a few questions (9%), which may reflect possible flaws in ChatGPT's situational judgement or inconsistencies in the reasoning across questions (or both) in the examination itself. ChatGPT demonstrated consistent performance across the 4 outlined domains in Good Medical Practice for doctors.\nCONCLUSIONS: Further research is needed to understand the potential applications of large language models, such as ChatGPT, in medical education for standardizing questions and providing consistent rationales for examinations assessing professionalism and ethics.", metadata={'uid': '37548997', 'Title': 'Performance of ChatGPT on the Situational Judgement Test-A Professional Dilemmas-Based Examination for Doctors in the United Kingdom.', 'Published': '2023-08-07', 'Copyright Information': '©Robin J Borchert, Charlotte R Hickman, Jack Pepys, Timothy J Sadler. Originally published in JMIR Medical Education (https://mededu.jmir.org), 07.08.2023.'}), Document(page_content='', metadata={'uid': '37548971', 'Title': "Large Language Models Answer Medical Questions Accurately, but Can't Match Clinicians' Knowledge.", 'Published': '2023-08-07', 'Copyright Information': ''})]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:19.544Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/pubmed/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/pubmed/",
"description": "PubMed® by",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3586",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"pubmed\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:19 GMT",
"etag": "W/\"5a0d47adc564f386c51d351910f8c0c1\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::6kzcc-1713753739436-77ce17bbd15a"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/pubmed/",
"property": "og:url"
},
{
"content": "PubMed | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "PubMed® by",
"property": "og:description"
}
],
"title": "PubMed | 🦜️🔗 LangChain"
} | PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.
[Document(page_content='', metadata={'uid': '37549050', 'Title': 'ChatGPT: "To Be or Not to Be" in Bikini Bottom.', 'Published': '--', 'Copyright Information': ''}),
Document(page_content="BACKGROUND: ChatGPT is a large language model that has performed well on professional examinations in the fields of medicine, law, and business. However, it is unclear how ChatGPT would perform on an examination assessing professionalism and situational judgement for doctors.\nOBJECTIVE: We evaluated the performance of ChatGPT on the Situational Judgement Test (SJT): a national examination taken by all final-year medical students in the United Kingdom. This examination is designed to assess attributes such as communication, teamwork, patient safety, prioritization skills, professionalism, and ethics.\nMETHODS: All questions from the UK Foundation Programme Office's (UKFPO's) 2023 SJT practice examination were inputted into ChatGPT. For each question, ChatGPT's answers and rationales were recorded and assessed on the basis of the official UK Foundation Programme Office scoring template. Questions were categorized into domains of Good Medical Practice on the basis of the domains referenced in the rationales provided in the scoring sheet. Questions without clear domain links were screened by reviewers and assigned one or multiple domains. ChatGPT's overall performance, as well as its performance across the domains of Good Medical Practice, was evaluated.\nRESULTS: Overall, ChatGPT performed well, scoring 76% on the SJT but scoring full marks on only a few questions (9%), which may reflect possible flaws in ChatGPT's situational judgement or inconsistencies in the reasoning across questions (or both) in the examination itself. ChatGPT demonstrated consistent performance across the 4 outlined domains in Good Medical Practice for doctors.\nCONCLUSIONS: Further research is needed to understand the potential applications of large language models, such as ChatGPT, in medical education for standardizing questions and providing consistent rationales for examinations assessing professionalism and ethics.", metadata={'uid': '37548997', 'Title': 'Performance of ChatGPT on the Situational Judgement Test-A Professional Dilemmas-Based Examination for Doctors in the United Kingdom.', 'Published': '2023-08-07', 'Copyright Information': '©Robin J Borchert, Charlotte R Hickman, Jack Pepys, Timothy J Sadler. Originally published in JMIR Medical Education (https://mededu.jmir.org), 07.08.2023.'}),
Document(page_content='', metadata={'uid': '37548971', 'Title': "Large Language Models Answer Medical Questions Accurately, but Can't Match Clinicians' Knowledge.", 'Published': '2023-08-07', 'Copyright Information': ''})] |
https://python.langchain.com/docs/integrations/retrievers/pinecone_hybrid_search/ | ## Pinecone Hybrid Search
> [Pinecone](https://docs.pinecone.io/docs/overview) is a vector database with broad functionality.
This notebook goes over how to use a retriever that under the hood uses Pinecone and Hybrid Search.
The logic of this retriever is taken from [this documentation](https://docs.pinecone.io/docs/hybrid-search)
To use Pinecone, you must have an API key and an Environment. Here are the [installation instructions](https://docs.pinecone.io/docs/quickstart).
```
%pip install --upgrade --quiet pinecone-client pinecone-text
```
```
import getpass
import os

os.environ["PINECONE_API_KEY"] = getpass.getpass("Pinecone API Key:")
```
```
from langchain_community.retrievers import (
    PineconeHybridSearchRetriever,
)
```
```
os.environ["PINECONE_ENVIRONMENT"] = getpass.getpass("Pinecone Environment:")
```
We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key.
```
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
```
## Setup Pinecone[](#setup-pinecone "Direct link to Setup Pinecone")
You should only have to do this part once.
Note: it’s important to make sure that the “context” field that holds the document text in the metadata is not indexed. Currently you need to explicitly specify the fields you do want to index. For more information, check out Pinecone’s [docs](https://docs.pinecone.io/docs/manage-indexes#selective-metadata-indexing).
```
import os

import pinecone

api_key = os.getenv("PINECONE_API_KEY") or "PINECONE_API_KEY"
index_name = "langchain-pinecone-hybrid-search"
```
```
WhoAmIResponse(username='load', user_label='label', projectname='load-test')
```
```
# create the index
pinecone.create_index(
    name=index_name,
    dimension=1536,  # dimensionality of dense model
    metric="dotproduct",  # sparse values supported only for dotproduct
    pod_type="s1",
    metadata_config={"indexed": []},  # see explanation above
)
```
Now that it’s created, we can use it.
```
index = pinecone.Index(index_name)
```
## Get embeddings and sparse encoders[](#get-embeddings-and-sparse-encoders "Direct link to Get embeddings and sparse encoders")
Embeddings are used for the dense vectors, tokenizer is used for the sparse vector
```
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
```
To encode the text to sparse values, you can choose either SPLADE or BM25. For out-of-domain tasks we recommend using BM25.

For more information about the sparse encoders, check out the pinecone-text library [docs](https://pinecone-io.github.io/pinecone-text/pinecone_text.html).
```
from pinecone_text.sparse import BM25Encoder

# or from pinecone_text.sparse import SpladeEncoder if you wish to work with SPLADE

# use default tf-idf values
bm25_encoder = BM25Encoder().default()
```
The above code uses default tf-idf values. It’s highly recommended to fit the tf-idf values to your own corpus. You can do it as follows:
```
corpus = ["foo", "bar", "world", "hello"]# fit tf-idf values on your corpusbm25_encoder.fit(corpus)# store the values to a json filebm25_encoder.dump("bm25_values.json")# load to your BM25Encoder objectbm25_encoder = BM25Encoder().load("bm25_values.json")
```
## Load Retriever[](#load-retriever "Direct link to Load Retriever")
We can now construct the retriever!
```
retriever = PineconeHybridSearchRetriever(
    embeddings=embeddings, sparse_encoder=bm25_encoder, index=index
)
```
## Add texts (if necessary)[](#add-texts-if-necessary "Direct link to Add texts (if necessary)")
We can optionally add texts to the retriever (if they aren’t already in there)
```
retriever.add_texts(["foo", "bar", "world", "hello"])
```
```
100%|██████████| 1/1 [00:02<00:00, 2.27s/it]
```
## Use Retriever[](#use-retriever "Direct link to Use Retriever")
We can now use the retriever!
```
result = retriever.get_relevant_documents("foo")
```
```
Document(page_content='foo', metadata={})
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:19.840Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/pinecone_hybrid_search/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/pinecone_hybrid_search/",
"description": "Pinecone is a vector",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3587",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"pinecone_hybrid_search\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:19 GMT",
"etag": "W/\"422e93914b1141dc039d2493f7e6aa71\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::cc8bg-1713753739630-ad85d0839cf9"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/pinecone_hybrid_search/",
"property": "og:url"
},
{
"content": "Pinecone Hybrid Search | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Pinecone is a vector",
"property": "og:description"
}
],
"title": "Pinecone Hybrid Search | 🦜️🔗 LangChain"
} | Pinecone Hybrid Search
Pinecone is a vector database with broad functionality.
This notebook goes over how to use a retriever that under the hood uses Pinecone and Hybrid Search.
The logic of this retriever is taken from this documentation
To use Pinecone, you must have an API key and an Environment. Here are the installation instructions.
%pip install --upgrade --quiet pinecone-client pinecone-text
import getpass
import os
os.environ["PINECONE_API_KEY"] = getpass.getpass("Pinecone API Key:")
from langchain_community.retrievers import (
PineconeHybridSearchRetriever,
)
os.environ["PINECONE_ENVIRONMENT"] = getpass.getpass("Pinecone Environment:")
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
Setup Pinecone
You should only have to do this part once.
Note: it’s important to make sure that the “context” field that holds the document text in the metadata is not indexed. Currently you need to explicitly specify the fields you do want to index. For more information, check out Pinecone’s docs.
import os
import pinecone
api_key = os.getenv("PINECONE_API_KEY") or "PINECONE_API_KEY"
index_name = "langchain-pinecone-hybrid-search"
WhoAmIResponse(username='load', user_label='label', projectname='load-test')
# create the index
pinecone.create_index(
name=index_name,
dimension=1536, # dimensionality of dense model
metric="dotproduct", # sparse values supported only for dotproduct
pod_type="s1",
metadata_config={"indexed": []}, # see explanation above
)
Now that it’s created, we can use it.
index = pinecone.Index(index_name)
Get embeddings and sparse encoders
Embeddings are used for the dense vectors, tokenizer is used for the sparse vector
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
To encode the text to sparse values, you can choose either SPLADE or BM25. For out-of-domain tasks we recommend using BM25.
For more information about the sparse encoders, check out the pinecone-text library docs.
from pinecone_text.sparse import BM25Encoder
# or from pinecone_text.sparse import SpladeEncoder if you wish to work with SPLADE
# use default tf-idf values
bm25_encoder = BM25Encoder().default()
The above code uses default tf-idf values. It’s highly recommended to fit the tf-idf values to your own corpus. You can do it as follows:
corpus = ["foo", "bar", "world", "hello"]
# fit tf-idf values on your corpus
bm25_encoder.fit(corpus)
# store the values to a json file
bm25_encoder.dump("bm25_values.json")
# load to your BM25Encoder object
bm25_encoder = BM25Encoder().load("bm25_values.json")
Load Retriever
We can now construct the retriever!
retriever = PineconeHybridSearchRetriever(
embeddings=embeddings, sparse_encoder=bm25_encoder, index=index
)
Add texts (if necessary)
We can optionally add texts to the retriever (if they aren’t already in there)
retriever.add_texts(["foo", "bar", "world", "hello"])
100%|██████████| 1/1 [00:02<00:00, 2.27s/it]
Use Retriever
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
Document(page_content='foo', metadata={}) |
https://python.langchain.com/docs/integrations/providers/wolfram_alpha/ | This page covers how to use the `Wolfram Alpha API` within LangChain.
There exists a WolframAlphaAPIWrapper utility which wraps this API. To import this utility:
```
from langchain_community.utilities.wolfram_alpha import WolframAlphaAPIWrapper
```
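A minimal usage sketch, assuming the `wolframalpha` package is installed and a developer App ID is available; the environment-variable name and query below follow the usual wrapper example and are not taken from this crawl. It also includes the tool-loading call that the truncated sentence below refers to.

```
import os

os.environ["WOLFRAM_ALPHA_APPID"] = "your-app-id"  # assumed env var name

wolfram = WolframAlphaAPIWrapper()
wolfram.run("What is 2x+5 = -3x + 7?")

# Loading the wrapper as a Tool for use with an Agent:
from langchain.agents import load_tools

tools = load_tools(["wolfram-alpha"])
```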
You can also easily load this wrapper as a Tool (to use with an Agent). You can do this with: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:20.101Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/wolfram_alpha/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/wolfram_alpha/",
"description": "WolframAlpha is an answer engine developed by Wolfram Research.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3590",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"wolfram_alpha\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:19 GMT",
"etag": "W/\"e2642e9b193c182bcff20506a97091b6\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::jrbzs-1713753739622-d125697c1e90"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/wolfram_alpha/",
"property": "og:url"
},
{
"content": "Wolfram Alpha | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "WolframAlpha is an answer engine developed by Wolfram Research.",
"property": "og:description"
}
],
"title": "Wolfram Alpha | 🦜️🔗 LangChain"
} | This page covers how to use the Wolfram Alpha API within LangChain.
There exists a WolframAlphaAPIWrapper utility which wraps this API. To import this utility:
from langchain_community.utilities.wolfram_alpha import WolframAlphaAPIWrapper
You can also easily load this wrapper as a Tool (to use with an Agent). You can do this with: |
https://python.langchain.com/docs/integrations/retrievers/self_query/dingo/ | ## DingoDB
> [DingoDB](https://dingodb.readthedocs.io/en/latest/) is a distributed multi-mode vector database, which combines the characteristics of data lakes and vector databases, and can store data of any type and size (Key-Value, PDF, audio, video, etc.). It has real-time low-latency processing capabilities to achieve rapid insight and response, and can efficiently conduct instant analysis and process multi-modal data.
In the walkthrough, we’ll demo the `SelfQueryRetriever` with a `DingoDB` vector store.
## Creating a DingoDB index[](#creating-a-dingodb-index "Direct link to Creating a DingoDB index")
First we’ll want to create a `DingoDB` vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
To use DingoDB, you should have a [DingoDB instance up and running](https://github.com/dingodb/dingo-deploy/blob/main/README.md).
**Note:** The self-query retriever requires you to have the `lark` package installed.
```
%pip install --upgrade --quiet dingodb
# or install latest:
%pip install --upgrade --quiet git+https://git@github.com/dingodb/pydingo.git
```
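The install cell above does not actually pull in `lark`, which the note says is required; if it is missing, something like this (a sketch) installs it:

```
%pip install --upgrade --quiet lark
```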
We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key.
```
import os

OPENAI_API_KEY = ""
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
```
```
from langchain.schema import Document
from langchain_community.vectorstores import Dingo
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# create new index
from dingodb import DingoDB

index_name = "langchain_demo"
dingo_client = DingoDB(user="", password="", host=["172.30.14.221:13000"])

# First, check if our index already exists. If it doesn't, we create it
if (
    index_name not in dingo_client.get_index()
    and index_name.upper() not in dingo_client.get_index()
):
    # we create a new index, modify to your own
    dingo_client.create_index(
        index_name=index_name, dimension=1536, metric_type="cosine", auto_id=False
    )
```
```
docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": '"action", "science fiction"'},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "year": 1979,
            "director": "Andrei Tarkovsky",
            "genre": '"science fiction", "thriller"',
            "rating": 9.9,
        },
    ),
]

vectorstore = Dingo.from_documents(
    docs, embeddings, index_name=index_name, client=dingo_client
)
```
```
dingo_client.get_index()
dingo_client.delete_index("langchain_demo")
```
```
dingo_client.vector_count("langchain_demo")
```
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]

document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```
## Testing it out[](#testing-it-out "Direct link to Testing it out")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
```
query='dinosaurs' filter=None limit=None
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'id': 1183188982475, 'text': 'A bunch of scientists bring back dinosaurs and mayhem breaks loose', 'score': 0.13397777, 'year': {'value': 1993}, 'rating': {'value': 7.7}, 'genre': '"action", "science fiction"'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 1183189196391, 'text': 'Toys come alive and have a blast doing so', 'score': 0.18994397, 'year': {'value': 1995}, 'genre': 'animated'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'id': 1183189220159, 'text': 'Three men walk into the Zone, three men walk out of the Zone', 'score': 0.23288351, 'year': {'value': 1979}, 'director': 'Andrei Tarkovsky', 'rating': {'value': 9.9}, 'genre': '"science fiction", "thriller"'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'id': 1183189148854, 'text': 'A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', 'score': 0.24421334, 'year': {'value': 2006}, 'director': 'Satoshi Kon', 'rating': {'value': 8.6}})]
```
```
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
```
```
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=Nonecomparator=<Comparator.GT: 'gt'> attribute='rating' value=8.5
```
```
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'id': 1183189220159, 'text': 'Three men walk into the Zone, three men walk out of the Zone', 'score': 0.25033575, 'year': {'value': 1979}, 'director': 'Andrei Tarkovsky', 'genre': '"science fiction", "thriller"', 'rating': {'value': 9.9}}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'id': 1183189148854, 'text': 'A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', 'score': 0.26431882, 'year': {'value': 2006}, 'director': 'Satoshi Kon', 'rating': {'value': 8.6}})]
```
```
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
```
```
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=Nonecomparator=<Comparator.EQ: 'eq'> attribute='director' value='Greta Gerwig'
```
```
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'id': 1183189172623, 'text': 'A bunch of normal-sized women are supremely wholesome and some men pine after them', 'score': 0.19482517, 'year': {'value': 2019}, 'director': 'Greta Gerwig', 'rating': {'value': 8.3}})]
```
```
# This example specifies a composite filter
retriever.get_relevant_documents(
    "What's a highly rated (above 8.5) science fiction film?"
)
```
```
query='science fiction' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=Nonecomparator=<Comparator.GT: 'gt'> attribute='rating' value=8.5
```
```
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'id': 1183189148854, 'text': 'A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', 'score': 0.19805312, 'year': {'value': 2006}, 'director': 'Satoshi Kon', 'rating': {'value': 8.6}}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'id': 1183189220159, 'text': 'Three men walk into the Zone, three men walk out of the Zone', 'score': 0.225586, 'year': {'value': 1979}, 'director': 'Andrei Tarkovsky', 'rating': {'value': 9.9}, 'genre': '"science fiction", "thriller"'})]
```
```
# This example specifies a query and composite filter
retriever.get_relevant_documents(
    "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
```
```
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005)]), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) limit=Noneoperator=<Operator.AND: 'and'> arguments=[Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005)]), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]
```
```
[Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 1183189196391, 'text': 'Toys come alive and have a blast doing so', 'score': 0.133829, 'year': {'value': 1995}, 'genre': 'animated'})]
```
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self query retriever to specify `k`: the number of documents to fetch.
We can do this by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
```
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are two movies about dinosaurs")
```
```
query='dinosaurs' filter=None limit=2
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'id': 1183188982475, 'text': 'A bunch of scientists bring back dinosaurs and mayhem breaks loose', 'score': 0.13394928, 'year': {'value': 1993}, 'rating': {'value': 7.7}, 'genre': '"action", "science fiction"'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 1183189196391, 'text': 'Toys come alive and have a blast doing so', 'score': 0.1899159, 'year': {'value': 1995}, 'genre': 'animated'})]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:20.200Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/dingo/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/dingo/",
"description": "DingoDB is a distributed",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"dingo\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:19 GMT",
"etag": "W/\"c07d6ecaa33e764a3a60e553f6be1105\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::cz6f4-1713753739854-ee705162e9eb"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/self_query/dingo/",
"property": "og:url"
},
{
"content": "DingoDB | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "DingoDB is a distributed",
"property": "og:description"
}
],
"title": "DingoDB | 🦜️🔗 LangChain"
} | DingoDB
DingoDB is a distributed multi-mode vector database, which combines the characteristics of data lakes and vector databases, and can store data of any type and size (Key-Value, PDF, audio, video, etc.). It has real-time low-latency processing capabilities to achieve rapid insight and response, and can efficiently conduct instant analysis and process multi-modal data.
In the walkthrough, we’ll demo the SelfQueryRetriever with a DingoDB vector store.
Creating a DingoDB index
First we’ll want to create a DingoDB vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
To use DingoDB, you should have a DingoDB instance up and running.
Note: The self-query retriever requires you to have the lark package installed.
%pip install --upgrade --quiet dingodb
# or install latest:
%pip install --upgrade --quiet git+https://git@github.com/dingodb/pydingo.git
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
OPENAI_API_KEY = ""
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
from langchain.schema import Document
from langchain_community.vectorstores import Dingo
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
# create new index
from dingodb import DingoDB
index_name = "langchain_demo"
dingo_client = DingoDB(user="", password="", host=["172.30.14.221:13000"])
# First, check if our index already exists. If it doesn't, we create it
if (
index_name not in dingo_client.get_index()
and index_name.upper() not in dingo_client.get_index()
):
# we create a new index, modify to your own
dingo_client.create_index(
index_name=index_name, dimension=1536, metric_type="cosine", auto_id=False
)
docs = [
Document(
page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata={"year": 1993, "rating": 7.7, "genre": '"action", "science fiction"'},
),
Document(
page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
),
Document(
page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
),
Document(
page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
),
Document(
page_content="Toys come alive and have a blast doing so",
metadata={"year": 1995, "genre": "animated"},
),
Document(
page_content="Three men walk into the Zone, three men walk out of the Zone",
metadata={
"year": 1979,
"director": "Andrei Tarkovsky",
"genre": '"science fiction", "thriller"',
"rating": 9.9,
},
),
]
vectorstore = Dingo.from_documents(
docs, embeddings, index_name=index_name, client=dingo_client
)
dingo_client.get_index()
dingo_client.delete_index("langchain_demo")
dingo_client.vector_count("langchain_demo")
Creating our self-querying retriever
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI
metadata_field_info = [
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating", description="A 1-10 rating for the movie", type="float"
),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
Testing it out
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
query='dinosaurs' filter=None limit=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'id': 1183188982475, 'text': 'A bunch of scientists bring back dinosaurs and mayhem breaks loose', 'score': 0.13397777, 'year': {'value': 1993}, 'rating': {'value': 7.7}, 'genre': '"action", "science fiction"'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 1183189196391, 'text': 'Toys come alive and have a blast doing so', 'score': 0.18994397, 'year': {'value': 1995}, 'genre': 'animated'}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'id': 1183189220159, 'text': 'Three men walk into the Zone, three men walk out of the Zone', 'score': 0.23288351, 'year': {'value': 1979}, 'director': 'Andrei Tarkovsky', 'rating': {'value': 9.9}, 'genre': '"science fiction", "thriller"'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'id': 1183189148854, 'text': 'A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', 'score': 0.24421334, 'year': {'value': 2006}, 'director': 'Satoshi Kon', 'rating': {'value': 8.6}})]
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None
comparator=<Comparator.GT: 'gt'> attribute='rating' value=8.5
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'id': 1183189220159, 'text': 'Three men walk into the Zone, three men walk out of the Zone', 'score': 0.25033575, 'year': {'value': 1979}, 'director': 'Andrei Tarkovsky', 'genre': '"science fiction", "thriller"', 'rating': {'value': 9.9}}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'id': 1183189148854, 'text': 'A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', 'score': 0.26431882, 'year': {'value': 2006}, 'director': 'Satoshi Kon', 'rating': {'value': 8.6}})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
comparator=<Comparator.EQ: 'eq'> attribute='director' value='Greta Gerwig'
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'id': 1183189172623, 'text': 'A bunch of normal-sized women are supremely wholesome and some men pine after them', 'score': 0.19482517, 'year': {'value': 2019}, 'director': 'Greta Gerwig', 'rating': {'value': 8.3}})]
# This example specifies a composite filter
retriever.get_relevant_documents(
"What's a highly rated (above 8.5) science fiction film?"
)
query='science fiction' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None
comparator=<Comparator.GT: 'gt'> attribute='rating' value=8.5
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'id': 1183189148854, 'text': 'A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', 'score': 0.19805312, 'year': {'value': 2006}, 'director': 'Satoshi Kon', 'rating': {'value': 8.6}}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'id': 1183189220159, 'text': 'Three men walk into the Zone, three men walk out of the Zone', 'score': 0.225586, 'year': {'value': 1979}, 'director': 'Andrei Tarkovsky', 'rating': {'value': 9.9}, 'genre': '"science fiction", "thriller"'})]
# This example specifies a query and composite filter
retriever.get_relevant_documents(
"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005)]), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) limit=None
operator=<Operator.AND: 'and'> arguments=[Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005)]), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]
[Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 1183189196391, 'text': 'Toys come alive and have a blast doing so', 'score': 0.133829, 'year': {'value': 1995}, 'genre': 'animated'})]
Filter k
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True,
)
# This example only specifies a relevant query
retriever.get_relevant_documents("What are two movies about dinosaurs")
query='dinosaurs' filter=None limit=2
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'id': 1183188982475, 'text': 'A bunch of scientists bring back dinosaurs and mayhem breaks loose', 'score': 0.13394928, 'year': {'value': 1993}, 'rating': {'value': 7.7}, 'genre': '"action", "science fiction"'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 1183189196391, 'text': 'Toys come alive and have a blast doing so', 'score': 0.1899159, 'year': {'value': 1995}, 'genre': 'animated'})] |
https://python.langchain.com/docs/integrations/retrievers/self_query/elasticsearch_self_query/ | ## Elasticsearch
> [Elasticsearch](https://www.elastic.co/elasticsearch/) is a distributed, RESTful search and analytics engine. It provides a distributed, multi-tenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
In this notebook, we’ll demo the `SelfQueryRetriever` with an `Elasticsearch` vector store.
## Creating an Elasticsearch vector store[](#creating-an-elasticsearch-vector-store "Direct link to Creating an Elasticsearch vector store")
First, we’ll want to create an `Elasticsearch` vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
**Note:** The self-query retriever requires you to have `lark` installed (`pip install lark`). We also need the `elasticsearch` package.
```
%pip install --upgrade --quiet lark langchain langchain-elasticsearch
```
```
WARNING: You are using pip version 22.0.4; however, version 23.3 is available.
You should consider upgrading via the '/Users/joe/projects/elastic/langchain/libs/langchain/.venv/bin/python3 -m pip install --upgrade pip' command.
```
```
import getpass
import os
from langchain_core.documents import Document
from langchain_elasticsearch import ElasticsearchStore
from langchain_openai import OpenAIEmbeddings

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
embeddings = OpenAIEmbeddings()
```
```
docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={ "year": 1979, "director": "Andrei Tarkovsky", "genre": "science fiction", "rating": 9.9, }, ),]vectorstore = ElasticsearchStore.from_documents( docs, embeddings, index_name="elasticsearch-self-query-demo", es_url="http://localhost:9200",)
```
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```
## Testing it out[](#testing-it-out "Direct link to Testing it out")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]
```
```
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
```
```
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]
```
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self query retriever to specify `k`: the number of documents to fetch.
We can do this by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True,)
```
```
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
```
## Complex queries in Action![](#complex-queries-in-action "Direct link to Complex queries in Action!")
We’ve tried out some simple queries, but what about more complex ones? Let’s try out a few more complex queries that utilize the full power of Elasticsearch.
```
retriever.get_relevant_documents( "what animated or comedy movies have been released in the last 30 years about animated toys?")
```
```
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
```
```
vectorstore.client.indices.delete(index="elasticsearch-self-query-demo")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:21.009Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/elasticsearch_self_query/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/elasticsearch_self_query/",
"description": "Elasticsearch is a",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "7871",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"elasticsearch_self_query\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:20 GMT",
"etag": "W/\"0c47b6cdc5fe81a30c80157f4bfe7dde\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::4vch7-1713753740728-7babdd13dbc4"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/self_query/elasticsearch_self_query/",
"property": "og:url"
},
{
"content": "Elasticsearch | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Elasticsearch is a",
"property": "og:description"
}
],
"title": "Elasticsearch | 🦜️🔗 LangChain"
} | Elasticsearch
Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multi-tenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
In this notebook, we’ll demo the SelfQueryRetriever with an Elasticsearch vector store.
Creating an Elasticsearch vector store
First, we’ll want to create an Elasticsearch vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
Note: The self-query retriever requires you to have lark installed (pip install lark). We also need the elasticsearch package.
%pip install --upgrade --quiet lark langchain langchain-elasticsearch
WARNING: You are using pip version 22.0.4; however, version 23.3 is available.
You should consider upgrading via the '/Users/joe/projects/elastic/langchain/libs/langchain/.venv/bin/python3 -m pip install --upgrade pip' command.
import getpass
import os
from langchain_core.documents import Document
from langchain_elasticsearch import ElasticsearchStore
from langchain_openai import OpenAIEmbeddings
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
embeddings = OpenAIEmbeddings()
docs = [
Document(
page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
),
Document(
page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
),
Document(
page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
),
Document(
page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
),
Document(
page_content="Toys come alive and have a blast doing so",
metadata={"year": 1995, "genre": "animated"},
),
Document(
page_content="Three men walk into the Zone, three men walk out of the Zone",
metadata={
"year": 1979,
"director": "Andrei Tarkovsky",
"genre": "science fiction",
"rating": 9.9,
},
),
]
vectorstore = ElasticsearchStore.from_documents(
docs,
embeddings,
index_name="elasticsearch-self-query-demo",
es_url="http://localhost:9200",
)
Creating our self-querying retriever
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI
metadata_field_info = [
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating", description="A 1-10 rating for the movie", type="float"
),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
Testing it out
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]
Filter k
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True,
)
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
Complex queries in Action!
We’ve tried out some simple queries, but what about more complex ones? Let’s try out a few more complex queries that utilize the full power of Elasticsearch.
retriever.get_relevant_documents(
"what animated or comedy movies have been released in the last 30 years about animated toys?"
)
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
vectorstore.client.indices.delete(index="elasticsearch-self-query-demo") |
https://python.langchain.com/docs/integrations/providers/writer/ | This page covers how to use the Writer ecosystem within LangChain. It is broken into two parts: installation and setup, and then references to specific Writer wrappers.
```
from langchain_community.llms import Writer
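# Illustrative usage sketch (added; not part of the original page). It assumes the
# wrapper reads credentials from the WRITER_API_KEY and WRITER_ORG_ID environment
# variables; if that assumption does not hold for your version, pass the credentials
# to the constructor explicitly.
llm = Writer()
print(llm.invoke("Write a one-sentence tagline for a note-taking app."))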
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:21.534Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/writer/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/writer/",
"description": "This page covers how to use the Writer ecosystem within LangChain.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3592",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"writer\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:21 GMT",
"etag": "W/\"99493078d7c04a5865c2d05a6e438ad2\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::8tjzq-1713753741452-c1784dd9570c"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/writer/",
"property": "og:url"
},
{
"content": "Writer | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This page covers how to use the Writer ecosystem within LangChain.",
"property": "og:description"
}
],
"title": "Writer | 🦜️🔗 LangChain"
} | This page covers how to use the Writer ecosystem within LangChain. It is broken into two parts: installation and setup, and then references to specific Writer wrappers.
from langchain_community.llms import Writer |
https://python.langchain.com/docs/integrations/retrievers/self_query/milvus_self_query/ | ## Milvus
> [Milvus](https://milvus.io/docs/overview.md) is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.
In the walkthrough, we’ll demo the `SelfQueryRetriever` with a `Milvus` vector store.
## Creating a Milvus vectorstore[](#creating-a-milvus-vectorstore "Direct link to Creating a Milvus vectorstore")
First we’ll want to create a Milvus VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
I have used the cloud version of Milvus, thus I need `uri` and `token` as well.
NOTE: The self-query retriever requires you to have `lark` installed (`pip install lark`). We also need the `pymilvus` package.
```
%pip install --upgrade --quiet lark
```
```
%pip install --upgrade --quiet pymilvus
```
We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key.
```
import os
OPENAI_API_KEY = "Use your OpenAI key:)"
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
```
```
from langchain_community.vectorstores import Milvus
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
```
```
docs = [ Document( page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "action"}, ), Document( page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "genre": "thriller", "rating": 8.2}, ), Document( page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "rating": 8.3, "genre": "drama"}, ), Document( page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={"year": 1979, "rating": 9.9, "genre": "science fiction"}, ), Document( page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "genre": "thriller", "rating": 9.0}, ), Document( page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated", "rating": 9.3}, ),]vector_store = Milvus.from_documents( docs, embedding=embeddings, connection_args={"uri": "Use your uri:)", "token": "Use your token:)"},)
```
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vector_store, document_content_description, metadata_field_info, verbose=True
)
```
## Testing it out[](#testing-it-out "Direct link to Testing it out")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
```
query='dinosaur' filter=None limit=None
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'action'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'rating': 9.3, 'genre': 'animated'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'genre': 'science fiction'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'rating': 9.0, 'genre': 'thriller'})]
```
```
# This example specifies a filter
retriever.get_relevant_documents("What are some highly rated movies (above 9)?")
```
```
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=9) limit=None
```
```
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'rating': 9.3, 'genre': 'animated'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'genre': 'science fiction'})]
```
```
# This example only specifies a query and a filter
retriever.get_relevant_documents(
    "I want to watch a movie about toys rated higher than 9"
)
```
```
query='toys' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=9) limit=None
```
```
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'rating': 9.3, 'genre': 'animated'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'genre': 'science fiction'})]
```
```
# This example specifies a composite filter
retriever.get_relevant_documents(
    "What's a highly rated (above or equal 9) thriller film?"
)
```
```
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='thriller'), Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=9)]) limit=None
```
```
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'rating': 9.0, 'genre': 'thriller'})]
```
```
# This example specifies a query and composite filter
retriever.get_relevant_documents(
    "What's a movie after 1990 but before 2005 that's all about dinosaurs, \
    and preferably has a lot of action"
)
```
```
query='dinosaur' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='action')]) limit=None
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'action'})]
```
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self query retriever to specify `k`: the number of documents to fetch.
We can do this by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm( llm, vector_store, document_content_description, metadata_field_info, verbose=True, enable_limit=True,)
```
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are two movies about dinosaurs?")
```
```
query='dinosaur' filter=None limit=2
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'action'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'rating': 9.3, 'genre': 'animated'})]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:22.434Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/milvus_self_query/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/milvus_self_query/",
"description": "Milvus is a database that",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3588",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"milvus_self_query\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:22 GMT",
"etag": "W/\"2740fe8efbeaabb7e41d3a420ee8f61c\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::f5j7h-1713753741999-3dcbf9180e4c"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/self_query/milvus_self_query/",
"property": "og:url"
},
{
"content": "Milvus | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Milvus is a database that",
"property": "og:description"
}
],
"title": "Milvus | 🦜️🔗 LangChain"
} | Milvus
Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.
In the walkthrough, we’ll demo the SelfQueryRetriever with a Milvus vector store.
Creating a Milvus vectorstore
First we’ll want to create a Milvus VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
I have used the cloud version of Milvus, thus I need uri and token as well.
NOTE: The self-query retriever requires you to have lark installed (pip install lark). We also need the pymilvus package.
%pip install --upgrade --quiet lark
%pip install --upgrade --quiet pymilvus
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
OPENAI_API_KEY = "Use your OpenAI key:)"
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
from langchain_community.vectorstores import Milvus
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
docs = [
Document(
page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata={"year": 1993, "rating": 7.7, "genre": "action"},
),
Document(
page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata={"year": 2010, "genre": "thriller", "rating": 8.2},
),
Document(
page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata={"year": 2019, "rating": 8.3, "genre": "drama"},
),
Document(
page_content="Three men walk into the Zone, three men walk out of the Zone",
metadata={"year": 1979, "rating": 9.9, "genre": "science fiction"},
),
Document(
page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata={"year": 2006, "genre": "thriller", "rating": 9.0},
),
Document(
page_content="Toys come alive and have a blast doing so",
metadata={"year": 1995, "genre": "animated", "rating": 9.3},
),
]
vector_store = Milvus.from_documents(
docs,
embedding=embeddings,
connection_args={"uri": "Use your uri:)", "token": "Use your token:)"},
)
Creating our self-querying retriever
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI
metadata_field_info = [
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="rating", description="A 1-10 rating for the movie", type="float"
),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm, vector_store, document_content_description, metadata_field_info, verbose=True
)
Testing it out
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
query='dinosaur' filter=None limit=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'action'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'rating': 9.3, 'genre': 'animated'}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'genre': 'science fiction'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'rating': 9.0, 'genre': 'thriller'})]
# This example specifies a filter
retriever.get_relevant_documents("What are some highly rated movies (above 9)?")
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=9) limit=None
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'rating': 9.3, 'genre': 'animated'}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'genre': 'science fiction'})]
# This example only specifies a query and a filter
retriever.get_relevant_documents(
"I want to watch a movie about toys rated higher than 9"
)
query='toys' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=9) limit=None
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'rating': 9.3, 'genre': 'animated'}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'genre': 'science fiction'})]
# This example specifies a composite filter
retriever.get_relevant_documents(
"What's a highly rated (above or equal 9) thriller film?"
)
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='thriller'), Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=9)]) limit=None
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'rating': 9.0, 'genre': 'thriller'})]
# This example specifies a query and composite filter
retriever.get_relevant_documents(
"What's a movie after 1990 but before 2005 that's all about dinosaurs, \
and preferably has a lot of action"
)
query='dinosaur' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='action')]) limit=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'action'})]
Filter k
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vector_store,
document_content_description,
metadata_field_info,
verbose=True,
enable_limit=True,
)
# This example only specifies a relevant query
retriever.get_relevant_documents("What are two movies about dinosaurs?")
query='dinosaur' filter=None limit=2
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'action'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'rating': 9.3, 'genre': 'animated'})] |
https://python.langchain.com/docs/integrations/retrievers/qdrant-sparse/ | ## Qdrant Sparse Vector
> [Qdrant](https://qdrant.tech/) is an open-source, high-performance vector search engine/database.
> `QdrantSparseVectorRetriever` uses [sparse vectors](https://qdrant.tech/articles/sparse-vectors/) introduced in `Qdrant` [v1.7.0](https://qdrant.tech/articles/qdrant-1.7.x/) for document retrieval.
Install the ‘qdrant\_client’ package:
```
%pip install --upgrade --quiet qdrant_client
```
```
from qdrant_client import QdrantClient, models

client = QdrantClient(location=":memory:")
collection_name = "sparse_collection"
vector_name = "sparse_vector"
client.create_collection(
    collection_name,
    vectors_config={},
    sparse_vectors_config={
        vector_name: models.SparseVectorParams(
            index=models.SparseIndexParams(
                on_disk=False,
            )
        )
    },
)
```
```
from langchain_community.retrievers import (
    QdrantSparseVectorRetriever,
)
from langchain_core.documents import Document
```
Create a demo encoder function:
```
import random


def demo_encoder(_: str) -> tuple[list[int], list[float]]:
    return (
        sorted(random.sample(range(100), 100)),
        [random.uniform(0.1, 1.0) for _ in range(100)],
    )


# Create a retriever with a demo encoder
retriever = QdrantSparseVectorRetriever(
    client=client,
    collection_name=collection_name,
    sparse_vector_name=vector_name,
    sparse_encoder=demo_encoder,
)
```
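To make the encoder contract explicit (an illustrative addition, not part of the original notebook): the sparse encoder returns two parallel lists, the non-zero dimension indices and their weights, which Qdrant stores as a sparse vector.

```
# The demo encoder above ignores its input and emits 100 random (index, weight) pairs;
# a real encoder would return only the non-zero dimensions of the query or document.
indices, values = demo_encoder("any text")
print(len(indices), len(values))
```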
Add some documents:
```
docs = [ Document( metadata={ "title": "Beyond Horizons: AI Chronicles", "author": "Dr. Cassandra Mitchell", }, page_content="An in-depth exploration of the fascinating journey of artificial intelligence, narrated by Dr. Mitchell. This captivating account spans the historical roots, current advancements, and speculative futures of AI, offering a gripping narrative that intertwines technology, ethics, and societal implications.", ), Document( metadata={ "title": "Synergy Nexus: Merging Minds with Machines", "author": "Prof. Benjamin S. Anderson", }, page_content="Professor Anderson delves into the synergistic possibilities of human-machine collaboration in 'Synergy Nexus.' The book articulates a vision where humans and AI seamlessly coalesce, creating new dimensions of productivity, creativity, and shared intelligence.", ), Document( metadata={ "title": "AI Dilemmas: Navigating the Unknown", "author": "Dr. Elena Rodriguez", }, page_content="Dr. Rodriguez pens an intriguing narrative in 'AI Dilemmas,' probing the uncharted territories of ethical quandaries arising from AI advancements. The book serves as a compass, guiding readers through the complex terrain of moral decisions confronting developers, policymakers, and society as AI evolves.", ), Document( metadata={ "title": "Sentient Threads: Weaving AI Consciousness", "author": "Prof. Alexander J. Bennett", }, page_content="In 'Sentient Threads,' Professor Bennett unravels the enigma of AI consciousness, presenting a tapestry of arguments that scrutinize the very essence of machine sentience. The book ignites contemplation on the ethical and philosophical dimensions surrounding the quest for true AI awareness.", ), Document( metadata={ "title": "Silent Alchemy: Unseen AI Alleviations", "author": "Dr. Emily Foster", }, page_content="Building upon her previous work, Dr. Foster unveils 'Silent Alchemy,' a profound examination of the covert presence of AI in our daily lives. This illuminating piece reveals the subtle yet impactful ways in which AI invisibly shapes our routines, emphasizing the need for heightened awareness in our technology-driven world.", ),]
```
Perform a retrieval:
```
retriever.add_documents(docs)
```
```
['1a3e0d292e6444d39451d0588ce746dc', '19b180dd31e749359d49967e5d5dcab7', '8de69e56086f47748e32c9e379e6865b', 'f528fac385954e46b89cf8607bf0ee5a', 'c1a6249d005d4abd9192b1d0b829cebe']
```
```
retriever.get_relevant_documents( "Life and ethical dilemmas of AI",)
```
```
[Document(page_content="In 'Sentient Threads,' Professor Bennett unravels the enigma of AI consciousness, presenting a tapestry of arguments that scrutinize the very essence of machine sentience. The book ignites contemplation on the ethical and philosophical dimensions surrounding the quest for true AI awareness.", metadata={'title': 'Sentient Threads: Weaving AI Consciousness', 'author': 'Prof. Alexander J. Bennett'}), Document(page_content="Dr. Rodriguez pens an intriguing narrative in 'AI Dilemmas,' probing the uncharted territories of ethical quandaries arising from AI advancements. The book serves as a compass, guiding readers through the complex terrain of moral decisions confronting developers, policymakers, and society as AI evolves.", metadata={'title': 'AI Dilemmas: Navigating the Unknown', 'author': 'Dr. Elena Rodriguez'}), Document(page_content="Professor Anderson delves into the synergistic possibilities of human-machine collaboration in 'Synergy Nexus.' The book articulates a vision where humans and AI seamlessly coalesce, creating new dimensions of productivity, creativity, and shared intelligence.", metadata={'title': 'Synergy Nexus: Merging Minds with Machines', 'author': 'Prof. Benjamin S. Anderson'}), Document(page_content='An in-depth exploration of the fascinating journey of artificial intelligence, narrated by Dr. Mitchell. This captivating account spans the historical roots, current advancements, and speculative futures of AI, offering a gripping narrative that intertwines technology, ethics, and societal implications.', metadata={'title': 'Beyond Horizons: AI Chronicles', 'author': 'Dr. Cassandra Mitchell'})]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:23.446Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/qdrant-sparse/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/qdrant-sparse/",
"description": "Qdrant is an open-source, high-performance",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"qdrant-sparse\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:23 GMT",
"etag": "W/\"9ac2237c1fbe6c7cad55d94bd79bb797\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::2q6t7-1713753743265-0fb495d126e1"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/qdrant-sparse/",
"property": "og:url"
},
{
"content": "Qdrant Sparse Vector | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Qdrant is an open-source, high-performance",
"property": "og:description"
}
],
"title": "Qdrant Sparse Vector | 🦜️🔗 LangChain"
} | Qdrant Sparse Vector
Qdrant is an open-source, high-performance vector search engine/database.
QdrantSparseVectorRetriever uses sparse vectors introduced in Qdrant v1.7.0 for document retrieval.
Install the ‘qdrant_client’ package:
%pip install --upgrade --quiet qdrant_client
from qdrant_client import QdrantClient, models
client = QdrantClient(location=":memory:")
collection_name = "sparse_collection"
vector_name = "sparse_vector"
client.create_collection(
collection_name,
vectors_config={},
sparse_vectors_config={
vector_name: models.SparseVectorParams(
index=models.SparseIndexParams(
on_disk=False,
)
)
},
)
from langchain_community.retrievers import (
QdrantSparseVectorRetriever,
)
from langchain_core.documents import Document
Create a demo encoder function:
import random
def demo_encoder(_: str) -> tuple[list[int], list[float]]:
return (
sorted(random.sample(range(100), 100)),
[random.uniform(0.1, 1.0) for _ in range(100)],
)
# Create a retriever with a demo encoder
retriever = QdrantSparseVectorRetriever(
client=client,
collection_name=collection_name,
sparse_vector_name=vector_name,
sparse_encoder=demo_encoder,
)
Add some documents:
docs = [
Document(
metadata={
"title": "Beyond Horizons: AI Chronicles",
"author": "Dr. Cassandra Mitchell",
},
page_content="An in-depth exploration of the fascinating journey of artificial intelligence, narrated by Dr. Mitchell. This captivating account spans the historical roots, current advancements, and speculative futures of AI, offering a gripping narrative that intertwines technology, ethics, and societal implications.",
),
Document(
metadata={
"title": "Synergy Nexus: Merging Minds with Machines",
"author": "Prof. Benjamin S. Anderson",
},
page_content="Professor Anderson delves into the synergistic possibilities of human-machine collaboration in 'Synergy Nexus.' The book articulates a vision where humans and AI seamlessly coalesce, creating new dimensions of productivity, creativity, and shared intelligence.",
),
Document(
metadata={
"title": "AI Dilemmas: Navigating the Unknown",
"author": "Dr. Elena Rodriguez",
},
page_content="Dr. Rodriguez pens an intriguing narrative in 'AI Dilemmas,' probing the uncharted territories of ethical quandaries arising from AI advancements. The book serves as a compass, guiding readers through the complex terrain of moral decisions confronting developers, policymakers, and society as AI evolves.",
),
Document(
metadata={
"title": "Sentient Threads: Weaving AI Consciousness",
"author": "Prof. Alexander J. Bennett",
},
page_content="In 'Sentient Threads,' Professor Bennett unravels the enigma of AI consciousness, presenting a tapestry of arguments that scrutinize the very essence of machine sentience. The book ignites contemplation on the ethical and philosophical dimensions surrounding the quest for true AI awareness.",
),
Document(
metadata={
"title": "Silent Alchemy: Unseen AI Alleviations",
"author": "Dr. Emily Foster",
},
page_content="Building upon her previous work, Dr. Foster unveils 'Silent Alchemy,' a profound examination of the covert presence of AI in our daily lives. This illuminating piece reveals the subtle yet impactful ways in which AI invisibly shapes our routines, emphasizing the need for heightened awareness in our technology-driven world.",
),
]
Perform a retrieval:
retriever.add_documents(docs)
['1a3e0d292e6444d39451d0588ce746dc',
'19b180dd31e749359d49967e5d5dcab7',
'8de69e56086f47748e32c9e379e6865b',
'f528fac385954e46b89cf8607bf0ee5a',
'c1a6249d005d4abd9192b1d0b829cebe']
retriever.get_relevant_documents(
"Life and ethical dilemmas of AI",
)
[Document(page_content="In 'Sentient Threads,' Professor Bennett unravels the enigma of AI consciousness, presenting a tapestry of arguments that scrutinize the very essence of machine sentience. The book ignites contemplation on the ethical and philosophical dimensions surrounding the quest for true AI awareness.", metadata={'title': 'Sentient Threads: Weaving AI Consciousness', 'author': 'Prof. Alexander J. Bennett'}),
Document(page_content="Dr. Rodriguez pens an intriguing narrative in 'AI Dilemmas,' probing the uncharted territories of ethical quandaries arising from AI advancements. The book serves as a compass, guiding readers through the complex terrain of moral decisions confronting developers, policymakers, and society as AI evolves.", metadata={'title': 'AI Dilemmas: Navigating the Unknown', 'author': 'Dr. Elena Rodriguez'}),
Document(page_content="Professor Anderson delves into the synergistic possibilities of human-machine collaboration in 'Synergy Nexus.' The book articulates a vision where humans and AI seamlessly coalesce, creating new dimensions of productivity, creativity, and shared intelligence.", metadata={'title': 'Synergy Nexus: Merging Minds with Machines', 'author': 'Prof. Benjamin S. Anderson'}),
Document(page_content='An in-depth exploration of the fascinating journey of artificial intelligence, narrated by Dr. Mitchell. This captivating account spans the historical roots, current advancements, and speculative futures of AI, offering a gripping narrative that intertwines technology, ethics, and societal implications.', metadata={'title': 'Beyond Horizons: AI Chronicles', 'author': 'Dr. Cassandra Mitchell'})] |
https://python.langchain.com/docs/integrations/providers/xinference/ | ## Xorbits Inference (Xinference)
This page demonstrates how to use [Xinference](https://github.com/xorbitsai/inference) with LangChain.
`Xinference` is a powerful and versatile library designed to serve LLMs, speech recognition models, and multimodal models, even on your laptop. With Xorbits Inference, you can effortlessly deploy and serve your own models or state-of-the-art built-in models using just a single command.
## Installation and Setup[](#installation-and-setup "Direct link to Installation and Setup")
Xinference can be installed via pip from PyPI:
```
pip install "xinference[all]"
```
## LLM[](#llm "Direct link to LLM")
Xinference supports various models compatible with GGML, including chatglm, baichuan, whisper, vicuna, and orca. To view the builtin models, run the command:
### Wrapper for Xinference[](#wrapper-for-xinference "Direct link to Wrapper for Xinference")
You can start a local instance of Xinference by running:
You can also deploy Xinference in a distributed cluster. To do so, first start an Xinference supervisor on the server you want to run it:
```
xinference-supervisor -H "${supervisor_host}"
```
Then, start the Xinference workers on each of the other servers where you want to run them:
```
xinference-worker -e "http://${supervisor_host}:9997"
```
You can also start a local instance of Xinference by running:
Once Xinference is running, an endpoint will be accessible for model management via CLI or Xinference client.
For local deployment, the endpoint will be http://localhost:9997.
For cluster deployment, the endpoint will be http://${supervisor\_host}:9997.
Then, you need to launch a model. You can specify the model name and other attributes, including model\_size\_in\_billions and quantization, using the command-line interface (CLI). For example,
```
xinference launch -n orca -s 3 -q q4_0
```
A model uid will be returned.
Example usage:
```
from langchain_community.llms import Xinference

llm = Xinference(
    server_url="http://0.0.0.0:9997",
    model_uid = {model_uid}  # replace model_uid with the model UID returned from launching the model
)
llm(
    prompt="Q: where can we visit in the capital of France? A:",
    generate_config={"max_tokens": 1024, "stream": True},
)
```
### Usage[](#usage "Direct link to Usage")
For more information and detailed examples, refer to the [example for xinference LLMs](https://python.langchain.com/docs/integrations/llms/xinference/)
### Embeddings[](#embeddings "Direct link to Embeddings")
Xinference also supports embedding queries and documents. See [example for xinference embeddings](https://python.langchain.com/docs/integrations/text_embedding/xinference/) for a more detailed demo. | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:24.043Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/xinference/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/xinference/",
"description": "This page demonstrates how to use Xinference",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3595",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"xinference\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:23 GMT",
"etag": "W/\"7a964d1bcff648a30edddae7af25d20d\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::8tt4g-1713753743973-ea4f5a2a5bf9"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/xinference/",
"property": "og:url"
},
{
"content": "Xorbits Inference (Xinference) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This page demonstrates how to use Xinference",
"property": "og:description"
}
],
"title": "Xorbits Inference (Xinference) | 🦜️🔗 LangChain"
} | Xorbits Inference (Xinference)
This page demonstrates how to use Xinference with LangChain.
Xinference is a powerful and versatile library designed to serve LLMs, speech recognition models, and multimodal models, even on your laptop. With Xorbits Inference, you can effortlessly deploy and serve your own models or state-of-the-art built-in models using just a single command.
Installation and Setup
Xinference can be installed via pip from PyPI:
pip install "xinference[all]"
LLM
Xinference supports various models compatible with GGML, including chatglm, baichuan, whisper, vicuna, and orca. To view the builtin models, run the command:
Wrapper for Xinference
You can start a local instance of Xinference by running:
You can also deploy Xinference in a distributed cluster. To do so, first start an Xinference supervisor on the server you want to run it:
xinference-supervisor -H "${supervisor_host}"
Then, start the Xinference workers on each of the other servers where you want to run them on:
xinference-worker -e "http://${supervisor_host}:9997"
You can also start a local instance of Xinference by running:
Once Xinference is running, an endpoint will be accessible for model management via CLI or Xinference client.
For local deployment, the endpoint will be http://localhost:9997.
For cluster deployment, the endpoint will be http://${supervisor_host}:9997.
Then, you need to launch a model. You can specify the model names and other attributes including model_size_in_billions and quantization. You can use command line interface (CLI) to do it. For example,
xinference launch -n orca -s 3 -q q4_0
A model uid will be returned.
Example usage:
from langchain_community.llms import Xinference
llm = Xinference(
server_url="http://0.0.0.0:9997",
model_uid = {model_uid} # replace model_uid with the model UID return from launching the model
)
llm(
prompt="Q: where can we visit in the capital of France? A:",
generate_config={"max_tokens": 1024, "stream": True},
)
Usage
For more information and detailed examples, refer to the example for xinference LLMs
Embeddings
Xinference also supports embedding queries and documents. See example for xinference embeddings for a more detailed demo. |
https://python.langchain.com/docs/integrations/providers/yandex/ | ## Yandex
All functionality related to Yandex Cloud
> [Yandex Cloud](https://cloud.yandex.com/en/) is a public cloud platform.
## Installation and Setup[](#installation-and-setup "Direct link to Installation and Setup")
Yandex Cloud SDK can be installed via pip from PyPI:
## LLMs[](#llms "Direct link to LLMs")
### YandexGPT[](#yandexgpt "Direct link to YandexGPT")
See a [usage example](https://python.langchain.com/docs/integrations/llms/yandex/).
```
from langchain_community.llms import YandexGPT
```
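A minimal usage sketch (added for illustration; it assumes credentials are supplied through the `YC_API_KEY` or `YC_IAM_TOKEN` and `YC_FOLDER_ID` environment variables, which the wrapper is expected to read — verify the exact names in the usage example linked above):

```
from langchain_community.llms import YandexGPT

llm = YandexGPT()  # credentials are assumed to come from the environment
print(llm.invoke("What is the capital of France?"))
```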
## Chat models[](#chat-models "Direct link to Chat models")
### YandexGPT[](#yandexgpt-1 "Direct link to YandexGPT")
See a [usage example](https://python.langchain.com/docs/integrations/chat/yandex/).
```
from langchain_community.chat_models import ChatYandexGPT
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:24.172Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/yandex/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/yandex/",
"description": "All functionality related to Yandex Cloud",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"yandex\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:24 GMT",
"etag": "W/\"f74b43b1a441a6a2ef5ccb145de91fdf\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::pgznm-1713753743930-07eebec1979b"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/yandex/",
"property": "og:url"
},
{
"content": "Yandex | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "All functionality related to Yandex Cloud",
"property": "og:description"
}
],
"title": "Yandex | 🦜️🔗 LangChain"
} | Yandex
All functionality related to Yandex Cloud
Yandex Cloud is a public cloud platform.
Installation and Setup
Yandex Cloud SDK can be installed via pip from PyPI:
LLMs
YandexGPT
See a usage example.
from langchain_community.llms import YandexGPT
Chat models
YandexGPT
See a usage example.
from langchain_community.chat_models import ChatYandexGPT |
https://python.langchain.com/docs/integrations/retrievers/self_query/myscale_self_query/ | ## MyScale
> [MyScale](https://docs.myscale.com/en/) is an integrated vector database. You can access your database in SQL and also from here, LangChain. `MyScale` can make use of [various data types and functions for filters](https://blog.myscale.com/2023/06/06/why-integrated-database-solution-can-boost-your-llm-apps/#filter-on-anything-without-constraints). It can boost your LLM app whether you are scaling up your data or expanding your system to broader applications.
In the notebook, we’ll demo the `SelfQueryRetriever` wrapped around a `MyScale` vector store with some extra pieces we contributed to LangChain.
In short, it can be condensed into 4 points: 1. Add a `contain` comparator that matches list-valued columns when any of their elements match 2. Add a `timestamp` data type for datetime matching (ISO format, or YYYY-MM-DD) 3. Add a `like` comparator for string pattern search 4. Add arbitrary function capability
## Creating a MyScale vector store[](#creating-a-myscale-vector-store "Direct link to Creating a MyScale vector store")
MyScale has already been integrated to LangChain for a while. So you can follow [this notebook](https://python.langchain.com/docs/integrations/vectorstores/myscale/) to create your own vectorstore for a self-query retriever.
**Note:** All self-query retrievers require you to have `lark` installed (`pip install lark`). We use `lark` for grammar definition. Before you proceed to the next step, we also want to remind you that `clickhouse-connect` is needed to interact with your MyScale backend.
```
%pip install --upgrade --quiet lark clickhouse-connect
```
In this tutorial we follow the other examples' setup and use `OpenAIEmbeddings`. Remember to get an OpenAI API key for valid access to LLMs.
```
import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
os.environ["MYSCALE_HOST"] = getpass.getpass("MyScale URL:")
os.environ["MYSCALE_PORT"] = getpass.getpass("MyScale Port:")
os.environ["MYSCALE_USERNAME"] = getpass.getpass("MyScale Username:")
os.environ["MYSCALE_PASSWORD"] = getpass.getpass("MyScale Password:")
```
```
from langchain_community.vectorstores import MyScale
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
```
## Create some sample data[](#create-some-sample-data "Direct link to Create some sample data")
As you can see, the data we created has some differences compared to other self-query retrievers. We replaced the keyword `year` with `date`, which gives you finer control over timestamps. We also changed the type of the keyword `genre` to a list of strings, where an LLM can use a new `contain` comparator to construct filters. We also provide the `like` comparator and arbitrary function support to filters, which will be introduced in the next few cells.
Now let’s look at the data first.
```
docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"date": "1993-07-02", "rating": 7.7, "genre": ["science fiction"]},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"date": "2010-12-30", "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"date": "2006-04-23", "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"date": "2019-08-22", "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"date": "1995-02-11", "genre": ["animated"]},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "date": "1979-09-10",
            "director": "Andrei Tarkovsky",
            "genre": ["science fiction", "adventure"],
            "rating": 9.9,
        },
    ),
]

vectorstore = MyScale.from_documents(
    docs,
    embeddings,
)
```
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Just like other retrievers… simple and nice.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genres of the movie",
        type="list[string]",
    ),
    # If you want to include the length of a list, just define it as a new column.
    # This will teach the LLM to use it as a column when constructing a filter.
    AttributeInfo(
        name="length(genre)",
        description="The length of genres of the movie",
        type="integer",
    ),
    # Now you can define a column as a timestamp by simply setting the type to timestamp.
    AttributeInfo(
        name="date",
        description="The date the movie was released",
        type="timestamp",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"

llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```
## Testing it out with self-query retriever’s existing functionalities[](#testing-it-out-with-self-query-retrievers-existing-functionalities "Direct link to Testing it out with self-query retriever’s existing functionalities")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
```
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
```
```
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
```
```
# This example specifies a composite filter
retriever.get_relevant_documents(
    "What's a highly rated (above 8.5) science fiction film?"
)
```
```
# This example specifies a query and composite filter
retriever.get_relevant_documents(
    "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
```
## Wait a second… what else?
Self-query retriever with MyScale can do more! Let’s find out.
```
# You can use length(genres) to do anything you want
retriever.get_relevant_documents("What's a movie that have more than 1 genres?")
```
```
# Fine-grained datetime? You got it already.
retriever.get_relevant_documents("What's a movie that release after feb 1995?")
```
```
# Don't know what your exact filter should be? Use string pattern match!
retriever.get_relevant_documents("What's a movie whose name is like Andrei?")
```
```
# Contain works for lists: so you can match a list with contain comparator!
retriever.get_relevant_documents(
    "What's a movie who has genres science fiction and adventure?"
)
```
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self query retriever to specify `k`: the number of documents to fetch.
We can do this by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
```
```
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:24.270Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/myscale_self_query/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/myscale_self_query/",
"description": "MyScale is an integrated vector",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3590",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"myscale_self_query\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:24 GMT",
"etag": "W/\"760f7995c653bd5cef4eebacb5a5dca1\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::8krzg-1713753744011-ea224bc8e986"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/self_query/myscale_self_query/",
"property": "og:url"
},
{
"content": "MyScale | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "MyScale is an integrated vector",
"property": "og:description"
}
],
"title": "MyScale | 🦜️🔗 LangChain"
} | MyScale
MyScale is an integrated vector database. You can access your database in SQL and also from here, in LangChain. MyScale can make use of various data types and functions for filters. It can boost your LLM app whether you are scaling up your data or expanding your system to broader applications.
In this notebook, we’ll demo the SelfQueryRetriever wrapped around a MyScale vector store with some extra pieces we contributed to LangChain.
In short, our contributions can be condensed into four points: 1. Added a contain comparator for matching against list-valued fields (including when more than one element matches) 2. Added a timestamp data type for datetime matching (ISO format, or YYYY-MM-DD) 3. Added a like comparator for string pattern search 4. Added arbitrary function capability
Creating a MyScale vector store
MyScale has been integrated into LangChain for a while, so you can follow this notebook to create your own vectorstore for a self-query retriever.
Note: All self-query retrievers require you to have lark installed (pip install lark). We use lark for grammar definition. Before you proceed to the next step, we also want to remind you that clickhouse-connect is needed to interact with your MyScale backend.
%pip install --upgrade --quiet lark clickhouse-connect
In this tutorial we follow the setup of the other examples and use OpenAIEmbeddings. Remember to get an OpenAI API key for valid access to LLMs.
import getpass
import os
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
os.environ["MYSCALE_HOST"] = getpass.getpass("MyScale URL:")
os.environ["MYSCALE_PORT"] = getpass.getpass("MyScale Port:")
os.environ["MYSCALE_USERNAME"] = getpass.getpass("MyScale Username:")
os.environ["MYSCALE_PASSWORD"] = getpass.getpass("MyScale Password:")
from langchain_community.vectorstores import MyScale
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
Create some sample data
As you can see, the data we created has some differences compared to other self-query retrievers. We replaced the keyword year with date, which gives you finer control over timestamps. We also changed the type of the keyword genre to a list of strings, where an LLM can use a new contain comparator to construct filters. We also provide the like comparator and arbitrary function support to filters, which will be introduced in the next few cells.
Now let’s look at the data first.
docs = [
Document(
page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata={"date": "1993-07-02", "rating": 7.7, "genre": ["science fiction"]},
),
Document(
page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata={"date": "2010-12-30", "director": "Christopher Nolan", "rating": 8.2},
),
Document(
page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata={"date": "2006-04-23", "director": "Satoshi Kon", "rating": 8.6},
),
Document(
page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata={"date": "2019-08-22", "director": "Greta Gerwig", "rating": 8.3},
),
Document(
page_content="Toys come alive and have a blast doing so",
metadata={"date": "1995-02-11", "genre": ["animated"]},
),
Document(
page_content="Three men walk into the Zone, three men walk out of the Zone",
metadata={
"date": "1979-09-10",
"director": "Andrei Tarkovsky",
"genre": ["science fiction", "adventure"],
"rating": 9.9,
},
),
]
vectorstore = MyScale.from_documents(
docs,
embeddings,
)
Creating our self-querying retriever
Just like other retrievers… simple and nice.
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI
metadata_field_info = [
AttributeInfo(
name="genre",
description="The genres of the movie",
type="list[string]",
),
# If you want to include the length of a list, just define it as a new column.
# This will teach the LLM to use it as a column when constructing a filter.
AttributeInfo(
name="length(genre)",
description="The length of genres of the movie",
type="integer",
),
# Now you can define a column as a timestamp by simply setting the type to timestamp.
AttributeInfo(
name="date",
description="The date the movie was released",
type="timestamp",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating", description="A 1-10 rating for the movie", type="float"
),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
Testing it out with self-query retriever’s existing functionalities
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
# This example specifies a composite filter
retriever.get_relevant_documents(
"What's a highly rated (above 8.5) science fiction film?"
)
# This example specifies a query and composite filter
retriever.get_relevant_documents(
"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
Wait a second… what else?
Self-query retriever with MyScale can do more! Let’s find out.
# You can use length(genres) to do anything you want
retriever.get_relevant_documents("What's a movie that have more than 1 genres?")
# Fine-grained datetime? You got it already.
retriever.get_relevant_documents("What's a movie that release after feb 1995?")
# Don't know what your exact filter should be? Use string pattern match!
retriever.get_relevant_documents("What's a movie whose name is like Andrei?")
# Contain works for lists: so you can match a list with contain comparator!
retriever.get_relevant_documents(
"What's a movie who has genres science fiction and adventure?"
)
Filter k
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True,
)
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs") |
https://python.langchain.com/docs/integrations/providers/xata/ | [Xata](https://xata.io/) is a serverless data platform, based on `PostgreSQL`. It provides a Python SDK for interacting with your database, and a UI for managing your data. `Xata` has a native vector type, which can be added to any table, and supports similarity search. LangChain inserts vectors directly into `Xata` and queries it for the nearest neighbors of a given vector, so that you can use all the LangChain Embeddings integrations with `Xata`.
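As a rough illustration of that flow, the sketch below seeds a table and runs a similarity search through LangChain once the `xata` package below is installed. It is a hedged example, not the official quickstart: the `XataVectorStore` class lives in `langchain_community`, but the `api_key`, `db_url`, and `table_name` parameter names and the placeholder values are assumptions to verify against the Xata vector store notebook.

```
from langchain_community.vectorstores.xata import XataVectorStore
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

# Placeholder credentials -- replace with your own Xata workspace values.
XATA_API_KEY = "xau_..."
XATA_DB_URL = "https://<workspace>.xata.sh/db/<database>"

docs = [Document(page_content="Xata pairs Postgres with a native vector type")]

# Seed a table that has a vector column, then query for nearest neighbors.
vector_store = XataVectorStore.from_documents(
    docs,
    OpenAIEmbeddings(),
    api_key=XATA_API_KEY,
    db_url=XATA_DB_URL,
    table_name="vectors",
)
vector_store.similarity_search("native vector type", k=1)
```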
We need to install the `xata` Python package.
```
pip install xata==1.0.0a7
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:24.920Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/xata/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/xata/",
"description": "Xata is a serverless data platform, based on PostgreSQL.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4642",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"xata\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:24 GMT",
"etag": "W/\"22aa69b760fe9708419d83bd2eb05a01\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::8bcxw-1713753744152-ab02f676beb4"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/xata/",
"property": "og:url"
},
{
"content": "Xata | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Xata is a serverless data platform, based on PostgreSQL.",
"property": "og:description"
}
],
"title": "Xata | 🦜️🔗 LangChain"
} | Xata is a serverless data platform, based on PostgreSQL. It provides a Python SDK for interacting with your database, and a UI for managing your data. Xata has a native vector type, which can be added to any table, and supports similarity search. LangChain inserts vectors directly into Xata and queries it for the nearest neighbors of a given vector, so that you can use all the LangChain Embeddings integrations with Xata.
We need to install the xata Python package.
pip install xata==1.0.0a7 |
https://python.langchain.com/docs/integrations/retrievers/self_query/mongodb_atlas/ | ## MongoDB Atlas
> [MongoDB Atlas](https://www.mongodb.com/) is a document database that can be used as a vector database.
In the walkthrough, we’ll demo the `SelfQueryRetriever` with a `MongoDB Atlas` vector store.
## Creating a MongoDB Atlas vectorstore[](#creating-a-mongodb-atlas-vectorstore "Direct link to Creating a MongoDB Atlas vectorstore")
First we’ll want to create a MongoDB Atlas VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
NOTE: The self-query retriever requires you to have `lark` installed (`pip install lark`). We also need the `pymongo` package.
```
%pip install --upgrade --quiet lark pymongo
```
We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key.
```
import os

OPENAI_API_KEY = "Use your OpenAI key"
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
```
```
from langchain_community.vectorstores import MongoDBAtlasVectorSearch
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from pymongo import MongoClient

CONNECTION_STRING = "Use your MongoDB Atlas connection string"
DB_NAME = "Name of your MongoDB Atlas database"
COLLECTION_NAME = "Name of your collection in the database"
INDEX_NAME = "Name of a search index defined on the collection"

# Avoid shadowing the MongoClient class with the client instance.
client = MongoClient(CONNECTION_STRING)
collection = client[DB_NAME][COLLECTION_NAME]

embeddings = OpenAIEmbeddings()
```
```
docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": "action"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "genre": "thriller", "rating": 8.2},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "rating": 8.3, "genre": "drama"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={"year": 1979, "rating": 9.9, "genre": "science fiction"},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "genre": "thriller", "rating": 9.0},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated", "rating": 9.3},
    ),
]

vectorstore = MongoDBAtlasVectorSearch.from_documents(
    docs,
    embeddings,
    collection=collection,
    index_name=INDEX_NAME,
)
```
Now, let’s create a vector search index on your cluster. In the below example, `embedding` is the name of the field that contains the embedding vector. Please refer to the [documentation](https://www.mongodb.com/docs/atlas/atlas-search/field-types/knn-vector) to get more details on how to define an Atlas Vector Search index. You can name the index `{COLLECTION_NAME}` and create the index on the namespace `{DB_NAME}.{COLLECTION_NAME}`. Finally, write the following definition in the JSON editor on MongoDB Atlas:
```
{ "mappings": { "dynamic": true, "fields": { "embedding": { "dimensions": 1536, "similarity": "cosine", "type": "knnVector" }, "genre": { "type": "token" }, "ratings": { "type": "number" }, "year": { "type": "number" } } }}
```
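If you would rather not use the Atlas UI, the sketch below applies the same definition from Python. Treat it as a hedged example: it assumes a recent `pymongo` release (4.5+) that exposes `create_search_index` and an Atlas tier that allows programmatic search-index management, so consult the driver documentation before relying on it.

```
# Hedged alternative to the JSON editor: apply the same index definition
# programmatically with pymongo's create_search_index (pymongo >= 4.5).
index_definition = {
    "mappings": {
        "dynamic": True,
        "fields": {
            "embedding": {
                "dimensions": 1536,
                "similarity": "cosine",
                "type": "knnVector",
            },
            "genre": {"type": "token"},
            "rating": {"type": "number"},
            "year": {"type": "number"},
        },
    }
}
collection.create_search_index({"definition": index_definition, "name": INDEX_NAME})
```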
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
```
```
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```
## Testing it out[](#testing-it-out "Direct link to Testing it out")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
```
# This example specifies a filter
retriever.get_relevant_documents("What are some highly rated movies (above 9)?")
```
```
# This example only specifies a query and a filter
retriever.get_relevant_documents(
    "I want to watch a movie about toys rated higher than 9"
)
```
```
# This example specifies a composite filter
retriever.get_relevant_documents(
    "What's a highly rated (above or equal 9) thriller film?"
)
```
```
# This example specifies a query and composite filter
retriever.get_relevant_documents(
    "What's a movie after 1990 but before 2005 that's all about dinosaurs, \
    and preferably has a lot of action"
)
```
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self query retriever to specify `k`: the number of documents to fetch.
We can do this by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    verbose=True,
    enable_limit=True,
)
```
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are two movies about dinosaurs?")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:25.005Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/mongodb_atlas/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/mongodb_atlas/",
"description": "MongoDB Atlas is a document database that",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4013",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"mongodb_atlas\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:24 GMT",
"etag": "W/\"d4881e6b141d10600cc9ab7e6f6cd949\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::2ndz7-1713753744627-3a30c43d76f3"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/self_query/mongodb_atlas/",
"property": "og:url"
},
{
"content": "MongoDB Atlas | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "MongoDB Atlas is a document database that",
"property": "og:description"
}
],
"title": "MongoDB Atlas | 🦜️🔗 LangChain"
} | MongoDB Atlas
MongoDB Atlas is a document database that can be used as a vector database.
In the walkthrough, we’ll demo the SelfQueryRetriever with a MongoDB Atlas vector store.
Creating a MongoDB Atlas vectorstore
First we’ll want to create a MongoDB Atlas VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
NOTE: The self-query retriever requires you to have lark installed (pip install lark). We also need the pymongo package.
%pip install --upgrade --quiet lark pymongo
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
OPENAI_API_KEY = "Use your OpenAI key"
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
from langchain_community.vectorstores import MongoDBAtlasVectorSearch
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from pymongo import MongoClient
CONNECTION_STRING = "Use your MongoDB Atlas connection string"
DB_NAME = "Name of your MongoDB Atlas database"
COLLECTION_NAME = "Name of your collection in the database"
INDEX_NAME = "Name of a search index defined on the collection"
client = MongoClient(CONNECTION_STRING)
collection = client[DB_NAME][COLLECTION_NAME]
embeddings = OpenAIEmbeddings()
docs = [
Document(
page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata={"year": 1993, "rating": 7.7, "genre": "action"},
),
Document(
page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata={"year": 2010, "genre": "thriller", "rating": 8.2},
),
Document(
page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata={"year": 2019, "rating": 8.3, "genre": "drama"},
),
Document(
page_content="Three men walk into the Zone, three men walk out of the Zone",
metadata={"year": 1979, "rating": 9.9, "genre": "science fiction"},
),
Document(
page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata={"year": 2006, "genre": "thriller", "rating": 9.0},
),
Document(
page_content="Toys come alive and have a blast doing so",
metadata={"year": 1995, "genre": "animated", "rating": 9.3},
),
]
vectorstore = MongoDBAtlasVectorSearch.from_documents(
docs,
embeddings,
collection=collection,
index_name=INDEX_NAME,
)
Now, let’s create a vector search index on your cluster. In the below example, embedding is the name of the field that contains the embedding vector. Please refer to the documentation to get more details on how to define an Atlas Vector Search index. You can name the index {COLLECTION_NAME} and create the index on the namespace {DB_NAME}.{COLLECTION_NAME}. Finally, write the following definition in the JSON editor on MongoDB Atlas:
{
"mappings": {
"dynamic": true,
"fields": {
"embedding": {
"dimensions": 1536,
"similarity": "cosine",
"type": "knnVector"
},
"genre": {
"type": "token"
},
"ratings": {
"type": "number"
},
"year": {
"type": "number"
}
}
}
}
Creating our self-querying retriever
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI
metadata_field_info = [
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="rating", description="A 1-10 rating for the movie", type="float"
),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
Testing it out
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
# This example specifies a filter
retriever.get_relevant_documents("What are some highly rated movies (above 9)?")
# This example only specifies a query and a filter
retriever.get_relevant_documents(
"I want to watch a movie about toys rated higher than 9"
)
# This example specifies a composite filter
retriever.get_relevant_documents(
"What's a highly rated (above or equal 9) thriller film?"
)
# This example specifies a query and composite filter
retriever.get_relevant_documents(
"What's a movie after 1990 but before 2005 that's all about dinosaurs, \
and preferably has a lot of action"
)
Filter k
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
verbose=True,
enable_limit=True,
)
# This example only specifies a relevant query
retriever.get_relevant_documents("What are two movies about dinosaurs?") |
https://python.langchain.com/docs/integrations/retrievers/self_query/opensearch_self_query/ | ## OpenSearch
> [OpenSearch](https://opensearch.org/) is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. `OpenSearch` is a distributed search and analytics engine based on `Apache Lucene`.
In this notebook, we’ll demo the `SelfQueryRetriever` with an `OpenSearch` vector store.
## Creating an OpenSearch vector store[](#creating-an-opensearch-vector-store "Direct link to Creating an OpenSearch vector store")
First, we’ll want to create an `OpenSearch` vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
**Note:** The self-query retriever requires you to have `lark` installed (`pip install lark`). We also need the `opensearch-py` package.
```
%pip install --upgrade --quiet lark opensearch-py
```
```
import getpass
import os

from langchain_community.vectorstores import OpenSearchVectorSearch
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")

embeddings = OpenAIEmbeddings()
```
```
docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "year": 1979,
            "rating": 9.9,
            "director": "Andrei Tarkovsky",
            "genre": "science fiction",
        },
    ),
]

vectorstore = OpenSearchVectorSearch.from_documents(
    docs,
    embeddings,
    index_name="opensearch-self-query-demo",
    opensearch_url="http://localhost:9200",
)
```
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```
## Testing it out[](#testing-it-out "Direct link to Testing it out")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
```
query='dinosaur' filter=None limit=None
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]
```
```
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
```
```
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None
```
```
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]
```
```
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
```
```
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
```
```
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]
```
```
# This example specifies a composite filter
retriever.get_relevant_documents(
    "What's a highly rated (above 8.5) science fiction film?"
)
```
```
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.CONTAIN: 'contain'>, attribute='genre', value='science fiction')]) limit=None
```
```
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]
```
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self query retriever to specify `k`: the number of documents to fetch.
We can do this by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
```
```
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
```
```
query='dinosaur' filter=None limit=2
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
```
## Complex queries in Action![](#complex-queries-in-action "Direct link to Complex queries in Action!")
We’ve tried out some simple queries, but what about more complex ones? Let’s try out a few more complex queries that utilize the full power of OpenSearch.
```
retriever.get_relevant_documents( "what animated or comedy movies have been released in the last 30 years about animated toys?")
```
```
query='animated toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Operation(operator=<Operator.OR: 'or'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated'), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='comedy')]), Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='year', value=1990)]) limit=None
```
```
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
```
```
vectorstore.client.indices.delete(index="opensearch-self-query-demo")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:25.567Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/opensearch_self_query/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/opensearch_self_query/",
"description": "OpenSearch is a scalable, flexible, and",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3590",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"opensearch_self_query\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:25 GMT",
"etag": "W/\"3646d3f132e33335f08e17de29c9aa3d\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::k2nqv-1713753745035-59e447b698c8"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/self_query/opensearch_self_query/",
"property": "og:url"
},
{
"content": "OpenSearch | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "OpenSearch is a scalable, flexible, and",
"property": "og:description"
}
],
"title": "OpenSearch | 🦜️🔗 LangChain"
} | OpenSearch
OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene.
In this notebook, we’ll demo the SelfQueryRetriever with an OpenSearch vector store.
Creating an OpenSearch vector store
First, we’ll want to create an OpenSearch vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
Note: The self-query retriever requires you to have lark installed (pip install lark). We also need the opensearch-py package.
%pip install --upgrade --quiet lark opensearch-py
import getpass
import os
from langchain_community.vectorstores import OpenSearchVectorSearch
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
embeddings = OpenAIEmbeddings()
docs = [
Document(
page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
),
Document(
page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
),
Document(
page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
),
Document(
page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
),
Document(
page_content="Toys come alive and have a blast doing so",
metadata={"year": 1995, "genre": "animated"},
),
Document(
page_content="Three men walk into the Zone, three men walk out of the Zone",
metadata={
"year": 1979,
"rating": 9.9,
"director": "Andrei Tarkovsky",
"genre": "science fiction",
},
),
]
vectorstore = OpenSearchVectorSearch.from_documents(
docs,
embeddings,
index_name="opensearch-self-query-demo",
opensearch_url="http://localhost:9200",
)
Creating our self-querying retriever
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI
metadata_field_info = [
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating", description="A 1-10 rating for the movie", type="float"
),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
Testing it out
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
query='dinosaur' filter=None limit=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}),
Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]
# This example specifies a composite filter
retriever.get_relevant_documents(
"What's a highly rated (above 8.5) science fiction film?"
)
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.CONTAIN: 'contain'>, attribute='genre', value='science fiction')]) limit=None
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]
Filter k
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True,
)
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
query='dinosaur' filter=None limit=2
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
Complex queries in Action!
We’ve tried out some simple queries, but what about more complex ones? Let’s try out a few more complex queries that utilize the full power of OpenSearch.
retriever.get_relevant_documents(
"what animated or comedy movies have been released in the last 30 years about animated toys?"
)
query='animated toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Operation(operator=<Operator.OR: 'or'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated'), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='comedy')]), Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='year', value=1990)]) limit=None
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
vectorstore.client.indices.delete(index="opensearch-self-query-demo") |
https://python.langchain.com/docs/integrations/providers/yeagerai/ | ## Yeager.ai
This page covers how to use [Yeager.ai](https://yeager.ai/) to generate LangChain tools and agents.
## What is Yeager.ai?[](#what-is-yeagerai "Direct link to What is Yeager.ai?")
Yeager.ai is an ecosystem designed to simplify the process of creating AI agents and tools.
It features yAgents, a No-code LangChain Agent Builder, which enables users to build, test, and deploy AI solutions with ease. Leveraging the LangChain framework, yAgents allows seamless integration with various language models and resources, making it suitable for developers, researchers, and AI enthusiasts across diverse applications.
## yAgents[](#yagents "Direct link to yAgents")
yAgents is a low-code generative agent designed to help you build, prototype, and deploy LangChain tools with ease.
### How to use?[](#how-to-use "Direct link to How to use?")
```
pip install yeagerai-agent
yeagerai-agent
```
Go to [http://127.0.0.1:7860](http://127.0.0.1:7860/)
This will install the necessary dependencies and set up yAgents on your system. After the first run, yAgents will create a .env file where you can input your OpenAI API key. You can do the same directly from the Gradio interface under the tab "Settings".
`OPENAI_API_KEY=<your_openai_api_key_here>`
We recommend using GPT-4. However, the tool can also work with GPT-3 if the problem is broken down sufficiently.
### Creating and Executing Tools with yAgents[](#creating-and-executing-tools-with-yagents "Direct link to Creating and Executing Tools with yAgents")
yAgents makes it easy to create and execute AI-powered tools. Here's a brief overview of the process:
1. Create a tool: To create a tool, provide a natural language prompt to yAgents. The prompt should clearly describe the tool's purpose and functionality. For example: `create a tool that returns the n-th prime number`
2. Load the tool into the toolkit: To load a tool into yAgents, simply provide a command to yAgents that says so. For example: `load the tool that you just created into your toolkit`
3. Execute the tool: To run a tool or agent, simply provide a command to yAgents that includes the name of the tool and any required parameters. For example: `generate the 50th prime number`
You can see a video of how it works [here](https://www.youtube.com/watch?v=KA5hCM3RaWE).
As you become more familiar with yAgents, you can create more advanced tools and agents to automate your work and enhance your productivity.
For more information, see [yAgents' Github](https://github.com/yeagerai/yeagerai-agent) or our [docs](https://yeagerai.gitbook.io/docs/general/welcome-to-yeager.ai) | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:25.927Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/yeagerai/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/yeagerai/",
"description": "This page covers how to use Yeager.ai to generate LangChain tools and agents.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3596",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"yeagerai\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:25 GMT",
"etag": "W/\"19e13248677c53ff092075f63cff7f32\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::8tt4g-1713753745574-19b4bc9ca581"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/yeagerai/",
"property": "og:url"
},
{
"content": "Yeager.ai | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This page covers how to use Yeager.ai to generate LangChain tools and agents.",
"property": "og:description"
}
],
"title": "Yeager.ai | 🦜️🔗 LangChain"
} | Yeager.ai
This page covers how to use Yeager.ai to generate LangChain tools and agents.
What is Yeager.ai?
Yeager.ai is an ecosystem designed to simplify the process of creating AI agents and tools.
It features yAgents, a No-code LangChain Agent Builder, which enables users to build, test, and deploy AI solutions with ease. Leveraging the LangChain framework, yAgents allows seamless integration with various language models and resources, making it suitable for developers, researchers, and AI enthusiasts across diverse applications.
yAgents
yAgents is a low-code generative agent designed to help you build, prototype, and deploy LangChain tools with ease.
How to use?
pip install yeagerai-agent
yeagerai-agent
Go to http://127.0.0.1:7860
This will install the necessary dependencies and set up yAgents on your system. After the first run, yAgents will create a .env file where you can input your OpenAI API key. You can do the same directly from the Gradio interface under the tab "Settings".
OPENAI_API_KEY=<your_openai_api_key_here>
We recommend using GPT-4. However, the tool can also work with GPT-3 if the problem is broken down sufficiently.
Creating and Executing Tools with yAgents
yAgents makes it easy to create and execute AI-powered tools. Here's a brief overview of the process:
Create a tool: To create a tool, provide a natural language prompt to yAgents. The prompt should clearly describe the tool's purpose and functionality. For example: create a tool that returns the n-th prime number
Load the tool into the toolkit: To load a tool into yAgents, simply provide a command to yAgents that says so. For example: load the tool that you just created into your toolkit
Execute the tool: To run a tool or agent, simply provide a command to yAgents that includes the name of the tool and any required parameters. For example: generate the 50th prime number
You can see a video of how it works here.
As you become more familiar with yAgents, you can create more advanced tools and agents to automate your work and enhance your productivity.
For more information, see yAgents' Github or our docs |
https://python.langchain.com/docs/integrations/retrievers/self_query/pgvector_self_query/ | ## PGVector (Postgres)
> [PGVector](https://github.com/pgvector/pgvector) is a vector similarity search package for the `Postgres` database.
In this notebook, we’ll demo the `SelfQueryRetriever` wrapped around a `PGVector` vector store.
## Creating a PGVector vector store[](#creating-a-pgvector-vector-store "Direct link to Creating a PGVector vector store")
First we’ll want to create a PGVector vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
**Note:** The self-query retriever requires you to have `lark` installed (`pip install lark`). We also need the `pgvector` and `psycopg2-binary` packages.
```
%pip install --upgrade --quiet lark pgvector psycopg2-binary
```
We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key.
```
import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
```
```
from langchain_community.vectorstores import PGVector
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

collection = "Name of your collection"
embeddings = OpenAIEmbeddings()
```
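One detail the snippet above leaves implicit is how `PGVector` reaches your Postgres instance. As a hedged sketch (the parameter and environment-variable names may differ between versions, so verify against your installed `langchain_community`), you can either export a connection string as an environment variable or pass it explicitly:

```
import os

# Option 1: let PGVector read the connection string from the environment.
os.environ["PGVECTOR_CONNECTION_STRING"] = (
    "postgresql+psycopg2://user:password@localhost:5432/postgres"
)

# Option 2: pass it explicitly when building the store later on, e.g.
# PGVector.from_documents(docs, embeddings, collection_name=collection,
#                         connection_string=os.environ["PGVECTOR_CONNECTION_STRING"])
```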
```
docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "year": 1979,
            "director": "Andrei Tarkovsky",
            "genre": "science fiction",
            "rating": 9.9,
        },
    ),
]

vectorstore = PGVector.from_documents(
    docs,
    embeddings,
    collection_name=collection,
)
```
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```
## Testing it out[](#testing-it-out "Direct link to Testing it out")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
```
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
```
```
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
```
```
# This example specifies a composite filter
retriever.get_relevant_documents(
    "What's a highly rated (above 8.5) science fiction film?"
)
```
```
# This example specifies a query and composite filter
retriever.get_relevant_documents(
    "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
```
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self query retriever to specify `k`: the number of documents to fetch.
We can do this by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
```
```
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:26.219Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/pgvector_self_query/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/pgvector_self_query/",
"description": "PGVector is a vector",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"pgvector_self_query\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:26 GMT",
"etag": "W/\"9cbbc395cdee129766848918216a64cf\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::fgt7r-1713753746011-971a111872d8"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/self_query/pgvector_self_query/",
"property": "og:url"
},
{
"content": "PGVector (Postgres) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "PGVector is a vector",
"property": "og:description"
}
],
"title": "PGVector (Postgres) | 🦜️🔗 LangChain"
} | PGVector (Postgres)
PGVector is a vector similarity search package for Postgres data base.
In the notebook, we’ll demo the SelfQueryRetriever wrapped around a PGVector vector store.
Creating a PGVector vector store
First we’ll want to create a PGVector vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
Note: The self-query retriever requires you to have lark installed (pip install lark). We also need the `pgvector` and `psycopg2-binary` packages.
%pip install --upgrade --quiet lark pgvector psycopg2-binary
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import getpass
import os
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
from langchain_community.vectorstores import PGVector
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
collection = "Name of your collection"
embeddings = OpenAIEmbeddings()
docs = [
Document(
page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
),
Document(
page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
),
Document(
page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
),
Document(
page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
),
Document(
page_content="Toys come alive and have a blast doing so",
metadata={"year": 1995, "genre": "animated"},
),
Document(
page_content="Three men walk into the Zone, three men walk out of the Zone",
metadata={
"year": 1979,
"director": "Andrei Tarkovsky",
"genre": "science fiction",
"rating": 9.9,
},
),
]
vectorstore = PGVector.from_documents(
docs,
embeddings,
collection_name=collection,
)
Creating our self-querying retriever
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI
metadata_field_info = [
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating", description="A 1-10 rating for the movie", type="float"
),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
Testing it out
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
# This example specifies a composite filter
retriever.get_relevant_documents(
"What's a highly rated (above 8.5) science fiction film?"
)
# This example specifies a query and composite filter
retriever.get_relevant_documents(
"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
Filter k
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True,
)
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs") |
https://python.langchain.com/docs/integrations/providers/youtube/ | ```
from langchain_community.document_loaders import YoutubeLoader
from langchain_community.document_loaders import GoogleApiYoutubeLoader
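# A minimal usage sketch (assumptions: the URL below is a placeholder video, and
# `add_video_info=True` needs the optional `pytube` dependency installed).
loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    add_video_info=True,  # attach title, author, etc. to the document metadata
)
docs = loader.load()  # a list of Documents containing the video transcript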
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:26.648Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/youtube/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/youtube/",
"description": "YouTube is an online video sharing and social media platform by Google.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4643",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"youtube\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:26 GMT",
"etag": "W/\"2b4cb76dda205da3b1fc9e6c63050d0b\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::bmr9d-1713753746537-5af6df4d6ea8"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/youtube/",
"property": "og:url"
},
{
"content": "YouTube | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "YouTube is an online video sharing and social media platform by Google.",
"property": "og:description"
}
],
"title": "YouTube | 🦜️🔗 LangChain"
} | from langchain_community.document_loaders import YoutubeLoader
from langchain_community.document_loaders import GoogleApiYoutubeLoader |
https://python.langchain.com/docs/integrations/providers/zep/ | ## Zep
> [Zep](http://www.getzep.com/) is an open source platform for productionizing LLM apps. Go from a prototype built in LangChain or LlamaIndex, or a custom app, to production in minutes without rewriting code.
> Key Features:
>
> * **Fast!** Zep operates independently of your chat loop, ensuring a snappy user experience.
> * **Chat History Memory, Archival, and Enrichment**: populate your prompts with relevant chat history, summaries, named entities, intent data, and more.
> * **Vector Search over Chat History and Documents** Automatic embedding of documents, chat histories, and summaries. Use Zep's similarity or native MMR Re-ranked search to find the most relevant results.
> * **Manage Users and their Chat Sessions** Users and their Chat Sessions are first-class citizens in Zep, allowing you to manage user interactions with your bots or agents easily.
> * **Records Retention and Privacy Compliance** Comply with corporate and regulatory mandates for records retention while ensuring compliance with privacy regulations such as CCPA and GDPR. Fulfill _Right To Be Forgotten_ requests with a single API call.
> Zep project: [https://github.com/getzep/zep](https://github.com/getzep/zep)
>
> Docs: [https://docs.getzep.com/](https://docs.getzep.com/)
## Installation and Setup[](#installation-and-setup "Direct link to Installation and Setup")
1. Install the Zep service. See the [Zep Quick Start Guide](https://docs.getzep.com/deployment/quickstart/).
2. Install the Zep Python SDK:
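For example (assuming the SDK is published on PyPI as `zep-python`):

```
pip install zep-python
```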
## Memory[](#memory "Direct link to Memory")
Zep's [Memory API](https://docs.getzep.com/sdk/chat_history/) persists your app's chat history and metadata to a Session, enriches the memory, automatically generates summaries, and enables vector similarity search over historical chat messages and summaries.
There are two approaches to populating your prompt with chat history:
1. Retrieve the most recent N messages (and potentially a summary) from a Session and use them to construct your prompt.
2. Search over the Session's chat history for messages that are relevant and use them to construct your prompt.
Both of these approaches may be useful, with the first providing the LLM with context as to the most recent interactions with a human. The second approach enables you to look back further in the chat history and retrieve messages that are relevant to the current conversation in a token-efficient manner.
```
from langchain.memory import ZepMemory
```
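A minimal sketch of wiring this up (assumptions: a Zep server reachable at `http://localhost:8000` and a stable session id; both values below are placeholders):

```
from langchain.memory import ZepMemory

# Placeholder values -- point these at your own Zep deployment and user session.
memory = ZepMemory(
    session_id="user-123-session",
    url="http://localhost:8000",
    memory_key="chat_history",  # the prompt variable your chain expects
)
```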
See a [RAG App Example here](https://python.langchain.com/docs/integrations/memory/zep_memory/).
## Retriever[](#retriever "Direct link to Retriever")
Zep's Memory Retriever is a LangChain Retriever that enables you to retrieve messages from a Zep Session and use them to construct your prompt.
The Retriever supports searching over both individual messages and summaries of conversations. The latter is useful for providing rich, but succinct context to the LLM as to relevant past conversations.
Zep's Memory Retriever supports both similarity search and [Maximum Marginal Relevance (MMR) reranking](https://docs.getzep.com/sdk/search_query/). MMR search is useful for ensuring that the retrieved messages are diverse and not too similar to each other.
See a [usage example](https://python.langchain.com/docs/integrations/retrievers/zep_memorystore/).
```
from langchain_community.retrievers import ZepRetriever
```
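A minimal sketch (assumptions: the session id and server URL below are placeholders, and `top_k` is used here to cap the number of returned messages):

```
from langchain_community.retrievers import ZepRetriever

retriever = ZepRetriever(
    session_id="user-123-session",  # the session whose chat history you want to search
    url="http://localhost:8000",
    top_k=5,
)
docs = retriever.get_relevant_documents("What did we decide about the deployment?")
```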
## Vector store[](#vector-store "Direct link to Vector store")
Zep's [Document VectorStore API](https://docs.getzep.com/sdk/documents/) enables you to store and retrieve documents using vector similarity search. Zep doesn't require you to understand distance functions, types of embeddings, or indexing best practices. You just pass in your chunked documents, and Zep handles the rest.
Zep supports both similarity search and [Maximum Marginal Relevance (MMR) reranking](https://docs.getzep.com/sdk/search_query/). MMR search is useful for ensuring that the retrieved documents are diverse and not too similar to each other.
```
from langchain_community.vectorstores import ZepVectorStore
```
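A rough sketch of querying an existing collection (the collection name and server URL are placeholders, and the parameter names here are assumptions rather than a definitive API reference; see the linked usage example for the exact interface):

```
from langchain_community.vectorstores import ZepVectorStore

vectorstore = ZepVectorStore(
    collection_name="my_docs",        # an existing Zep document collection
    api_url="http://localhost:8000",  # your Zep server
)
results = vectorstore.similarity_search("What does Zep do with documents?", k=2)
```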
See a [usage example](https://python.langchain.com/docs/integrations/vectorstores/zep/). | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:26.905Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/zep/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/zep/",
"description": "Zep is an open source platform for productionizing LLM apps. Go from a prototype",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4643",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"zep\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:26 GMT",
"etag": "W/\"6f286164cf2012587ee857cfbe9d883e\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::wfnv6-1713753746535-782dd14fbab9"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/zep/",
"property": "og:url"
},
{
"content": "Zep | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Zep is an open source platform for productionizing LLM apps. Go from a prototype",
"property": "og:description"
}
],
"title": "Zep | 🦜️🔗 LangChain"
} | Zep
Zep is an open source platform for productionizing LLM apps. Go from a prototype built in LangChain or LlamaIndex, or a custom app, to production in minutes without rewriting code.
Key Features:
Fast! Zep operates independently of your chat loop, ensuring a snappy user experience.
Chat History Memory, Archival, and Enrichment: populate your prompts with relevant chat history, summaries, named entities, intent data, and more.
Vector Search over Chat History and Documents Automatic embedding of documents, chat histories, and summaries. Use Zep's similarity or native MMR Re-ranked search to find the most relevant results.
Manage Users and their Chat Sessions Users and their Chat Sessions are first-class citizens in Zep, allowing you to manage user interactions with your bots or agents easily.
Records Retention and Privacy Compliance Comply with corporate and regulatory mandates for records retention while ensuring compliance with privacy regulations such as CCPA and GDPR. Fulfill Right To Be Forgotten requests with a single API call.
Zep project: https://github.com/getzep/zep
Docs: https://docs.getzep.com/
Installation and Setup
Install the Zep service. See the Zep Quick Start Guide.
Install the Zep Python SDK:
Memory
Zep's Memory API persists your app's chat history and metadata to a Session, enriches the memory, automatically generates summaries, and enables vector similarity search over historical chat messages and summaries.
There are two approaches to populating your prompt with chat history:
Retrieve the most recent N messages (and potentially a summary) from a Session and use them to construct your prompt.
Search over the Session's chat history for messages that are relevant and use them to construct your prompt.
Both of these approaches may be useful, with the first providing the LLM with context as to the most recent interactions with a human. The second approach enables you to look back further in the chat history and retrieve messages that are relevant to the current conversation in a token-efficient manner.
from langchain.memory import ZepMemory
See a RAG App Example here.
Retriever
Zep's Memory Retriever is a LangChain Retriever that enables you to retrieve messages from a Zep Session and use them to construct your prompt.
The Retriever supports searching over both individual messages and summaries of conversations. The latter is useful for providing rich, but succinct context to the LLM as to relevant past conversations.
Zep's Memory Retriever supports both similarity search and Maximum Marginal Relevance (MMR) reranking. MMR search is useful for ensuring that the retrieved messages are diverse and not too similar to each other.
See a usage example.
from langchain_community.retrievers import ZepRetriever
Vector store
Zep's Document VectorStore API enables you to store and retrieve documents using vector similarity search. Zep doesn't require you to understand distance functions, types of embeddings, or indexing best practices. You just pass in your chunked documents, and Zep handles the rest.
Zep supports both similarity search and Maximum Marginal Relevance (MMR) reranking. MMR search is useful for ensuring that the retrieved documents are diverse and not too similar to each other.
from langchain_community.vectorstores import ZepVectorStore
See a usage example. |
https://python.langchain.com/docs/integrations/retrievers/self_query/qdrant_self_query/ | ## Qdrant
> [Qdrant](https://qdrant.tech/documentation/) (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. `Qdrant` is tailored to extended filtering support.
In the notebook, we’ll demo the `SelfQueryRetriever` wrapped around a `Qdrant` vector store.
## Creating a Qdrant vector store[](#creating-a-qdrant-vector-store "Direct link to Creating a Qdrant vector store")
First we’ll want to create a Qdrant vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
**Note:** The self-query retriever requires you to have `lark` installed (`pip install lark`). We also need the `qdrant-client` package.
```
%pip install --upgrade --quiet lark qdrant-client
```
We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key.
```
# import os
# import getpass
# os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
```
```
from langchain_community.vectorstores import Qdrant
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
```
```
docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "year": 1979,
            "rating": 9.9,
            "director": "Andrei Tarkovsky",
            "genre": "science fiction",
        },
    ),
]
vectorstore = Qdrant.from_documents(
    docs,
    embeddings,
    location=":memory:",  # Local mode with in-memory storage only
    collection_name="my_documents",
)
```
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```
## Testing it out[](#testing-it-out "Direct link to Testing it out")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
```
query='dinosaur' filter=None limit=None
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]
```
```
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
```
```
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None
```
```
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]
```
```
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
```
```
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
```
```
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]
```
```
# This example specifies a composite filter
retriever.get_relevant_documents(
    "What's a highly rated (above 8.5) science fiction film?"
)
```
```
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction')]) limit=None
```
```
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]
```
```
# This example specifies a query and composite filter
retriever.get_relevant_documents(
    "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
```
```
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) limit=None
```
```
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
```
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self query retriever to specify `k`: the number of documents to fetch.
We can do this by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
```
```
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
```
```
query='dinosaur' filter=None limit=2
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:27.398Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/qdrant_self_query/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/qdrant_self_query/",
"description": "Qdrant (read: quadrant) is a",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3592",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"qdrant_self_query\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:26 GMT",
"etag": "W/\"19f442cd5f0bec85d3af98bc4d7d8e0c\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::8fs27-1713753746741-9543230a8483"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/self_query/qdrant_self_query/",
"property": "og:url"
},
{
"content": "Qdrant | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Qdrant (read: quadrant) is a",
"property": "og:description"
}
],
"title": "Qdrant | 🦜️🔗 LangChain"
} | Qdrant
Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support.
In the notebook, we’ll demo the SelfQueryRetriever wrapped around a Qdrant vector store.
Creating a Qdrant vector store
First we’ll want to create a Qdrant vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
Note: The self-query retriever requires you to have lark installed (pip install lark). We also need the qdrant-client package.
%pip install --upgrade --quiet lark qdrant-client
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
# import os
# import getpass
# os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain_community.vectorstores import Qdrant
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
docs = [
Document(
page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
),
Document(
page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
),
Document(
page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
),
Document(
page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
),
Document(
page_content="Toys come alive and have a blast doing so",
metadata={"year": 1995, "genre": "animated"},
),
Document(
page_content="Three men walk into the Zone, three men walk out of the Zone",
metadata={
"year": 1979,
"rating": 9.9,
"director": "Andrei Tarkovsky",
"genre": "science fiction",
},
),
]
vectorstore = Qdrant.from_documents(
docs,
embeddings,
location=":memory:", # Local mode with in-memory storage only
collection_name="my_documents",
)
Creating our self-querying retriever
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI
metadata_field_info = [
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating", description="A 1-10 rating for the movie", type="float"
),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
Testing it out
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
query='dinosaur' filter=None limit=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})]
# This example specifies a composite filter
retriever.get_relevant_documents(
"What's a highly rated (above 8.5) science fiction film?"
)
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction')]) limit=None
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})]
# This example specifies a query and composite filter
retriever.get_relevant_documents(
"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) limit=None
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
Filter k
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True,
)
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
query='dinosaur' filter=None limit=2
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})] |
https://python.langchain.com/docs/integrations/retrievers/self_query/pinecone/ | ## Pinecone
> [Pinecone](https://docs.pinecone.io/docs/overview) is a vector database with broad functionality.
In the walkthrough, we’ll demo the `SelfQueryRetriever` with a `Pinecone` vector store.
## Creating a Pinecone index[](#creating-a-pinecone-index "Direct link to Creating a Pinecone index")
First we’ll want to create a `Pinecone` vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
To use Pinecone, you have to have the `pinecone` package installed and you must have an API key and an environment. Here are the [installation instructions](https://docs.pinecone.io/docs/quickstart).

**Note:** The self-query retriever requires you to have the `lark` package installed.
```
%pip install --upgrade --quiet lark
```
```
%pip install --upgrade --quiet pinecone-client
```
```
import os
import pinecone

pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"], environment=os.environ["PINECONE_ENV"]
)
```
```
/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pinecone/index.py:4: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)
  from tqdm.autonotebook import tqdm
```
```
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore

embeddings = OpenAIEmbeddings()
# create new index
pinecone.create_index("langchain-self-retriever-demo", dimension=1536)
```
```
docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": ["action", "science fiction"]},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "year": 1979,
            "director": "Andrei Tarkovsky",
            "genre": ["science fiction", "thriller"],
            "rating": 9.9,
        },
    ),
]
vectorstore = PineconeVectorStore.from_documents(
    docs, embeddings, index_name="langchain-self-retriever-demo"
)
```
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```
## Testing it out[](#testing-it-out "Direct link to Testing it out")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
```
query='dinosaur' filter=None
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': ['action', 'science fiction'], 'rating': 7.7, 'year': 1993.0}), Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0}), Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'director': 'Christopher Nolan', 'rating': 8.2, 'year': 2010.0})]
```
```
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
```
```
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)
```
```
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': ['science fiction', 'thriller'], 'rating': 9.9, 'year': 1979.0})]
```
```
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
```
```
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig')
```
```
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'director': 'Greta Gerwig', 'rating': 8.3, 'year': 2019.0})]
```
```
# This example specifies a composite filter
retriever.get_relevant_documents(
    "What's a highly rated (above 8.5) science fiction film?"
)
```
```
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)])
```
```
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': ['science fiction', 'thriller'], 'rating': 9.9, 'year': 1979.0})]
```
```
# This example specifies a query and composite filter
retriever.get_relevant_documents(
    "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
```
```
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990.0), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005.0), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')])
```
```
[Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0})]
```
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self query retriever to specify `k`: the number of documents to fetch.
We can do this by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
```
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are two movies about dinosaurs")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:27.005Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/pinecone/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/pinecone/",
"description": "Pinecone is a vector",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3592",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"pinecone\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:26 GMT",
"etag": "W/\"d4da9df19fa84fcd89aadd52683fb75f\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::xqkjm-1713753746748-c41216d75337"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/self_query/pinecone/",
"property": "og:url"
},
{
"content": "Pinecone | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Pinecone is a vector",
"property": "og:description"
}
],
"title": "Pinecone | 🦜️🔗 LangChain"
} | Pinecone
Pinecone is a vector database with broad functionality.
In the walkthrough, we’ll demo the SelfQueryRetriever with a Pinecone vector store.
Creating a Pinecone index
First we’ll want to create a Pinecone vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
To use Pinecone, you have to have the pinecone package installed and you must have an API key and an environment. Here are the installation instructions.
Note: The self-query retriever requires you to have the lark package installed.
%pip install --upgrade --quiet lark
%pip install --upgrade --quiet pinecone-client
import os
import pinecone
pinecone.init(
api_key=os.environ["PINECONE_API_KEY"], environment=os.environ["PINECONE_ENV"]
)
/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pinecone/index.py:4: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)
from tqdm.autonotebook import tqdm
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore
embeddings = OpenAIEmbeddings()
# create new index
pinecone.create_index("langchain-self-retriever-demo", dimension=1536)
docs = [
Document(
page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata={"year": 1993, "rating": 7.7, "genre": ["action", "science fiction"]},
),
Document(
page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
),
Document(
page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
),
Document(
page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
),
Document(
page_content="Toys come alive and have a blast doing so",
metadata={"year": 1995, "genre": "animated"},
),
Document(
page_content="Three men walk into the Zone, three men walk out of the Zone",
metadata={
"year": 1979,
"director": "Andrei Tarkovsky",
"genre": ["science fiction", "thriller"],
"rating": 9.9,
},
),
]
vectorstore = PineconeVectorStore.from_documents(
docs, embeddings, index_name="langchain-self-retriever-demo"
)
Creating our self-querying retriever
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI
metadata_field_info = [
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating", description="A 1-10 rating for the movie", type="float"
),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
Testing it out
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
query='dinosaur' filter=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': ['action', 'science fiction'], 'rating': 7.7, 'year': 1993.0}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0}),
Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'director': 'Christopher Nolan', 'rating': 8.2, 'year': 2010.0})]
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': ['science fiction', 'thriller'], 'rating': 9.9, 'year': 1979.0})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig')
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'director': 'Greta Gerwig', 'rating': 8.3, 'year': 2019.0})]
# This example specifies a composite filter
retriever.get_relevant_documents(
"What's a highly rated (above 8.5) science fiction film?"
)
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)])
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': ['science fiction', 'thriller'], 'rating': 9.9, 'year': 1979.0})]
# This example specifies a query and composite filter
retriever.get_relevant_documents(
"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990.0), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005.0), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')])
[Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0})]
Filter k
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True,
)
# This example only specifies a relevant query
retriever.get_relevant_documents("What are two movies about dinosaurs") |
https://python.langchain.com/docs/integrations/providers/zilliz/ | A wrapper around Zilliz indexes allows you to use it as a vectorstore, whether for semantic search or example selection.
```
from langchain_community.vectorstores import Milvus
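# A minimal sketch of connecting the Milvus wrapper to a Zilliz Cloud cluster
# (assumptions: the URI and token below are placeholders taken from your Zilliz Cloud
# console, and OpenAI embeddings are used purely as an example).
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

docs = [Document(page_content="Zilliz Cloud hosts managed Milvus indexes.")]
vectorstore = Milvus.from_documents(
    docs,
    OpenAIEmbeddings(),
    collection_name="LangChainCollection",
    connection_args={
        "uri": "https://<your-cluster>.zillizcloud.com",
        "token": "<your-zilliz-api-key>",
    },
)
results = vectorstore.similarity_search("managed Milvus", k=1)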
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:28.036Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/providers/zilliz/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/providers/zilliz/",
"description": "Zilliz Cloud is a fully managed service on cloud for LF AI Milvus®,",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"zilliz\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:27 GMT",
"etag": "W/\"edf6833ae398311cde9d5b4899ad7daf\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::nvx8d-1713753747763-8da7dc4a5b36"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/providers/zilliz/",
"property": "og:url"
},
{
"content": "Zilliz | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Zilliz Cloud is a fully managed service on cloud for LF AI Milvus®,",
"property": "og:description"
}
],
"title": "Zilliz | 🦜️🔗 LangChain"
A wrapper around Zilliz indexes allows you to use them as a vectorstore, whether for semantic search or example selection.
from langchain_community.vectorstores import Milvus |
https://python.langchain.com/docs/integrations/retrievers/self_query/redis_self_query/ | ## Redis
> [Redis](https://redis.com/) is an open-source key-value store that can be used as a cache, message broker, database, vector database and more.
In the notebook, we’ll demo the `SelfQueryRetriever` wrapped around a `Redis` vector store.
## Creating a Redis vector store[](#creating-a-redis-vector-store "Direct link to Creating a Redis vector store")
First we’ll want to create a Redis vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
**Note:** The self-query retriever requires you to have `lark` installed (`pip install lark`) along with integration-specific requirements.
```
%pip install --upgrade --quiet redis redisvl langchain-openai tiktoken lark
```
We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key.
```
import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
```
```
from langchain_community.vectorstores import Redis
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
```
```
docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={
            "year": 1993,
            "rating": 7.7,
            "director": "Steven Spielberg",
            "genre": "science fiction",
        },
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={
            "year": 2010,
            "director": "Christopher Nolan",
            "genre": "science fiction",
            "rating": 8.2,
        },
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={
            "year": 2006,
            "director": "Satoshi Kon",
            "genre": "science fiction",
            "rating": 8.6,
        },
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={
            "year": 2019,
            "director": "Greta Gerwig",
            "genre": "drama",
            "rating": 8.3,
        },
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={
            "year": 1995,
            "director": "John Lasseter",
            "genre": "animated",
            "rating": 9.1,
        },
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "year": 1979,
            "rating": 9.9,
            "director": "Andrei Tarkovsky",
            "genre": "science fiction",
        },
    ),
]
```
```
index_schema = {
    "tag": [{"name": "genre"}],
    "text": [{"name": "director"}],
    "numeric": [{"name": "year"}, {"name": "rating"}],
}

vectorstore = Redis.from_documents(
    docs,
    embeddings,
    redis_url="redis://localhost:6379",
    index_name="movie_reviews",
    index_schema=index_schema,
)
```
```
`index_schema` does not match generated metadata schema.
If you meant to manually override the schema, please ignore this message.
index_schema: {'tag': [{'name': 'genre'}], 'text': [{'name': 'director'}], 'numeric': [{'name': 'year'}, {'name': 'rating'}]}
generated_schema: {'text': [{'name': 'director'}, {'name': 'genre'}], 'numeric': [{'name': 'year'}, {'name': 'rating'}], 'tag': []}
```
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
```
```
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```
## Testing it out[](#testing-it-out "Direct link to Testing it out")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
```
/Users/bagatur/langchain/libs/langchain/langchain/chains/llm.py:278: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
  warnings.warn(
```
```
query='dinosaur' filter=None limit=None
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'id': 'doc:movie_reviews:7b5481d753bc4135851b66fa61def7fb', 'director': 'Steven Spielberg', 'genre': 'science fiction', 'year': '1993', 'rating': '7.7'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 'doc:movie_reviews:9e4e84daa0374941a6aa4274e9bbb607', 'director': 'John Lasseter', 'genre': 'animated', 'year': '1995', 'rating': '9.1'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'id': 'doc:movie_reviews:2cc66f38bfbd438eb3a045d90a1a4088', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'year': '1979', 'rating': '9.9'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'id': 'doc:movie_reviews:edf567b1d5334e02b2a4c692d853c80c', 'director': 'Satoshi Kon', 'genre': 'science fiction', 'year': '2006', 'rating': '8.6'})]
```
```
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.4")
```
```
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.4) limit=None
```
```
[Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 'doc:movie_reviews:9e4e84daa0374941a6aa4274e9bbb607', 'director': 'John Lasseter', 'genre': 'animated', 'year': '1995', 'rating': '9.1'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'id': 'doc:movie_reviews:2cc66f38bfbd438eb3a045d90a1a4088', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'year': '1979', 'rating': '9.9'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'id': 'doc:movie_reviews:edf567b1d5334e02b2a4c692d853c80c', 'director': 'Satoshi Kon', 'genre': 'science fiction', 'year': '2006', 'rating': '8.6'})]
```
```
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
```
```
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
```
```
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'id': 'doc:movie_reviews:bb899807b93c442083fd45e75a4779d5', 'director': 'Greta Gerwig', 'genre': 'drama', 'year': '2019', 'rating': '8.3'})]
```
```
# This example specifies a composite filter
retriever.get_relevant_documents(
    "What's a highly rated (above 8.5) science fiction film?"
)
```
```
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.CONTAIN: 'contain'>, attribute='genre', value='science fiction')]) limit=None
```
```
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'id': 'doc:movie_reviews:2cc66f38bfbd438eb3a045d90a1a4088', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'year': '1979', 'rating': '9.9'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'id': 'doc:movie_reviews:edf567b1d5334e02b2a4c692d853c80c', 'director': 'Satoshi Kon', 'genre': 'science fiction', 'year': '2006', 'rating': '8.6'})]
```
```
# This example specifies a query and composite filter
retriever.get_relevant_documents(
    "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
```
```
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.CONTAIN: 'contain'>, attribute='genre', value='animated')]) limit=None
```
```
[Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 'doc:movie_reviews:9e4e84daa0374941a6aa4274e9bbb607', 'director': 'John Lasseter', 'genre': 'animated', 'year': '1995', 'rating': '9.1'})]
```
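The `query=...`, `filter=...` lines in the outputs above are the structured query that the LLM produces before it is translated into a Redis filter expression. If you want to inspect that intermediate step on its own, you can assemble the underlying query constructor directly. The following is a minimal, illustrative sketch that reuses `llm`, `metadata_field_info`, and `document_content_description` from above; the exact import paths can vary between LangChain versions, so treat it as a sketch rather than the canonical API.

```
# Illustrative sketch: build the query constructor on its own to see the
# structured query (query / filter / limit) before it reaches the vector store.
# Assumes llm, metadata_field_info and document_content_description from above.
from langchain.chains.query_constructor.base import (
    StructuredQueryOutputParser,
    get_query_constructor_prompt,
)

prompt = get_query_constructor_prompt(
    document_content_description, metadata_field_info
)
output_parser = StructuredQueryOutputParser.from_components()
query_constructor = prompt | llm | output_parser

query_constructor.invoke(
    {"query": "What's a highly rated (above 8.5) science fiction film?"}
)
```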
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self-query retriever to specify `k`, the number of documents to fetch, by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
```
```
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
```
```
query='dinosaur' filter=None limit=2
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'id': 'doc:movie_reviews:7b5481d753bc4135851b66fa61def7fb', 'director': 'Steven Spielberg', 'genre': 'science fiction', 'year': '1993', 'rating': '7.7'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 'doc:movie_reviews:9e4e84daa0374941a6aa4274e9bbb607', 'director': 'John Lasseter', 'genre': 'animated', 'year': '1995', 'rating': '9.1'})]
```
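Once the retriever returns the documents you expect, it can be composed with a chat model like any other retriever. Below is a minimal, illustrative sketch of wiring it into an LCEL chain; it assumes the `retriever` defined above, and the prompt wording and model choice are placeholders rather than part of the original notebook.

```
# Minimal sketch: use the self-query retriever inside an LCEL chain.
# Assumes the retriever defined above; prompt wording and model are illustrative.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI


def format_docs(docs):
    # Join the retrieved movie summaries into one context string
    return "\n\n".join(doc.page_content for doc in docs)


prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
)

chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(temperature=0)
    | StrOutputParser()
)

chain.invoke("Recommend a highly rated science fiction film about dreams")
```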
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:28.955Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/redis_self_query/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/redis_self_query/",
"description": "Redis is an open-source key-value store that can",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4017",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"redis_self_query\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:28 GMT",
"etag": "W/\"c3f68b8735672b50c74b5533124dbbf6\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::tqd6x-1713753748840-fc8a36d44761"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/self_query/redis_self_query/",
"property": "og:url"
},
{
"content": "Redis | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Redis is an open-source key-value store that can",
"property": "og:description"
}
],
"title": "Redis | 🦜️🔗 LangChain"
} | Redis
Redis is an open-source key-value store that can be used as a cache, message broker, database, vector database and more.
In the notebook, we’ll demo the SelfQueryRetriever wrapped around a Redis vector store.
Creating a Redis vector store
First we’ll want to create a Redis vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
Note: The self-query retriever requires you to have lark installed (pip install lark) along with integration-specific requirements.
%pip install --upgrade --quiet redis redisvl langchain-openai tiktoken lark
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import getpass
import os
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
from langchain_community.vectorstores import Redis
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
docs = [
Document(
page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata={
"year": 1993,
"rating": 7.7,
"director": "Steven Spielberg",
"genre": "science fiction",
},
),
Document(
page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata={
"year": 2010,
"director": "Christopher Nolan",
"genre": "science fiction",
"rating": 8.2,
},
),
Document(
page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata={
"year": 2006,
"director": "Satoshi Kon",
"genre": "science fiction",
"rating": 8.6,
},
),
Document(
page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata={
"year": 2019,
"director": "Greta Gerwig",
"genre": "drama",
"rating": 8.3,
},
),
Document(
page_content="Toys come alive and have a blast doing so",
metadata={
"year": 1995,
"director": "John Lasseter",
"genre": "animated",
"rating": 9.1,
},
),
Document(
page_content="Three men walk into the Zone, three men walk out of the Zone",
metadata={
"year": 1979,
"rating": 9.9,
"director": "Andrei Tarkovsky",
"genre": "science fiction",
},
),
]
index_schema = {
"tag": [{"name": "genre"}],
"text": [{"name": "director"}],
"numeric": [{"name": "year"}, {"name": "rating"}],
}
vectorstore = Redis.from_documents(
docs,
embeddings,
redis_url="redis://localhost:6379",
index_name="movie_reviews",
index_schema=index_schema,
)
`index_schema` does not match generated metadata schema.
If you meant to manually override the schema, please ignore this message.
index_schema: {'tag': [{'name': 'genre'}], 'text': [{'name': 'director'}], 'numeric': [{'name': 'year'}, {'name': 'rating'}]}
generated_schema: {'text': [{'name': 'director'}, {'name': 'genre'}], 'numeric': [{'name': 'year'}, {'name': 'rating'}], 'tag': []}
Creating our self-querying retriever
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI
metadata_field_info = [
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating", description="A 1-10 rating for the movie", type="float"
),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
Testing it out
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
/Users/bagatur/langchain/libs/langchain/langchain/chains/llm.py:278: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
warnings.warn(
query='dinosaur' filter=None limit=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'id': 'doc:movie_reviews:7b5481d753bc4135851b66fa61def7fb', 'director': 'Steven Spielberg', 'genre': 'science fiction', 'year': '1993', 'rating': '7.7'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 'doc:movie_reviews:9e4e84daa0374941a6aa4274e9bbb607', 'director': 'John Lasseter', 'genre': 'animated', 'year': '1995', 'rating': '9.1'}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'id': 'doc:movie_reviews:2cc66f38bfbd438eb3a045d90a1a4088', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'year': '1979', 'rating': '9.9'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'id': 'doc:movie_reviews:edf567b1d5334e02b2a4c692d853c80c', 'director': 'Satoshi Kon', 'genre': 'science fiction', 'year': '2006', 'rating': '8.6'})]
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.4")
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.4) limit=None
[Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 'doc:movie_reviews:9e4e84daa0374941a6aa4274e9bbb607', 'director': 'John Lasseter', 'genre': 'animated', 'year': '1995', 'rating': '9.1'}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'id': 'doc:movie_reviews:2cc66f38bfbd438eb3a045d90a1a4088', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'year': '1979', 'rating': '9.9'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'id': 'doc:movie_reviews:edf567b1d5334e02b2a4c692d853c80c', 'director': 'Satoshi Kon', 'genre': 'science fiction', 'year': '2006', 'rating': '8.6'})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'id': 'doc:movie_reviews:bb899807b93c442083fd45e75a4779d5', 'director': 'Greta Gerwig', 'genre': 'drama', 'year': '2019', 'rating': '8.3'})]
# This example specifies a composite filter
retriever.get_relevant_documents(
"What's a highly rated (above 8.5) science fiction film?"
)
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.CONTAIN: 'contain'>, attribute='genre', value='science fiction')]) limit=None
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'id': 'doc:movie_reviews:2cc66f38bfbd438eb3a045d90a1a4088', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'year': '1979', 'rating': '9.9'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'id': 'doc:movie_reviews:edf567b1d5334e02b2a4c692d853c80c', 'director': 'Satoshi Kon', 'genre': 'science fiction', 'year': '2006', 'rating': '8.6'})]
# This example specifies a query and composite filter
retriever.get_relevant_documents(
"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.CONTAIN: 'contain'>, attribute='genre', value='animated')]) limit=None
[Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 'doc:movie_reviews:9e4e84daa0374941a6aa4274e9bbb607', 'director': 'John Lasseter', 'genre': 'animated', 'year': '1995', 'rating': '9.1'})]
Filter k
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True,
)
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
query='dinosaur' filter=None limit=2
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'id': 'doc:movie_reviews:7b5481d753bc4135851b66fa61def7fb', 'director': 'Steven Spielberg', 'genre': 'science fiction', 'year': '1993', 'rating': '7.7'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'id': 'doc:movie_reviews:9e4e84daa0374941a6aa4274e9bbb607', 'director': 'John Lasseter', 'genre': 'animated', 'year': '1995', 'rating': '9.1'})] |
## Retrievers

[📄️ LOTR (Merger Retriever)](https://python.langchain.com/docs/integrations/retrievers/merger_retriever/): Lord of the Retrievers (LOTR), also known as MergerRetriever, merges the results of a list of retrievers into a single list.
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:29.982Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/",
"description": null,
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3600",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"retrievers\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:29 GMT",
"etag": "W/\"6d2377457844e3c2b0a1299cbfab0669\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::2cm6b-1713753749912-f5e3bea20b0c"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/",
"property": "og:url"
},
{
"content": "Retrievers | 🦜️🔗 LangChain",
"property": "og:title"
}
],
"title": "Retrievers | 🦜️🔗 LangChain"
} | 📄️ LOTR (Merger Retriever)
Lord of the Retrievers (LOTR), also known as MergerRetriever, |
## Supabase (Postgres)
> [Supabase](https://supabase.com/docs) is an open-source `Firebase` alternative. `Supabase` is built on top of `PostgreSQL`, which offers strong `SQL` querying capabilities and enables a simple interface with already-existing tools and frameworks.
> [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL) also known as `Postgres`, is a free and open-source relational database management system (RDBMS) emphasizing extensibility and `SQL` compliance.
>
> [Supabase](https://supabase.com/docs/guides/ai) provides an open-source toolkit for developing AI applications using Postgres and pgvector. Use the Supabase client libraries to store, index, and query your vector embeddings at scale.
In the notebook, we’ll demo the `SelfQueryRetriever` wrapped around a `Supabase` vector store.
Specifically, we will:

1. Create a Supabase database
2. Enable the `pgvector` extension
3. Create a `documents` table and `match_documents` function that will be used by `SupabaseVectorStore`
4. Load sample documents into the vector store (database table)
5. Build and test a self-querying retriever
## Setup Supabase Database[](#setup-supabase-database "Direct link to Setup Supabase Database")
1. Head over to [https://database.new](https://database.new/) to provision your Supabase database.
2. In the studio, jump to the [SQL editor](https://supabase.com/dashboard/project/_/sql/new) and run the following script to enable `pgvector` and setup your database as a vector store:

```sql
-- Enable the pgvector extension to work with embedding vectors
create extension if not exists vector;

-- Create a table to store your documents
create table documents (
  id uuid primary key,
  content text, -- corresponds to Document.pageContent
  metadata jsonb, -- corresponds to Document.metadata
  embedding vector (1536) -- 1536 works for OpenAI embeddings, change if needed
);

-- Create a function to search for documents
create function match_documents (
  query_embedding vector (1536),
  filter jsonb default '{}'
) returns table (
  id uuid,
  content text,
  metadata jsonb,
  similarity float
) language plpgsql as $$
#variable_conflict use_column
begin
  return query
  select
    id,
    content,
    metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where metadata @> filter
  order by documents.embedding <=> query_embedding;
end;
$$;
```
## Creating a Supabase vector store[](#creating-a-supabase-vector-store "Direct link to Creating a Supabase vector store")
Next we’ll want to create a Supabase vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
Be sure to install the latest version of `langchain` with `openai` support:
```
%pip install --upgrade --quiet langchain langchain-openai tiktoken
```
The self-query retriever requires you to have `lark` installed:
```
%pip install --upgrade --quiet lark
```
We also need the `supabase` package:
```
%pip install --upgrade --quiet supabase
```
Since we are using `SupabaseVectorStore` and `OpenAIEmbeddings`, we have to load their API keys.
* To find your `SUPABASE_URL` and `SUPABASE_SERVICE_KEY`, head to your Supabase project’s [API settings](https://supabase.com/dashboard/project/_/settings/api).
* `SUPABASE_URL` corresponds to the Project URL
* `SUPABASE_SERVICE_KEY` corresponds to the `service_role` API key
* To get your `OPENAI_API_KEY`, navigate to [API keys](https://platform.openai.com/account/api-keys) on your OpenAI account and create a new secret key.
```
import getpass
import os

os.environ["SUPABASE_URL"] = getpass.getpass("Supabase URL:")
os.environ["SUPABASE_SERVICE_KEY"] = getpass.getpass("Supabase Service Key:")
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
```
_Optional:_ If you’re storing your Supabase and OpenAI API keys in a `.env` file, you can load them with [`dotenv`](https://github.com/theskumar/python-dotenv).
```
%pip install --upgrade --quiet python-dotenv
```
```
from dotenv import load_dotenv

load_dotenv()
```
First we’ll create a Supabase client and instantiate an OpenAI embeddings class.
```
import os

from langchain_community.vectorstores import SupabaseVectorStore
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from supabase.client import Client, create_client

supabase_url = os.environ.get("SUPABASE_URL")
supabase_key = os.environ.get("SUPABASE_SERVICE_KEY")
supabase: Client = create_client(supabase_url, supabase_key)
embeddings = OpenAIEmbeddings()
```
Next let’s create our documents.
```
docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "year": 1979,
            "director": "Andrei Tarkovsky",
            "genre": "science fiction",
            "rating": 9.9,
        },
    ),
]

vectorstore = SupabaseVectorStore.from_documents(
    docs,
    embeddings,
    client=supabase,
    table_name="documents",
    query_name="match_documents",
)
```
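Before layering the self-query retriever on top, it can be worth sanity-checking that the documents and embeddings actually landed in the `documents` table with a plain similarity search. This is a small illustrative sketch using the `vectorstore` created above:

```
# Quick sanity check (illustrative): plain similarity search against the store
vectorstore.similarity_search("A movie about dinosaurs", k=2)
```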
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"

llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```
## Testing it out[](#testing-it-out "Direct link to Testing it out")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
```
query='dinosaur' filter=None limit=None
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'genre': 'science fiction', 'rating': 7.7}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'rating': 8.6, 'director': 'Satoshi Kon'})]
```
```
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
```
```
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None
```
```
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'rating': 8.6, 'director': 'Satoshi Kon'})]
```
```
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women?")
```
```
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
```
```
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'rating': 8.3, 'director': 'Greta Gerwig'})]
```
```
# This example specifies a composite filter
retriever.get_relevant_documents(
    "What's a highly rated (above 8.5) science fiction film?"
)
```
```
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction')]) limit=None
```
```
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'})]
```
```
# This example specifies a query and composite filter
retriever.get_relevant_documents(
    "What's a movie after 1990 but before (or on) 2005 that's all about toys, and preferably is animated"
)
```
```
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LTE: 'lte'>, attribute='year', value=2005), Comparison(comparator=<Comparator.LIKE: 'like'>, attribute='genre', value='animated')]) limit=None
```
```
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
```
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self-query retriever to specify `k`, the number of documents to fetch, by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
```
```
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
```
```
query='dinosaur' filter=None limit=2
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'genre': 'science fiction', 'rating': 7.7}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
```
## Tencent Cloud VectorDB
> [Tencent Cloud VectorDB](https://cloud.tencent.com/document/product/1709) is a fully managed, self-developed, enterprise-level distributed database service designed for storing, retrieving, and analyzing multi-dimensional vector data.
In the walkthrough, we’ll demo the `SelfQueryRetriever` with a Tencent Cloud VectorDB.
## Create a TencentVectorDB instance[](#create-a-tencentvectordb-instance "Direct link to create a TencentVectorDB instance")
First we’ll want to create a TencentVectorDB and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
**Note:** The self-query retriever requires you to have `lark` installed (`pip install lark`) along with integration-specific requirements.
```
%pip install --upgrade --quiet tcvectordb langchain-openai tiktoken lark
```
```
[notice] A new release of pip is available: 23.2.1 -> 24.0
[notice] To update, run: pip install --upgrade pip
Note: you may need to restart the kernel to use updated packages.
```
We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key.
```
import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
```
Create a TencentVectorDB instance and seed it with some data:
```
from langchain_community.vectorstores.tencentvectordb import (
    ConnectionParams,
    MetaField,
    TencentVectorDB,
)
from langchain_core.documents import Document
from tcvectordb.model.enum import FieldType

meta_fields = [
    MetaField(name="year", data_type="uint64", index=True),
    MetaField(name="rating", data_type="string", index=False),
    MetaField(name="genre", data_type=FieldType.String, index=True),
    MetaField(name="director", data_type=FieldType.String, index=True),
]

docs = [
    Document(
        page_content="The Shawshank Redemption is a 1994 American drama film written and directed by Frank Darabont.",
        metadata={
            "year": 1994,
            "rating": "9.3",
            "genre": "drama",
            "director": "Frank Darabont",
        },
    ),
    Document(
        page_content="The Godfather is a 1972 American crime film directed by Francis Ford Coppola.",
        metadata={
            "year": 1972,
            "rating": "9.2",
            "genre": "crime",
            "director": "Francis Ford Coppola",
        },
    ),
    Document(
        page_content="The Dark Knight is a 2008 superhero film directed by Christopher Nolan.",
        metadata={
            "year": 2008,
            "rating": "9.0",
            "genre": "science fiction",
            "director": "Christopher Nolan",
        },
    ),
    Document(
        page_content="Inception is a 2010 science fiction action film written and directed by Christopher Nolan.",
        metadata={
            "year": 2010,
            "rating": "8.8",
            "genre": "science fiction",
            "director": "Christopher Nolan",
        },
    ),
    Document(
        page_content="The Avengers is a 2012 American superhero film based on the Marvel Comics superhero team of the same name.",
        metadata={
            "year": 2012,
            "rating": "8.0",
            "genre": "science fiction",
            "director": "Joss Whedon",
        },
    ),
    Document(
        page_content="Black Panther is a 2018 American superhero film based on the Marvel Comics character of the same name.",
        metadata={
            "year": 2018,
            "rating": "7.3",
            "genre": "science fiction",
            "director": "Ryan Coogler",
        },
    ),
]

vector_db = TencentVectorDB.from_documents(
    docs,
    None,
    connection_params=ConnectionParams(
        url="http://10.0.X.X",
        key="eC4bLRy2va******************************",
        username="root",
        timeout=20,
    ),
    collection_name="self_query_movies",
    meta_fields=meta_fields,
    drop_old=True,
)
```
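As a quick sanity check that the collection was created and populated (note that `rating` is stored as a string `MetaField` here, which is why it is also described as a string in the attribute info below), you can run a plain similarity search against the store. A small illustrative sketch:

```
# Illustrative sanity check: plain similarity search against the collection
vector_db.similarity_search("superhero film directed by Christopher Nolan", k=2)
```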
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import ChatOpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="string"
    ),
]
document_content_description = "Brief summary of a movie"
```
```
llm = ChatOpenAI(temperature=0, model="gpt-4", max_tokens=4069)
retriever = SelfQueryRetriever.from_llm(
    llm, vector_db, document_content_description, metadata_field_info, verbose=True
)
```
## Test it out[](#test-it-out "Direct link to Test it out")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("movies about a superhero")
```
```
[Document(page_content='The Dark Knight is a 2008 superhero film directed by Christopher Nolan.', metadata={'year': 2008, 'rating': '9.0', 'genre': 'science fiction', 'director': 'Christopher Nolan'}), Document(page_content='The Avengers is a 2012 American superhero film based on the Marvel Comics superhero team of the same name.', metadata={'year': 2012, 'rating': '8.0', 'genre': 'science fiction', 'director': 'Joss Whedon'}), Document(page_content='Black Panther is a 2018 American superhero film based on the Marvel Comics character of the same name.', metadata={'year': 2018, 'rating': '7.3', 'genre': 'science fiction', 'director': 'Ryan Coogler'}), Document(page_content='The Godfather is a 1972 American crime film directed by Francis Ford Coppola.', metadata={'year': 1972, 'rating': '9.2', 'genre': 'crime', 'director': 'Francis Ford Coppola'})]
```
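`get_relevant_documents` is the long-standing entry point used throughout this notebook; on recent LangChain versions retrievers are also runnables, so the same lookup can be expressed with `invoke`. A small sketch, assuming the retriever defined above:

```
# Equivalent call through the runnable interface on newer LangChain versions
retriever.invoke("movies about a superhero")
```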
```
# This example only specifies a filter
retriever.get_relevant_documents("movies that were released after 2010")
```
```
[Document(page_content='The Avengers is a 2012 American superhero film based on the Marvel Comics superhero team of the same name.', metadata={'year': 2012, 'rating': '8.0', 'genre': 'science fiction', 'director': 'Joss Whedon'}), Document(page_content='Black Panther is a 2018 American superhero film based on the Marvel Comics character of the same name.', metadata={'year': 2018, 'rating': '7.3', 'genre': 'science fiction', 'director': 'Ryan Coogler'})]
```
```
# This example specifies both a relevant query and a filter
retriever.get_relevant_documents(
    "movies about a superhero which were released after 2010"
)
```
```
[Document(page_content='The Avengers is a 2012 American superhero film based on the Marvel Comics superhero team of the same name.', metadata={'year': 2012, 'rating': '8.0', 'genre': 'science fiction', 'director': 'Joss Whedon'}), Document(page_content='Black Panther is a 2018 American superhero film based on the Marvel Comics character of the same name.', metadata={'year': 2018, 'rating': '7.3', 'genre': 'science fiction', 'director': 'Ryan Coogler'})]
```
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self-query retriever to specify `k`, the number of documents to fetch, by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vector_db,
    document_content_description,
    metadata_field_info,
    verbose=True,
    enable_limit=True,
)
```
```
retriever.get_relevant_documents("what are two movies about a superhero")
```
```
[Document(page_content='The Dark Knight is a 2008 superhero film directed by Christopher Nolan.', metadata={'year': 2008, 'rating': '9.0', 'genre': 'science fiction', 'director': 'Christopher Nolan'}), Document(page_content='The Avengers is a 2012 American superhero film based on the Marvel Comics superhero team of the same name.', metadata={'year': 2012, 'rating': '8.0', 'genre': 'science fiction', 'director': 'Joss Whedon'})]
```
## Timescale Vector (Postgres)
> [Timescale Vector](https://www.timescale.com/ai) is `PostgreSQL++` for AI applications. It enables you to efficiently store and query billions of vector embeddings in `PostgreSQL`.
>
> [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL) also known as `Postgres`, is a free and open-source relational database management system (RDBMS) emphasizing extensibility and `SQL` compliance.
This notebook shows how to use the Postgres vector database (`TimescaleVector`) to perform self-querying. In the notebook we’ll demo the `SelfQueryRetriever` wrapped around a TimescaleVector vector store.
## What is Timescale Vector?[](#what-is-timescale-vector "Direct link to What is Timescale Vector?")
**[Timescale Vector](https://www.timescale.com/ai) is PostgreSQL++ for AI applications.**
Timescale Vector enables you to efficiently store and query millions of vector embeddings in `PostgreSQL`.

- Enhances `pgvector` with faster and more accurate similarity search on 1B+ vectors via a DiskANN-inspired indexing algorithm.
- Enables fast time-based vector search via automatic time-based partitioning and indexing.
- Provides a familiar SQL interface for querying vector embeddings and relational data.
Timescale Vector is cloud PostgreSQL for AI that scales with you from POC to production:

- Simplifies operations by enabling you to store relational metadata, vector embeddings, and time-series data in a single database.
- Benefits from a rock-solid PostgreSQL foundation with enterprise-grade features like streaming backups and replication, high availability, and row-level security.
- Enables a worry-free experience with enterprise-grade security and compliance.
## How to access Timescale Vector[](#how-to-access-timescale-vector "Direct link to How to access Timescale Vector")
Timescale Vector is available on [Timescale](https://www.timescale.com/ai), the cloud PostgreSQL platform. (There is no self-hosted version at this time.)
LangChain users get a 90-day free trial for Timescale Vector.

- To get started, [sign up](https://console.cloud.timescale.com/signup?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) for Timescale, create a new database and follow this notebook!
- See the [Timescale Vector explainer blog](https://www.timescale.com/blog/how-we-made-postgresql-the-best-vector-database/?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) for more details and performance benchmarks.
- See the [installation instructions](https://github.com/timescale/python-vector) for more details on using Timescale Vector in Python.
## Creating a TimescaleVector vectorstore[](#creating-a-timescalevector-vectorstore "Direct link to Creating a TimescaleVector vectorstore")
First we’ll want to create a Timescale Vector vectorstore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
NOTE: The self-query retriever requires you to have `lark` installed (`pip install lark`). We also need the `timescale-vector` package.
```
%pip install --upgrade --quiet lark
```
```
%pip install --upgrade --quiet timescale-vector
```
In this example, we’ll use `OpenAIEmbeddings`, so let’s load your OpenAI API key.
```
# Get openAI api key by reading local .env file
# The .env file should contain a line starting with `OPENAI_API_KEY=sk-`
import os

from dotenv import find_dotenv, load_dotenv

_ = load_dotenv(find_dotenv())
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
# Alternatively, use getpass to enter the key in a prompt
# import os
# import getpass
# os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
```
To connect to your PostgreSQL database, you’ll need your service URI, which can be found in the cheatsheet or `.env` file you downloaded after creating a new database.
If you haven’t already, [signup for Timescale](https://console.cloud.timescale.com/signup?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral), and create a new database.
The URI will look something like this: `postgres://tsdbadmin:<password>@<id>.tsdb.cloud.timescale.com:<port>/tsdb?sslmode=require`
```
# Get the service url by reading local .env file
# The .env file should contain a line starting with `TIMESCALE_SERVICE_URL=postgresql://`
_ = load_dotenv(find_dotenv())
TIMESCALE_SERVICE_URL = os.environ["TIMESCALE_SERVICE_URL"]
# Alternatively, use getpass to enter the key in a prompt
# import os
# import getpass
# TIMESCALE_SERVICE_URL = getpass.getpass("Timescale Service URL:")
```
```
from langchain_community.vectorstores.timescalevector import TimescaleVector
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
```
Here are the sample documents we’ll use for this demo. The data is about movies and has both content and metadata fields with information about a particular movie.
```
docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "year": 1979,
            "director": "Andrei Tarkovsky",
            "genre": "science fiction",
            "rating": 9.9,
        },
    ),
]
```
Finally, we’ll create our Timescale Vector vectorstore. Note that the collection name will be the name of the PostgreSQL table in which the documents are stored.
```
COLLECTION_NAME = "langchain_self_query_demo"
vectorstore = TimescaleVector.from_documents(
    embedding=embeddings,
    documents=docs,
    collection_name=COLLECTION_NAME,
    service_url=TIMESCALE_SERVICE_URL,
)
```
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

# Give LLM info about the metadata fields
metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"

# Instantiate the self-query retriever from an LLM
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```
## Self Querying Retrieval with Timescale Vector[](#self-querying-retrieval-with-timescale-vector "Direct link to Self Querying Retrieval with Timescale Vector")
And now we can try actually using our retriever!
Run the queries below and note how you can specify a query, a filter, or a composite filter (filters with AND, OR) in natural language, and the self-query retriever will translate the query into SQL and perform the search on the Timescale Vector (Postgres) vectorstore.
This illustrates the power of the self-query retriever. You can use it to perform complex searches over your vectorstore without you or your users having to write any SQL directly!
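Because the retriever above was created with `verbose=True`, it already prints the structured query (`query=... filter=... limit=...`) it generates for each request, as the outputs below show. If you want even more detail on the intermediate LLM call that produces that structured query, one option is LangChain's global debug switch. A minimal sketch (the example question is just an illustration):

```python
from langchain.globals import set_debug

# Enable LangChain's global debug logging to inspect the query-construction
# step (the LLM call and the structured query it produces).
set_debug(True)

retriever.get_relevant_documents("What are some highly rated science fiction films?")

# Disable debug logging again once you're done inspecting.
set_debug(False)
```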
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
```
/Users/avtharsewrathan/sideprojects2023/timescaleai/tsv-langchain/langchain/libs/langchain/langchain/chains/llm.py:275: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain. warnings.warn(
```
```
query='dinosaur' filter=None limit=None
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'genre': 'science fiction', 'rating': 7.7}), Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'genre': 'science fiction', 'rating': 7.7}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
```
```
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
```
```
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None
```
```
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'rating': 8.6, 'director': 'Satoshi Kon'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'rating': 8.6, 'director': 'Satoshi Kon'})]
```
```
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
```
```
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
```
```
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'rating': 8.3, 'director': 'Greta Gerwig'}), Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'rating': 8.3, 'director': 'Greta Gerwig'})]
```
```
# This example specifies a composite filter
retriever.get_relevant_documents(
    "What's a highly rated (above 8.5) science fiction film?"
)
```
```
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction')]) limit=None
```
```
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'})]
```
```
# This example specifies a query and composite filter
retriever.get_relevant_documents(
    "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
```
```
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) limit=None
```
```
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
```
### Filter k[](#filter-k "Direct link to Filter k")
We can also use the self query retriever to specify `k`: the number of documents to fetch.
We can do this by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
```
```
# This example specifies a query with a LIMIT value
retriever.get_relevant_documents("what are two movies about dinosaurs")
```
```
query='dinosaur' filter=None limit=2
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'genre': 'science fiction', 'rating': 7.7}), Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'genre': 'science fiction', 'rating': 7.7})]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:31.471Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/timescalevector_self_query/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/timescalevector_self_query/",
"description": "Timescale Vector is PostgreSQL++ for",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4019",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"timescalevector_self_query\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:31 GMT",
"etag": "W/\"8a233f3e8761d4907e30927bf7a51408\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::p4nxg-1713753751342-05c2cd45980a"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/self_query/timescalevector_self_query/",
"property": "og:url"
},
{
"content": "Timescale Vector (Postgres) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Timescale Vector is PostgreSQL++ for",
"property": "og:description"
}
],
"title": "Timescale Vector (Postgres) | 🦜️🔗 LangChain"
} | Timescale Vector (Postgres)
Timescale Vector is PostgreSQL++ for AI applications. It enables you to efficiently store and query billions of vector embeddings in PostgreSQL.
PostgreSQL also known as Postgres, is a free and open-source relational database management system (RDBMS) emphasizing extensibility and SQL compliance.
This notebook shows how to use the Postgres vector database (TimescaleVector) to perform self-querying. In the notebook we’ll demo the SelfQueryRetriever wrapped around a TimescaleVector vector store.
What is Timescale Vector?
Timescale Vector is PostgreSQL++ for AI applications.
Timescale Vector enables you to efficiently store and query millions of vector embeddings in PostgreSQL. - Enhances pgvector with faster and more accurate similarity search on 1B+ vectors via a DiskANN-inspired indexing algorithm. - Enables fast time-based vector search via automatic time-based partitioning and indexing. - Provides a familiar SQL interface for querying vector embeddings and relational data.
Timescale Vector is cloud PostgreSQL for AI that scales with you from POC to production: - Simplifies operations by enabling you to store relational metadata, vector embeddings, and time-series data in a single database. - Benefits from a rock-solid PostgreSQL foundation with enterprise-grade features like streaming backups and replication, high availability, and row-level security. - Enables a worry-free experience with enterprise-grade security and compliance.
How to access Timescale Vector
Timescale Vector is available on Timescale, the cloud PostgreSQL platform. (There is no self-hosted version at this time.)
LangChain users get a 90-day free trial for Timescale Vector. - To get started, signup to Timescale, create a new database and follow this notebook! - See the Timescale Vector explainer blog for more details and performance benchmarks. - See the installation instructions for more details on using Timescale Vector in python.
Creating a TimescaleVector vectorstore
First we’ll want to create a Timescale Vector vectorstore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
NOTE: The self-query retriever requires you to have lark installed (pip install lark). We also need the timescale-vector package.
%pip install --upgrade --quiet lark
%pip install --upgrade --quiet timescale-vector
In this example, we’ll use OpenAIEmbeddings, so let’s load your OpenAI API key.
# Get openAI api key by reading local .env file
# The .env file should contain a line starting with `OPENAI_API_KEY=sk-`
import os
from dotenv import find_dotenv, load_dotenv
_ = load_dotenv(find_dotenv())
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
# Alternatively, use getpass to enter the key in a prompt
# import os
# import getpass
# os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
To connect to your PostgreSQL database, you’ll need your service URI, which can be found in the cheatsheet or .env file you downloaded after creating a new database.
If you haven’t already, signup for Timescale, and create a new database.
The URI will look something like this: postgres://tsdbadmin:<password>@<id>.tsdb.cloud.timescale.com:<port>/tsdb?sslmode=require
# Get the service url by reading local .env file
# The .env file should contain a line starting with `TIMESCALE_SERVICE_URL=postgresql://`
_ = load_dotenv(find_dotenv())
TIMESCALE_SERVICE_URL = os.environ["TIMESCALE_SERVICE_URL"]
# Alternatively, use getpass to enter the key in a prompt
# import os
# import getpass
# TIMESCALE_SERVICE_URL = getpass.getpass("Timescale Service URL:")
from langchain_community.vectorstores.timescalevector import TimescaleVector
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
Here are the sample documents we’ll use for this demo. The data is about movies and has both content and metadata fields with information about a particular movie.
docs = [
Document(
page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
),
Document(
page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
),
Document(
page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
),
Document(
page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
),
Document(
page_content="Toys come alive and have a blast doing so",
metadata={"year": 1995, "genre": "animated"},
),
Document(
page_content="Three men walk into the Zone, three men walk out of the Zone",
metadata={
"year": 1979,
"director": "Andrei Tarkovsky",
"genre": "science fiction",
"rating": 9.9,
},
),
]
Finally, we’ll create our Timescale Vector vectorstore. Note that the collection name will be the name of the PostgreSQL table in which the documents are stored.
COLLECTION_NAME = "langchain_self_query_demo"
vectorstore = TimescaleVector.from_documents(
embedding=embeddings,
documents=docs,
collection_name=COLLECTION_NAME,
service_url=TIMESCALE_SERVICE_URL,
)
Creating our self-querying retriever
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI
# Give LLM info about the metadata fields
metadata_field_info = [
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating", description="A 1-10 rating for the movie", type="float"
),
]
document_content_description = "Brief summary of a movie"
# Instantiate the self-query retriever from an LLM
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
Self Querying Retrieval with Timescale Vector
And now we can try actually using our retriever!
Run the queries below and note how you can specify a query, a filter, or a composite filter (filters with AND, OR) in natural language, and the self-query retriever will translate the query into SQL and perform the search on the Timescale Vector (Postgres) vectorstore.
This illustrates the power of the self-query retriever. You can use it to perform complex searches over your vectorstore without you or your users having to write any SQL directly!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
/Users/avtharsewrathan/sideprojects2023/timescaleai/tsv-langchain/langchain/libs/langchain/langchain/chains/llm.py:275: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
warnings.warn(
query='dinosaur' filter=None limit=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'genre': 'science fiction', 'rating': 7.7}),
Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'genre': 'science fiction', 'rating': 7.7}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'rating': 8.6, 'director': 'Satoshi Kon'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'rating': 8.6, 'director': 'Satoshi Kon'})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'rating': 8.3, 'director': 'Greta Gerwig'}),
Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'rating': 8.3, 'director': 'Greta Gerwig'})]
# This example specifies a composite filter
retriever.get_relevant_documents(
"What's a highly rated (above 8.5) science fiction film?"
)
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GTE: 'gte'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction')]) limit=None
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'genre': 'science fiction', 'rating': 9.9, 'director': 'Andrei Tarkovsky'})]
# This example specifies a query and composite filter
retriever.get_relevant_documents(
"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) limit=None
[Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]
Filter k
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True,
)
# This example specifies a query with a LIMIT value
retriever.get_relevant_documents("what are two movies about dinosaurs")
query='dinosaur' filter=None limit=2
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'genre': 'science fiction', 'rating': 7.7}),
Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'genre': 'science fiction', 'rating': 7.7})] |
https://python.langchain.com/docs/integrations/retrievers/arxiv/ | ## Arxiv
> [arXiv](https://arxiv.org/) is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.
This notebook shows how to retrieve scientific articles from `Arxiv.org` into the Document format that is used downstream.
## Installation[](#installation "Direct link to Installation")
First, you need to install the `arxiv` Python package.
```
%pip install --upgrade --quiet arxiv
```
`ArxivRetriever` has these arguments: - optional `load_max_docs`: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now. - optional `load_all_available_meta`: default=False. By default only the most important fields are downloaded: `Published` (date when the document was published/last updated), `Title`, `Authors`, `Summary`. If True, the other fields are also downloaded.
`get_relevant_documents()` has one argument, `query`: free text which is used to find documents on `Arxiv.org`
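As a quick illustration of these arguments, here is a small sketch that requests the full metadata for a couple of documents (the query string is just an example):

```python
from langchain_community.retrievers import ArxivRetriever

# Keep load_max_docs small for experiments; downloading articles is slow.
retriever = ArxivRetriever(load_max_docs=2, load_all_available_meta=True)

docs = retriever.get_relevant_documents(query="heat-bath random walks")
for doc in docs:
    # With load_all_available_meta=True, metadata contains more than the
    # default Published/Title/Authors/Summary fields.
    print(doc.metadata.get("Title"), "-", doc.metadata.get("Published"))
```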
## Examples[](#examples "Direct link to Examples")
### Running retriever[](#running-retriever "Direct link to Running retriever")
```
from langchain_community.retrievers import ArxivRetriever
```
```
retriever = ArxivRetriever(load_max_docs=2)
```
```
docs = retriever.get_relevant_documents(query="1605.08386")
```
```
docs[0].metadata # meta-information of the Document
```
```
{'Published': '2016-05-26', 'Title': 'Heat-bath random walks with Markov bases', 'Authors': 'Caprice Stanley, Tobias Windisch', 'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'}
```
```
docs[0].page_content[:400] # a content of the Document
```
```
'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on fibers of a\nfixed integer matrix can be bounded from above by a constant. We then study the mixing\nbehaviour of heat-b'
```
### Question Answering on facts[](#question-answering-on-facts "Direct link to Question Answering on facts")
```
# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass

OPENAI_API_KEY = getpass()
```
```
import os

os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
```
```
from langchain.chains import ConversationalRetrievalChain
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-3.5-turbo")  # switch to 'gpt-4'
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)
```
```
questions = [
    "What are Heat-bath random walks with Markov base?",
    "What is the ImageBind model?",
    "How does Compositional Reasoning with Large Language Models works?",
]
chat_history = []

for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result["answer"]))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
```
```
-> **Question**: What are Heat-bath random walks with Markov base? 

**Answer**: I'm not sure, as I don't have enough context to provide a definitive answer. The term "Heat-bath random walks with Markov base" is not mentioned in the given text. Could you provide more information or context about where you encountered this term? 

-> **Question**: What is the ImageBind model? 

**Answer**: ImageBind is an approach developed by Facebook AI Research to learn a joint embedding across six different modalities, including images, text, audio, depth, thermal, and IMU data. The approach uses the binding property of images to align each modality's embedding to image embeddings and achieve an emergent alignment across all modalities. This enables novel multimodal capabilities, including cross-modal retrieval, embedding-space arithmetic, and audio-to-image generation, among others. The approach sets a new state-of-the-art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Additionally, it shows strong few-shot recognition results and serves as a new way to evaluate vision models for visual and non-visual tasks. 

-> **Question**: How does Compositional Reasoning with Large Language Models works? 

**Answer**: Compositional reasoning with large language models refers to the ability of these models to correctly identify and represent complex concepts by breaking them down into smaller, more basic parts and combining them in a structured way. This involves understanding the syntax and semantics of language and using that understanding to build up more complex meanings from simpler ones. 

In the context of the paper "Does CLIP Bind Concepts? Probing Compositionality in Large Image Models", the authors focus specifically on the ability of a large pretrained vision and language model (CLIP) to encode compositional concepts and to bind variables in a structure-sensitive way. They examine CLIP's ability to compose concepts in a single-object setting, as well as in situations where concept binding is needed. 

The authors situate their work within the tradition of research on compositional distributional semantics models (CDSMs), which seek to bridge the gap between distributional models and formal semantics by building architectures which operate over vectors yet still obey traditional theories of linguistic composition. They compare the performance of CLIP with several architectures from research on CDSMs to evaluate its ability to encode and reason about compositional concepts.
```
```
questions = [
    "What are Heat-bath random walks with Markov base? Include references to answer.",
]
chat_history = []

for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result["answer"]))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
```
```
-> **Question**: What are Heat-bath random walks with Markov base? Include references to answer. 

**Answer**: Heat-bath random walks with Markov base (HB-MB) is a class of stochastic processes that have been studied in the field of statistical mechanics and condensed matter physics. In these processes, a particle moves in a lattice by making a transition to a neighboring site, which is chosen according to a probability distribution that depends on the energy of the particle and the energy of its surroundings.

The HB-MB process was introduced by Bortz, Kalos, and Lebowitz in 1975 as a way to simulate the dynamics of interacting particles in a lattice at thermal equilibrium. The method has been used to study a variety of physical phenomena, including phase transitions, critical behavior, and transport properties.

References:

Bortz, A. B., Kalos, M. H., & Lebowitz, J. L. (1975). A new algorithm for Monte Carlo simulation of Ising spin systems. Journal of Computational Physics, 17(1), 10-18.

Binder, K., & Heermann, D. W. (2010). Monte Carlo simulation in statistical physics: an introduction. Springer Science & Business Media.
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:31.871Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/arxiv/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/arxiv/",
"description": "arXiv is an open-access archive for 2 million",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4056",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"arxiv\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:31 GMT",
"etag": "W/\"6d602daacdaa93a41334778cfa8818ff\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::6jz7h-1713753751498-4001e6d4dda3"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/arxiv/",
"property": "og:url"
},
{
"content": "Arxiv | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "arXiv is an open-access archive for 2 million",
"property": "og:description"
}
],
"title": "Arxiv | 🦜️🔗 LangChain"
} | Arxiv
arXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.
This notebook shows how to retrieve scientific articles from Arxiv.org into the Document format that is used downstream.
Installation
First, you need to install the arxiv Python package.
%pip install --upgrade --quiet arxiv
ArxivRetriever has these arguments: - optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now. - optional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (date when the document was published/last updated), Title, Authors, Summary. If True, the other fields are also downloaded.
get_relevant_documents() has one argument, query: free text which is used to find documents on Arxiv.org
Examples
Running retriever
from langchain_community.retrievers import ArxivRetriever
retriever = ArxivRetriever(load_max_docs=2)
docs = retriever.get_relevant_documents(query="1605.08386")
docs[0].metadata # meta-information of the Document
{'Published': '2016-05-26',
'Title': 'Heat-bath random walks with Markov bases',
'Authors': 'Caprice Stanley, Tobias Windisch',
'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'}
docs[0].page_content[:400] # a content of the Document
'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on fibers of a\nfixed integer matrix can be bounded from above by a constant. We then study the mixing\nbehaviour of heat-b'
Question Answering on facts
# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass
OPENAI_API_KEY = getpass()
import os
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
from langchain.chains import ConversationalRetrievalChain
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-3.5-turbo") # switch to 'gpt-4'
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)
questions = [
"What are Heat-bath random walks with Markov base?",
"What is the ImageBind model?",
"How does Compositional Reasoning with Large Language Models works?",
]
chat_history = []
for question in questions:
result = qa({"question": question, "chat_history": chat_history})
chat_history.append((question, result["answer"]))
print(f"-> **Question**: {question} \n")
print(f"**Answer**: {result['answer']} \n")
-> **Question**: What are Heat-bath random walks with Markov base?
**Answer**: I'm not sure, as I don't have enough context to provide a definitive answer. The term "Heat-bath random walks with Markov base" is not mentioned in the given text. Could you provide more information or context about where you encountered this term?
-> **Question**: What is the ImageBind model?
**Answer**: ImageBind is an approach developed by Facebook AI Research to learn a joint embedding across six different modalities, including images, text, audio, depth, thermal, and IMU data. The approach uses the binding property of images to align each modality's embedding to image embeddings and achieve an emergent alignment across all modalities. This enables novel multimodal capabilities, including cross-modal retrieval, embedding-space arithmetic, and audio-to-image generation, among others. The approach sets a new state-of-the-art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Additionally, it shows strong few-shot recognition results and serves as a new way to evaluate vision models for visual and non-visual tasks.
-> **Question**: How does Compositional Reasoning with Large Language Models works?
**Answer**: Compositional reasoning with large language models refers to the ability of these models to correctly identify and represent complex concepts by breaking them down into smaller, more basic parts and combining them in a structured way. This involves understanding the syntax and semantics of language and using that understanding to build up more complex meanings from simpler ones.
In the context of the paper "Does CLIP Bind Concepts? Probing Compositionality in Large Image Models", the authors focus specifically on the ability of a large pretrained vision and language model (CLIP) to encode compositional concepts and to bind variables in a structure-sensitive way. They examine CLIP's ability to compose concepts in a single-object setting, as well as in situations where concept binding is needed.
The authors situate their work within the tradition of research on compositional distributional semantics models (CDSMs), which seek to bridge the gap between distributional models and formal semantics by building architectures which operate over vectors yet still obey traditional theories of linguistic composition. They compare the performance of CLIP with several architectures from research on CDSMs to evaluate its ability to encode and reason about compositional concepts.
questions = [
"What are Heat-bath random walks with Markov base? Include references to answer.",
]
chat_history = []
for question in questions:
result = qa({"question": question, "chat_history": chat_history})
chat_history.append((question, result["answer"]))
print(f"-> **Question**: {question} \n")
print(f"**Answer**: {result['answer']} \n")
-> **Question**: What are Heat-bath random walks with Markov base? Include references to answer.
**Answer**: Heat-bath random walks with Markov base (HB-MB) is a class of stochastic processes that have been studied in the field of statistical mechanics and condensed matter physics. In these processes, a particle moves in a lattice by making a transition to a neighboring site, which is chosen according to a probability distribution that depends on the energy of the particle and the energy of its surroundings.
The HB-MB process was introduced by Bortz, Kalos, and Lebowitz in 1975 as a way to simulate the dynamics of interacting particles in a lattice at thermal equilibrium. The method has been used to study a variety of physical phenomena, including phase transitions, critical behavior, and transport properties.
References:
Bortz, A. B., Kalos, M. H., & Lebowitz, J. L. (1975). A new algorithm for Monte Carlo simulation of Ising spin systems. Journal of Computational Physics, 17(1), 10-18.
Binder, K., & Heermann, D. W. (2010). Monte Carlo simulation in statistical physics: an introduction. Springer Science & Business Media. |
https://python.langchain.com/docs/integrations/retrievers/activeloop/ | ## Activeloop Deep Memory
> [Activeloop Deep Memory](https://docs.activeloop.ai/performance-features/deep-memory) is a suite of tools that enables you to optimize your Vector Store for your use-case and achieve higher accuracy in your LLM apps.
`Retrieval-Augmented Generation` (`RAG`) has recently gained significant attention. As advanced RAG techniques and agents emerge, they expand the potential of what RAGs can accomplish. However, several challenges may limit the integration of RAGs into production. The primary factors to consider when implementing RAGs in production settings are accuracy (recall), cost, and latency. For basic use cases, OpenAI’s Ada model paired with a naive similarity search can produce satisfactory results. Yet, for higher accuracy or recall during searches, one might need to employ advanced retrieval techniques. These methods might involve varying data chunk sizes, rewriting queries multiple times, and more, potentially increasing latency and costs. Activeloop’s [Deep Memory](https://www.activeloop.ai/resources/use-deep-memory-to-boost-rag-apps-accuracy-by-up-to-22/), a feature available to `Activeloop Deep Lake` users, addresses these issues by introducing a tiny neural network layer trained to match user queries with relevant data from a corpus. While this addition incurs minimal latency during search, it can boost retrieval accuracy by up to 27% and remains cost-effective and simple to use, without requiring any additional advanced RAG techniques.
For this tutorial we will parse the `DeepLake` documentation and create a RAG system that can answer questions from the docs.
## 1\. Dataset Creation[](#dataset-creation "Direct link to 1. Dataset Creation")
We will parse Activeloop’s docs for this tutorial using the `BeautifulSoup` library and LangChain’s document loaders and transformers like `Html2TextTransformer` and `AsyncHtmlLoader`. So we will need to install the following libraries:
```
%pip install --upgrade --quiet tiktoken langchain-openai python-dotenv datasets langchain deeplake beautifulsoup4 html2text ragas
```
You’ll also need to create an [Activeloop](https://activeloop.ai/) account.
```
from langchain.chains import RetrievalQA
from langchain_community.vectorstores import DeepLake
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
```
```
import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API token: ")
# # activeloop token is needed if you are not signed in using CLI: `activeloop login -u <USERNAME> -p <PASSWORD>`
os.environ["ACTIVELOOP_TOKEN"] = getpass.getpass(
    "Enter your ActiveLoop API token: "
)  # Get your API token from https://app.activeloop.ai, click on your profile picture in the top right corner, and select "API Tokens"
token = os.getenv("ACTIVELOOP_TOKEN")
openai_embeddings = OpenAIEmbeddings()
```
```
db = DeepLake(
    dataset_path=f"hub://{ORG_ID}/deeplake-docs-deepmemory",  # org_id stands for your username or organization from activeloop
    embedding=openai_embeddings,
    runtime={"tensor_db": True},
    token=token,
    # overwrite=True, # user overwrite flag if you want to overwrite the full dataset
    read_only=False,
)
```
parsing all links in the webpage using `BeautifulSoup`
```
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


def get_all_links(url):
    response = requests.get(url)
    if response.status_code != 200:
        print(f"Failed to retrieve the page: {url}")
        return []
    soup = BeautifulSoup(response.content, "html.parser")
    # Finding all 'a' tags which typically contain href attribute for links
    links = [
        urljoin(url, a["href"]) for a in soup.find_all("a", href=True) if a["href"]
    ]
    return links


base_url = "https://docs.deeplake.ai/en/latest/"
all_links = get_all_links(base_url)
```
Loading data:
```
from langchain_community.document_loaders.async_html import AsyncHtmlLoader

loader = AsyncHtmlLoader(all_links)
docs = loader.load()
```
Converting data into user readable format:
```
from langchain_community.document_transformers import Html2TextTransformer

html2text = Html2TextTransformer()
docs_transformed = html2text.transform_documents(docs)
```
Now, let us chunk the documents further, as some of them contain too much text:
```
from langchain_text_splitters import RecursiveCharacterTextSplitter

chunk_size = 4096
docs_new = []

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=chunk_size,
)

for doc in docs_transformed:
    if len(doc.page_content) < chunk_size:
        docs_new.append(doc)
    else:
        docs = text_splitter.create_documents([doc.page_content])
        docs_new.extend(docs)
```
Populating VectorStore:
```
docs = db.add_documents(docs_new)
```
## 2\. Generating synthetic queries and training Deep Memory[](#generating-synthetic-queries-and-training-deep-memory "Direct link to 2. Generating synthetic queries and training Deep Memory")
The next step is to train a deep\_memory model that will align your users’ queries with the dataset you already have. If you don’t have any user queries yet, no worries, we will generate them using an LLM!
#### TODO: Add image[](#todo-add-image "Direct link to TODO: Add image")
Above we showed the overall schema of how deep\_memory works. As you can see, in order to train it you need queries and relevance together with corpus data (the data that we want to query). The corpus data was already populated in the previous section; here we will be generating questions and relevance.
1. `questions` - a list of strings, where each string represents a query
2. `relevance` - contains links to the ground truth for each question. There might be several docs that contain the answer to a given question. Because of this, relevance is a `List[List[tuple[str, float]]]`, where the outer list represents queries and the inner list relevant documents. Each tuple contains a (str, float) pair, where the string represents the id of the source doc (corresponding to the `id` tensor in the dataset), while the float corresponds to how relevant the current document is to the question (see the sketch after this list).
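To make these shapes concrete, here is a tiny illustrative sketch; the queries, document ids, and scores below are made up:

```python
# Three hypothetical queries...
questions = [
    "How do I create a Deep Lake dataset?",
    "What tensor htypes are supported?",
    "How can I filter a dataset with TQL?",
]

# ...and, for each query, a list of (document id, relevance score) pairs.
# The ids must match the `id` tensor of the corpus dataset; the float says
# how relevant that document is to the query (1.0 = fully relevant).
relevance = [
    [("doc-id-017", 1.0)],
    [("doc-id-042", 1.0), ("doc-id-043", 0.5)],
    [("doc-id-101", 1.0)],
]
```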
Now, let us generate synthetic questions and relevance:
```
from typing import List

from langchain.chains.openai_functions import (
    create_structured_output_chain,
)
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
```
```
# fetch dataset docs and ids if they exist (optional you can also ingest)
docs = db.vectorstore.dataset.text.data(fetch_chunks=True, aslist=True)["value"]
ids = db.vectorstore.dataset.id.data(fetch_chunks=True, aslist=True)["value"]
```
```
# If we pass in a model explicitly, we need to make sure it supports the OpenAI function-calling API.
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)


class Questions(BaseModel):
    """Identifying information about a person."""

    question: str = Field(..., description="Questions about text")


prompt_msgs = [
    SystemMessage(
        content="You are a world class expert for generating questions based on provided context. \
                You make sure the question can be answered by the text."
    ),
    HumanMessagePromptTemplate.from_template(
        "Use the given text to generate a question from the following input: {input}"
    ),
    HumanMessage(content="Tips: Make sure to answer in the correct format"),
]
prompt = ChatPromptTemplate(messages=prompt_msgs)
chain = create_structured_output_chain(Questions, llm, prompt, verbose=True)

text = "# Understanding Hallucinations and Bias ## **Introduction** In this lesson, we'll cover the concept of **hallucinations** in LLMs, highlighting their influence on AI applications and demonstrating how to mitigate them using techniques like the retriever's architectures. We'll also explore **bias** within LLMs with examples."
questions = chain.run(input=text)
print(questions)
```
```
import random

from langchain_openai import OpenAIEmbeddings
from tqdm import tqdm


def generate_queries(docs: List[str], ids: List[str], n: int = 100):
    questions = []
    relevances = []
    pbar = tqdm(total=n)
    while len(questions) < n:
        # 1. randomly draw a piece of text and relevance id
        r = random.randint(0, len(docs) - 1)
        text, label = docs[r], ids[r]

        # 2. generate queries and assign and relevance id
        generated_qs = [chain.run(input=text).question]
        questions.extend(generated_qs)
        relevances.extend([[(label, 1)] for _ in generated_qs])
        pbar.update(len(generated_qs))
        if len(questions) % 10 == 0:
            print(f"q: {len(questions)}")
    return questions[:n], relevances[:n]


chain = create_structured_output_chain(Questions, llm, prompt, verbose=False)
questions, relevances = generate_queries(docs, ids, n=200)

train_questions, train_relevances = questions[:100], relevances[:100]
test_questions, test_relevances = questions[100:], relevances[100:]
```
We have now created 100 training queries as well as 100 queries for testing. Now let us train deep\_memory:
```
job_id = db.vectorstore.deep_memory.train(
    queries=train_questions,
    relevance=train_relevances,
)
```
Let us track the training progress:
```
db.vectorstore.deep_memory.status("6538939ca0b69a9ca45c528c")
```
```
--------------------------------------------------------------
|                  6538e02ecda4691033a51c5b                  |
--------------------------------------------------------------
| status                     | completed                     |
--------------------------------------------------------------
| progress                   | eta: 1.4 seconds              |
|                            | recall@10: 79.00% (+34.00%)   |
--------------------------------------------------------------
| results                    | recall@10: 79.00% (+34.00%)   |
--------------------------------------------------------------
```
## 3\. Evaluating Deep Memory performance[](#evaluating-deep-memory-performance "Direct link to 3. Evaluating Deep Memory performance")
Great, we’ve trained the model! It’s showing a substantial improvement in recall, but how can we use it now and evaluate it on unseen data? In this section we will delve into model evaluation and inference and see how they can be used with LangChain in order to increase retrieval accuracy.
### 3.1 Deep Memory evaluation[](#deep-memory-evaluation "Direct link to 3.1 Deep Memory evaluation")
To begin, we can use deep\_memory’s built-in evaluation method. It calculates several `recall` metrics and can be run easily in a few lines of code.
```
recall = db.vectorstore.deep_memory.evaluate(
    queries=test_questions,
    relevance=test_relevances,
)
```
```
Embedding queries took 0.81 seconds
---- Evaluating without model ----
Recall@1:   9.0%
Recall@3:   19.0%
Recall@5:   24.0%
Recall@10:  42.0%
Recall@50:  93.0%
Recall@100: 98.0%
---- Evaluating with model ----
Recall@1:   19.0%
Recall@3:   42.0%
Recall@5:   49.0%
Recall@10:  69.0%
Recall@50:  97.0%
Recall@100: 97.0%
```
It is showing quite a substantial improvement on an unseen test dataset too!
### 3.2 Deep Memory + RAGas[](#deep-memory-ragas "Direct link to 3.2 Deep Memory + RAGas")
```
from ragas.langchain import RagasEvaluatorChain
from ragas.metrics import (
    context_recall,
)
```
Let us convert relevance into ground truths:
```
def convert_relevance_to_ground_truth(docs, relevance):
    ground_truths = []
    for rel in relevance:
        ground_truth = []
        for doc_id, _ in rel:
            ground_truth.append(docs[doc_id])
        ground_truths.append(ground_truth)
    return ground_truths
```
```
ground_truths = convert_relevance_to_ground_truth(docs, test_relevances)

for deep_memory in [False, True]:
    print("\nEvaluating with deep_memory =", deep_memory)
    print("===================================")

    retriever = db.as_retriever()
    retriever.search_kwargs["deep_memory"] = deep_memory

    qa_chain = RetrievalQA.from_chain_type(
        llm=ChatOpenAI(model="gpt-3.5-turbo"),
        chain_type="stuff",
        retriever=retriever,
        return_source_documents=True,
    )

    metrics = {
        "context_recall_score": 0,
    }

    eval_chains = {m.name: RagasEvaluatorChain(metric=m) for m in [context_recall]}

    for question, ground_truth in zip(test_questions, ground_truths):
        result = qa_chain({"query": question})
        result["ground_truths"] = ground_truth
        for name, eval_chain in eval_chains.items():
            score_name = f"{name}_score"
            metrics[score_name] += eval_chain(result)[score_name]

    for metric in metrics:
        metrics[metric] /= len(test_questions)
        print(f"{metric}: {metrics[metric]}")
    print("===================================")
```
```
Evaluating with deep_memory = False
===================================
context_recall_score = 0.3763423145
===================================

Evaluating with deep_memory = True
===================================
context_recall_score = 0.5634545323
===================================
```
### 3.3 Deep Memory Inference[](#deep-memory-inference "Direct link to 3.3 Deep Memory Inference")
#### TODO: Add image[](#todo-add-image-1 "Direct link to TODO: Add image")
with deep\_memory
```
retriever = db.as_retriever()
retriever.search_kwargs["deep_memory"] = True
retriever.search_kwargs["k"] = 10

query = "Deamination of cytidine to uridine on the minus strand of viral DNA results in catastrophic G-to-A mutations in the viral genome."
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4"), chain_type="stuff", retriever=retriever
)
print(qa.run(query))
```
```
The base htype of the 'video_seq' tensor is 'video'.
```
without deep\_memory
```
retriever = db.as_retriever()
retriever.search_kwargs["deep_memory"] = False
retriever.search_kwargs["k"] = 10

query = "Deamination of cytidine to uridine on the minus strand of viral DNA results in catastrophic G-to-A mutations in the viral genome."
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4"), chain_type="stuff", retriever=retriever
)
qa.run(query)
```
```
The text does not provide information on the base htype of the 'video_seq' tensor.
```
### 3.4 Deep Memory cost savings[](#deep-memory-cost-savings "Direct link to 3.4 Deep Memory cost savings")
Deep Memory increases retrieval accuracy without altering your existing workflow. Additionally, by reducing the top\_k input into the LLM, you can significantly cut inference costs via lower token usage.
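For example, because Deep Memory surfaces the relevant documents higher in the ranking, you may be able to pass fewer chunks to the LLM. A minimal sketch, reusing the retriever pattern from above (the smaller `k` value and the question are just illustrations):

```python
retriever = db.as_retriever()
retriever.search_kwargs["deep_memory"] = True
# With better ranking, a smaller top_k can be enough, which means fewer
# tokens are sent to the LLM and inference therefore costs less.
retriever.search_kwargs["k"] = 3

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-3.5-turbo"), chain_type="stuff", retriever=retriever
)
print(qa.run("How do I create a Deep Lake dataset?"))
``` | null | {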
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:32.213Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/activeloop/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/activeloop/",
"description": "[Activeloop Deep",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3601",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"activeloop\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:31 GMT",
"etag": "W/\"1517ec698df20eafb69e9e6f3133295f\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::vtglz-1713753751564-6a348c39fc42"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/activeloop/",
"property": "og:url"
},
{
"content": "Activeloop Deep Memory | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "[Activeloop Deep",
"property": "og:description"
}
],
"title": "Activeloop Deep Memory | 🦜️🔗 LangChain"
} | Activeloop Deep Memory
Activeloop Deep Memory is a suite of tools that enables you to optimize your Vector Store for your use-case and achieve higher accuracy in your LLM apps.
Retrieval-Augmented Generation (RAG) has recently gained significant attention. As advanced RAG techniques and agents emerge, they expand the potential of what RAGs can accomplish. However, several challenges may limit the integration of RAGs into production. The primary factors to consider when implementing RAGs in production settings are accuracy (recall), cost, and latency. For basic use cases, OpenAI’s Ada model paired with a naive similarity search can produce satisfactory results. Yet, for higher accuracy or recall during searches, one might need to employ advanced retrieval techniques. These methods might involve varying data chunk sizes, rewriting queries multiple times, and more, potentially increasing latency and costs. Activeloop’s Deep Memory, a feature available to Activeloop Deep Lake users, addresses these issues by introducing a tiny neural network layer trained to match user queries with relevant data from a corpus. While this addition incurs minimal latency during search, it can boost retrieval accuracy by up to 27% and remains cost-effective and simple to use, without requiring any additional advanced RAG techniques.
For this tutorial we will parse the DeepLake documentation and create a RAG system that can answer questions from the docs.
1. Dataset Creation
We will parse Activeloop’s docs for this tutorial using the BeautifulSoup library and LangChain’s document parsers such as Html2TextTransformer and AsyncHtmlLoader. So we will need to install the following libraries:
%pip install --upgrade --quiet tiktoken langchain-openai python-dotenv datasets langchain deeplake beautifulsoup4 html2text ragas
You’ll also need to create an Activeloop account.
from langchain.chains import RetrievalQA
from langchain_community.vectorstores import DeepLake
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
import getpass
import os
os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API token: ")
# # activeloop token is needed if you are not signed in using CLI: `activeloop login -u <USERNAME> -p <PASSWORD>`
os.environ["ACTIVELOOP_TOKEN"] = getpass.getpass(
"Enter your ActiveLoop API token: "
) # Get your API token from https://app.activeloop.ai, click on your profile picture in the top right corner, and select "API Tokens"
token = os.getenv("ACTIVELOOP_TOKEN")
openai_embeddings = OpenAIEmbeddings()
db = DeepLake(
dataset_path=f"hub://{ORG_ID}/deeplake-docs-deepmemory", # org_id stands for your username or organization from activeloop
embedding=openai_embeddings,
runtime={"tensor_db": True},
token=token,
# overwrite=True, # user overwrite flag if you want to overwrite the full dataset
read_only=False,
)
parsing all links in the webpage using BeautifulSoup
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup
def get_all_links(url):
response = requests.get(url)
if response.status_code != 200:
print(f"Failed to retrieve the page: {url}")
return []
soup = BeautifulSoup(response.content, "html.parser")
# Finding all 'a' tags which typically contain href attribute for links
links = [
urljoin(url, a["href"]) for a in soup.find_all("a", href=True) if a["href"]
]
return links
base_url = "https://docs.deeplake.ai/en/latest/"
all_links = get_all_links(base_url)
Loading data:
from langchain_community.document_loaders.async_html import AsyncHtmlLoader
loader = AsyncHtmlLoader(all_links)
docs = loader.load()
Converting data into a user-readable format:
from langchain_community.document_transformers import Html2TextTransformer
html2text = Html2TextTransformer()
docs_transformed = html2text.transform_documents(docs)
Now, let us further chunk the documents, as some of them contain too much text:
from langchain_text_splitters import RecursiveCharacterTextSplitter
chunk_size = 4096
docs_new = []
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size,
)
for doc in docs_transformed:
if len(doc.page_content) < chunk_size:
docs_new.append(doc)
else:
docs = text_splitter.create_documents([doc.page_content])
docs_new.extend(docs)
Populating VectorStore:
docs = db.add_documents(docs_new)
2. Generating synthetic queries and training Deep Memory
The next step is to train a deep_memory model that will align your users’ queries with the dataset you already have. If you don’t have any user queries yet, no worries, we will generate them using an LLM!
TODO: Add image
Above we showed the overall schema of how deep_memory works. As you can see, in order to train it you need queries and relevance labels together with corpus data (the data that we want to query). The corpus data was already populated in the previous section; here we will generate the questions and relevance.
questions - a list of strings, where each string represents a query.
relevance - contains links to the ground truth for each question. There might be several docs that contain the answer to a given question. Because of this, relevance is List[List[tuple[str, float]]], where the outer list represents queries and the inner list relevant documents. Each tuple contains a (str, float) pair: the string is the id of the source doc (it corresponds to the id tensor in the dataset), and the float indicates how relevant that document is to the question. A small example of both structures is shown below.
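For illustration only (the ids and scores below are made up, not taken from the dataset above), the two structures might look like this:

```python
# Hypothetical example of the expected shapes for deep_memory training data.
questions = [
    "How do I create a Deep Lake dataset?",
    "Which index types does the query service support?",
]
relevance = [
    [("doc-id-0", 1.0)],  # doc(s) answering questions[0]
    [("doc-id-1", 1.0)],  # doc(s) answering questions[1]
]
```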
Now, let us generate synthetic questions and relevance:
from typing import List
from langchain.chains.openai_functions import (
create_structured_output_chain,
)
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
# fetch dataset docs and ids if they exist (optional you can also ingest)
docs = db.vectorstore.dataset.text.data(fetch_chunks=True, aslist=True)["value"]
ids = db.vectorstore.dataset.id.data(fetch_chunks=True, aslist=True)["value"]
# If we pass in a model explicitly, we need to make sure it supports the OpenAI function-calling API.
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
class Questions(BaseModel):
"""Identifying information about a person."""
question: str = Field(..., description="Questions about text")
prompt_msgs = [
SystemMessage(
content="You are a world class expert for generating questions based on provided context. \
You make sure the question can be answered by the text."
),
HumanMessagePromptTemplate.from_template(
"Use the given text to generate a question from the following input: {input}"
),
HumanMessage(content="Tips: Make sure to answer in the correct format"),
]
prompt = ChatPromptTemplate(messages=prompt_msgs)
chain = create_structured_output_chain(Questions, llm, prompt, verbose=True)
text = "# Understanding Hallucinations and Bias ## **Introduction** In this lesson, we'll cover the concept of **hallucinations** in LLMs, highlighting their influence on AI applications and demonstrating how to mitigate them using techniques like the retriever's architectures. We'll also explore **bias** within LLMs with examples."
questions = chain.run(input=text)
print(questions)
import random
from langchain_openai import OpenAIEmbeddings
from tqdm import tqdm
def generate_queries(docs: List[str], ids: List[str], n: int = 100):
questions = []
relevances = []
pbar = tqdm(total=n)
while len(questions) < n:
# 1. randomly draw a piece of text and relevance id
r = random.randint(0, len(docs) - 1)
text, label = docs[r], ids[r]
# 2. generate queries and assign and relevance id
generated_qs = [chain.run(input=text).question]
questions.extend(generated_qs)
relevances.extend([[(label, 1)] for _ in generated_qs])
pbar.update(len(generated_qs))
if len(questions) % 10 == 0:
print(f"q: {len(questions)}")
return questions[:n], relevances[:n]
chain = create_structured_output_chain(Questions, llm, prompt, verbose=False)
questions, relevances = generate_queries(docs, ids, n=200)
train_questions, train_relevances = questions[:100], relevances[:100]
test_questions, test_relevances = questions[100:], relevances[100:]
We have now created 100 training queries as well as 100 queries for testing. Let us train the deep_memory model:
job_id = db.vectorstore.deep_memory.train(
queries=train_questions,
relevance=train_relevances,
)
Let us track the training progress:
db.vectorstore.deep_memory.status("6538939ca0b69a9ca45c528c")
--------------------------------------------------------------
| 6538e02ecda4691033a51c5b |
--------------------------------------------------------------
| status | completed |
--------------------------------------------------------------
| progress | eta: 1.4 seconds |
| | recall@10: 79.00% (+34.00%) |
--------------------------------------------------------------
| results | recall@10: 79.00% (+34.00%) |
--------------------------------------------------------------
3. Evaluating Deep Memory performance
Great, we’ve trained the model! It’s showing substantial improvement in recall, but how can we use it and evaluate it on unseen data? In this section we will delve into model evaluation and inference, and see how it can be used with LangChain to increase retrieval accuracy.
3.1 Deep Memory evaluation
To begin, we can use deep_memory’s built-in evaluation method, which calculates several recall metrics. It can be done easily in a few lines of code.
recall = db.vectorstore.deep_memory.evaluate(
queries=test_questions,
relevance=test_relevances,
)
Embedding queries took 0.81 seconds
---- Evaluating without model ----
Recall@1: 9.0%
Recall@3: 19.0%
Recall@5: 24.0%
Recall@10: 42.0%
Recall@50: 93.0%
Recall@100: 98.0%
---- Evaluating with model ----
Recall@1: 19.0%
Recall@3: 42.0%
Recall@5: 49.0%
Recall@10: 69.0%
Recall@50: 97.0%
Recall@100: 97.0%
It is showing quite a substantial improvement on an unseen test dataset too!
3.2 Deep Memory + RAGas
from ragas.langchain import RagasEvaluatorChain
from ragas.metrics import (
context_recall,
)
Let us convert the relevance labels into ground truths:
def convert_relevance_to_ground_truth(docs, relevance):
ground_truths = []
for rel in relevance:
ground_truth = []
for doc_id, _ in rel:
ground_truth.append(docs[doc_id])
ground_truths.append(ground_truth)
return ground_truths
ground_truths = convert_relevance_to_ground_truth(docs, test_relevances)
for deep_memory in [False, True]:
print("\nEvaluating with deep_memory =", deep_memory)
print("===================================")
retriever = db.as_retriever()
retriever.search_kwargs["deep_memory"] = deep_memory
qa_chain = RetrievalQA.from_chain_type(
llm=ChatOpenAI(model="gpt-3.5-turbo"),
chain_type="stuff",
retriever=retriever,
return_source_documents=True,
)
metrics = {
"context_recall_score": 0,
}
eval_chains = {m.name: RagasEvaluatorChain(metric=m) for m in [context_recall]}
for question, ground_truth in zip(test_questions, ground_truths):
result = qa_chain({"query": question})
result["ground_truths"] = ground_truth
for name, eval_chain in eval_chains.items():
score_name = f"{name}_score"
metrics[score_name] += eval_chain(result)[score_name]
for metric in metrics:
metrics[metric] /= len(test_questions)
print(f"{metric}: {metrics[metric]}")
print("===================================")
Evaluating with deep_memory = False
===================================
context_recall_score = 0.3763423145
===================================
Evaluating with deep_memory = True
===================================
context_recall_score = 0.5634545323
===================================
3.3 Deep Memory Inference
TODO: Add image
with deep_memory
retriever = db.as_retriever()
retriever.search_kwargs["deep_memory"] = True
retriever.search_kwargs["k"] = 10
query = "Deamination of cytidine to uridine on the minus strand of viral DNA results in catastrophic G-to-A mutations in the viral genome."
qa = RetrievalQA.from_chain_type(
llm=ChatOpenAI(model="gpt-4"), chain_type="stuff", retriever=retriever
)
print(qa.run(query))
The base htype of the 'video_seq' tensor is 'video'.
without deep_memory
retriever = db.as_retriever()
retriever.search_kwargs["deep_memory"] = False
retriever.search_kwargs["k"] = 10
query = "Deamination of cytidine to uridine on the minus strand of viral DNA results in catastrophic G-to-A mutations in the viral genome."
qa = RetrievalQA.from_chain_type(
llm=ChatOpenAI(model="gpt-4"), chain_type="stuff", retriever=retriever
)
qa.run(query)
The text does not provide information on the base htype of the 'video_seq' tensor.
3.4 Deep Memory cost savings
Deep Memory increases retrieval accuracy without altering your existing workflow. Additionally, by reducing the top_k input into the LLM, you can significantly cut inference costs via lower token usage. |
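As a minimal sketch of that idea (reusing the retriever from this tutorial), you can keep deep_memory enabled and simply lower k:

```python
# Sketch only: with Deep Memory enabled, a smaller k can keep recall high
# while sending fewer chunks (and therefore fewer tokens) to the LLM.
retriever = db.as_retriever()
retriever.search_kwargs["deep_memory"] = True
retriever.search_kwargs["k"] = 3  # instead of 10
```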
https://python.langchain.com/docs/integrations/retrievers/arcee/ | This notebook demonstrates how to use the `ArceeRetriever` class to retrieve relevant document(s) for Arcee’s `Domain Adapted Language Models` (`DALMs`).
Before using `ArceeRetriever`, make sure the Arcee API key is set as the `ARCEE_API_KEY` environment variable. You can also pass the API key as a named parameter.
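For example, a minimal setup might look like the sketch below (the key value is a placeholder, and `DALM-PubMed` is just the example model name used later on this page):

```python
import os

from langchain_community.retrievers import ArceeRetriever

# Placeholder key; alternatively pass arcee_api_key=... to the constructor.
os.environ["ARCEE_API_KEY"] = "<your-arcee-api-key>"

retriever = ArceeRetriever(model="DALM-PubMed")
```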
You can also configure `ArceeRetriever`’s parameters such as `arcee_api_url`, `arcee_app_url`, and `model_kwargs` as needed. Setting `model_kwargs` at object initialization uses those filters and size as defaults for all subsequent retrievals.
```
retriever = ArceeRetriever(
    model="DALM-PubMed",
    # arcee_api_key="ARCEE-API-KEY", # if not already set in the environment
    arcee_api_url="https://custom-api.arcee.ai",  # default is https://api.arcee.ai
    arcee_app_url="https://custom-app.arcee.ai",  # default is https://app.arcee.ai
    model_kwargs={
        "size": 5,
        "filters": [
            {
                "field_name": "document",
                "filter_type": "fuzzy_search",
                "value": "Einstein",
            }
        ],
    },
)
```
You can retrieve relevant documents from uploaded contexts by providing a query. Here’s an example:
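The example code block itself is missing from this page extraction; a minimal sketch of such a call, with a hypothetical query, would be:

```python
query = "Can AI-driven music therapy contribute to patients' well-being?"
documents = retriever.get_relevant_documents(query)
```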
Arcee allows you to apply `filters` and set the `size` (in terms of count) of retrieved document(s). Filters help narrow down the results. Here’s how to use these parameters:
```
# Define filters
filters = [
    {"field_name": "document", "filter_type": "fuzzy_search", "value": "Music"},
    {"field_name": "year", "filter_type": "strict_search", "value": "1905"},
]

# Retrieve documents with filters and size params
documents = retriever.get_relevant_documents(query=query, size=5, filters=filters)
```

This notebook demonstrates how to use the ArceeRetriever class to retrieve relevant document(s) for Arcee’s Domain Adapted Language Models (DALMs).
Before using ArceeRetriever, make sure the Arcee API key is set as ARCEE_API_KEY environment variable. You can also pass the api key as a named parameter.
You can also configure ArceeRetriever’s parameters such as arcee_api_url, arcee_app_url, and model_kwargs as needed. Setting the model_kwargs at the object initialization uses the filters and size as default for all the subsequent retrievals.
retriever = ArceeRetriever(
model="DALM-PubMed",
# arcee_api_key="ARCEE-API-KEY", # if not already set in the environment
arcee_api_url="https://custom-api.arcee.ai", # default is https://api.arcee.ai
arcee_app_url="https://custom-app.arcee.ai", # default is https://app.arcee.ai
model_kwargs={
"size": 5,
"filters": [
{
"field_name": "document",
"filter_type": "fuzzy_search",
"value": "Einstein",
}
],
},
)
You can retrieve relevant documents from uploaded contexts by providing a query. Here’s an example:
Arcee allows you to apply filters and set the size (in terms of count) of retrieved document(s). Filters help narrow down the results. Here’s how to use these parameters:
# Define filters
filters = [
{"field_name": "document", "filter_type": "fuzzy_search", "value": "Music"},
{"field_name": "year", "filter_type": "strict_search", "value": "1905"},
]
# Retrieve documents with filters and size params
documents = retriever.get_relevant_documents(query=query, size=5, filters=filters) |
https://python.langchain.com/docs/integrations/retrievers/amazon_kendra_retriever/ | ## Amazon Kendra
> [Amazon Kendra](https://docs.aws.amazon.com/kendra/latest/dg/what-is-kendra.html) is an intelligent search service provided by `Amazon Web Services` (`AWS`). It utilizes advanced natural language processing (NLP) and machine learning algorithms to enable powerful search capabilities across various data sources within an organization. `Kendra` is designed to help users find the information they need quickly and accurately, improving productivity and decision-making.
> With `Kendra`, users can search across a wide range of content types, including documents, FAQs, knowledge bases, manuals, and websites. It supports multiple languages and can understand complex queries, synonyms, and contextual meanings to provide highly relevant search results.
## Using the Amazon Kendra Index Retriever[](#using-the-amazon-kendra-index-retriever "Direct link to Using the Amazon Kendra Index Retriever")
```
%pip install --upgrade --quiet boto3
```
```
from langchain_community.retrievers import AmazonKendraRetriever
```
Create New Retriever
```
retriever = AmazonKendraRetriever(index_id="c0806df7-e76b-4bce-9b5c-d5582f6b1a03")
```
Now you can use retrieved documents from Kendra index
```
retriever.get_relevant_documents("what is langchain")
```
Amazon Kendra
Amazon Kendra is an intelligent search service provided by Amazon Web Services (AWS). It utilizes advanced natural language processing (NLP) and machine learning algorithms to enable powerful search capabilities across various data sources within an organization. Kendra is designed to help users find the information they need quickly and accurately, improving productivity and decision-making.
With Kendra, users can search across a wide range of content types, including documents, FAQs, knowledge bases, manuals, and websites. It supports multiple languages and can understand complex queries, synonyms, and contextual meanings to provide highly relevant search results.
Using the Amazon Kendra Index Retriever
%pip install --upgrade --quiet boto3
from langchain_community.retrievers import AmazonKendraRetriever
Create New Retriever
retriever = AmazonKendraRetriever(index_id="c0806df7-e76b-4bce-9b5c-d5582f6b1a03")
Now you can use retrieved documents from Kendra index
retriever.get_relevant_documents("what is langchain")
https://python.langchain.com/docs/integrations/retrievers/self_query/vectara_self_query/ | ## Vectara
> [Vectara](https://vectara.com/) is the trusted GenAI platform that provides an easy-to-use API for document indexing and querying.
>
> `Vectara` provides an end-to-end managed service for `Retrieval Augmented Generation` or [RAG](https://vectara.com/grounded-generation/), which includes:
>
> 1. A way to `extract text` from document files and `chunk` them into sentences.
> 2. The state-of-the-art [Boomerang](https://vectara.com/how-boomerang-takes-retrieval-augmented-generation-to-the-next-level-via-grounded-generation/) embeddings model. Each text chunk is encoded into a vector embedding using `Boomerang` and stored in the Vectara internal knowledge (vector+text) store.
> 3. A query service that automatically encodes the query into an embedding and retrieves the most relevant text segments (including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) and [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/)).
> 4. An option to create a [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview) based on the retrieved documents, including citations.
See the [Vectara API documentation](https://docs.vectara.com/docs/) for more information on how to use the API.
This notebook shows how to use `SelfQueryRetriever` with Vectara.
## Setup
You will need a `Vectara` account to use `Vectara` with `LangChain`. To get started, use the following steps (see our [quickstart](https://docs.vectara.com/docs/quickstart) guide):

1. [Sign up](https://console.vectara.com/signup) for a `Vectara` account if you don’t already have one. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.
2. Within your account you can create one or more corpora. Each corpus represents an area that stores text data upon ingesting from input documents. To create a corpus, use the **“Create Corpus”** button. You then provide a name to your corpus as well as a description. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right on the top.
3. Next you’ll need to create API keys to access the corpus. Click on the **“Authorization”** tab in the corpus view and then the **“Create API Key”** button. Give your key a name, and choose whether you want query-only or query+index for your key. Click “Create” and you now have an active API key. Keep this key confidential.
To use LangChain with Vectara, you need three values: customer ID, corpus ID and api\_key. You can provide those to LangChain in two ways:
1. Include in your environment these three variables: `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID` and `VECTARA_API_KEY`.
> For example, you can set these variables using `os.environ` and `getpass` as follows:
```
import os
import getpass

os.environ["VECTARA_CUSTOMER_ID"] = getpass.getpass("Vectara Customer ID:")
os.environ["VECTARA_CORPUS_ID"] = getpass.getpass("Vectara Corpus ID:")
os.environ["VECTARA_API_KEY"] = getpass.getpass("Vectara API Key:")
```
2. Provide them as arguments when creating the `Vectara` vectorstore object:
```
vectorstore = Vectara(
    vectara_customer_id=vectara_customer_id,
    vectara_corpus_id=vectara_corpus_id,
    vectara_api_key=vectara_api_key
)
```
**Note:** The self-query retriever requires you to have `lark` installed (`pip install lark`).
## Connecting to Vectara from LangChain[](#connecting-to-vectara-from-langchain "Direct link to Connecting to Vectara from LangChain")
In this example, we assume that you’ve created an account and a corpus, and added your VECTARA\_CUSTOMER\_ID, VECTARA\_CORPUS\_ID and VECTARA\_API\_KEY (created with permissions for both indexing and query) as environment variables.
The corpus has 4 fields defined as metadata for filtering: year, director, rating, and genre
```
from langchain.chains import ConversationalRetrievalChain
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import Vectara
from langchain_core.documents import Document
from langchain_openai import OpenAI
from langchain_text_splitters import CharacterTextSplitter
```
```
docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "year": 1979,
            "rating": 9.9,
            "director": "Andrei Tarkovsky",
            "genre": "science fiction",
        },
    ),
]

vectara = Vectara()
for doc in docs:
    vectara.add_texts(
        [doc.page_content],
        embedding=FakeEmbeddings(size=768),
        doc_metadata=doc.metadata,
    )
```
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectara, document_content_description, metadata_field_info, verbose=True
)
```
## Testing it out[](#testing-it-out "Direct link to Testing it out")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'lang': 'eng', 'offset': '0', 'len': '66', 'year': '1993', 'rating': '7.7', 'genre': 'science fiction', 'source': 'langchain'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'lang': 'eng', 'offset': '0', 'len': '41', 'year': '1995', 'genre': 'animated', 'source': 'langchain'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'lang': 'eng', 'offset': '0', 'len': '116', 'year': '2006', 'director': 'Satoshi Kon', 'rating': '8.6', 'source': 'langchain'}), Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'lang': 'eng', 'offset': '0', 'len': '76', 'year': '2010', 'director': 'Christopher Nolan', 'rating': '8.2', 'source': 'langchain'}), Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'lang': 'eng', 'offset': '0', 'len': '82', 'year': '2019', 'director': 'Greta Gerwig', 'rating': '8.3', 'source': 'langchain'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'})]
```
```
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
```
```
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'lang': 'eng', 'offset': '0', 'len': '116', 'year': '2006', 'director': 'Satoshi Kon', 'rating': '8.6', 'source': 'langchain'}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'})]
```
```
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
```
```
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'lang': 'eng', 'offset': '0', 'len': '82', 'year': '2019', 'director': 'Greta Gerwig', 'rating': '8.3', 'source': 'langchain'})]
```
```
# This example specifies a composite filter
retriever.get_relevant_documents(
    "What's a highly rated (above 8.5) science fiction film?"
)
```
```
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'})]
```
```
# This example specifies a query and composite filter
retriever.get_relevant_documents(
    "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
```
```
[Document(page_content='Toys come alive and have a blast doing so', metadata={'lang': 'eng', 'offset': '0', 'len': '41', 'year': '1995', 'genre': 'animated', 'source': 'langchain'})]
```
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self query retriever to specify `k`: the number of documents to fetch.
We can do this by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectara,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
```
```
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'lang': 'eng', 'offset': '0', 'len': '66', 'year': '1993', 'rating': '7.7', 'genre': 'science fiction', 'source': 'langchain'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'lang': 'eng', 'offset': '0', 'len': '41', 'year': '1995', 'genre': 'animated', 'source': 'langchain'})]
```

Vectara
Vectara is the trusted GenAI platform that provides an easy-to-use API for document indexing and querying.
Vectara provides an end-to-end managed service for Retrieval Augmented Generation or RAG, which includes: 1. A way to extract text from document files and chunk them into sentences. 2. The state-of-the-art Boomerang embeddings model. Each text chunk is encoded into a vector embedding using Boomerang, and stored in the Vectara internal knowledge (vector+text) store 3. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments (including support for Hybrid Search and MMR) 4. An option to create generative summary, based on the retrieved documents, including citations.
See the Vectara API documentation for more information on how to use the API.
This notebook shows how to use SelfQueryRetriever with Vectara.
Setup
You will need a Vectara account to use Vectara with LangChain. To get started, use the following steps (see our quickstart guide): 1. Sign up for a Vectara account if you don’t already have one. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window. 2. Within your account you can create one or more corpora. Each corpus represents an area that stores text data upon ingesting from input documents. To create a corpus, use the “Create Corpus” button. You then provide a name to your corpus as well as a description. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right on the top. 3. Next you’ll need to create API keys to access the corpus. Click on the “Authorization” tab in the corpus view and then the “Create API Key” button. Give your key a name, and choose whether you want query only or query+index for your key. Click “Create” and you now have an active API key. Keep this key confidential.
To use LangChain with Vectara, you need three values: customer ID, corpus ID and api_key. You can provide those to LangChain in two ways:
Include in your environment these three variables: VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY.
For example, you can set these variables using os.environ and getpass as follows:
import os
import getpass
os.environ["VECTARA_CUSTOMER_ID"] = getpass.getpass("Vectara Customer ID:")
os.environ["VECTARA_CORPUS_ID"] = getpass.getpass("Vectara Corpus ID:")
os.environ["VECTARA_API_KEY"] = getpass.getpass("Vectara API Key:")
Provide them as arguments when creating the Vectara vectorstore object:
vectorstore = Vectara(
vectara_customer_id=vectara_customer_id,
vectara_corpus_id=vectara_corpus_id,
vectara_api_key=vectara_api_key
)
Note: The self-query retriever requires you to have lark installed (pip install lark).
Connecting to Vectara from LangChain
In this example, we assume that you’ve created an account and a corpus, and added your VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY (created with permissions for both indexing and query) as environment variables.
The corpus has 4 fields defined as metadata for filtering: year, director, rating, and genre
from langchain.chains import ConversationalRetrievalChain
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import Vectara
from langchain_core.documents import Document
from langchain_openai import OpenAI
from langchain_text_splitters import CharacterTextSplitter
docs = [
Document(
page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
),
Document(
page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
),
Document(
page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
),
Document(
page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
),
Document(
page_content="Toys come alive and have a blast doing so",
metadata={"year": 1995, "genre": "animated"},
),
Document(
page_content="Three men walk into the Zone, three men walk out of the Zone",
metadata={
"year": 1979,
"rating": 9.9,
"director": "Andrei Tarkovsky",
"genre": "science fiction",
},
),
]
vectara = Vectara()
for doc in docs:
vectara.add_texts(
[doc.page_content],
embedding=FakeEmbeddings(size=768),
doc_metadata=doc.metadata,
)
Creating our self-querying retriever
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI
metadata_field_info = [
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating", description="A 1-10 rating for the movie", type="float"
),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm, vectara, document_content_description, metadata_field_info, verbose=True
)
Testing it out
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'lang': 'eng', 'offset': '0', 'len': '66', 'year': '1993', 'rating': '7.7', 'genre': 'science fiction', 'source': 'langchain'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'lang': 'eng', 'offset': '0', 'len': '41', 'year': '1995', 'genre': 'animated', 'source': 'langchain'}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'lang': 'eng', 'offset': '0', 'len': '116', 'year': '2006', 'director': 'Satoshi Kon', 'rating': '8.6', 'source': 'langchain'}),
Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'lang': 'eng', 'offset': '0', 'len': '76', 'year': '2010', 'director': 'Christopher Nolan', 'rating': '8.2', 'source': 'langchain'}),
Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'lang': 'eng', 'offset': '0', 'len': '82', 'year': '2019', 'director': 'Greta Gerwig', 'rating': '8.3', 'source': 'langchain'}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'})]
# This example only specifies a filter
retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5")
[Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'lang': 'eng', 'offset': '0', 'len': '116', 'year': '2006', 'director': 'Satoshi Kon', 'rating': '8.6', 'source': 'langchain'}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'lang': 'eng', 'offset': '0', 'len': '82', 'year': '2019', 'director': 'Greta Gerwig', 'rating': '8.3', 'source': 'langchain'})]
# This example specifies a composite filter
retriever.get_relevant_documents(
"What's a highly rated (above 8.5) science fiction film?"
)
[Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'lang': 'eng', 'offset': '0', 'len': '60', 'year': '1979', 'rating': '9.9', 'director': 'Andrei Tarkovsky', 'genre': 'science fiction', 'source': 'langchain'})]
# This example specifies a query and composite filter
retriever.get_relevant_documents(
"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated"
)
[Document(page_content='Toys come alive and have a blast doing so', metadata={'lang': 'eng', 'offset': '0', 'len': '41', 'year': '1995', 'genre': 'animated', 'source': 'langchain'})]
Filter k
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectara,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True,
)
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'lang': 'eng', 'offset': '0', 'len': '66', 'year': '1993', 'rating': '7.7', 'genre': 'science fiction', 'source': 'langchain'}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'lang': 'eng', 'offset': '0', 'len': '41', 'year': '1995', 'genre': 'animated', 'source': 'langchain'})] |
https://python.langchain.com/docs/integrations/retrievers/azure_ai_search/ | This notebook shows how to use Azure AI Search (AAS) within LangChain.
Please note you will need 1. the name of your AAS service, 2. the name of your AAS index, 3. your API key.
Your API key can be either Admin or Query key, but as we only read data it is recommended to use a Query key.
Set Service Name, Index Name and API key as environment variables (alternatively, you can pass them as arguments to `AzureAISearchRetriever`). The search index name you use determines which documents are queried, so be sure to select the right one.
You may also use `AzureCognitiveSearchRetriever`; however, this will soon be deprecated. Please switch to `AzureAISearchRetriever` where possible.
`content_key` is the key in the retrieved result to set as the Document page\_content. `top_k` is the number of results you’d like to retrieve. Setting it to None (the default) returns all results.
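The retriever-creation code is not shown in this extraction; a minimal sketch, assuming the standard `AZURE_AI_SEARCH_*` environment variable names, might look like:

```python
import os

from langchain_community.retrievers import AzureAISearchRetriever

# Assumed environment variable names; adjust to your deployment.
os.environ["AZURE_AI_SEARCH_SERVICE_NAME"] = "<your-search-service-name>"
os.environ["AZURE_AI_SEARCH_INDEX_NAME"] = "<your-index-name>"
os.environ["AZURE_AI_SEARCH_API_KEY"] = "<your-query-key>"

retriever = AzureAISearchRetriever(content_key="content", top_k=1)
```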
Now you can use it to retrieve documents from Azure AI Search. This is the method you would call to do so. It will return all documents relevant to the query.
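A sketch of that call (the query string is illustrative):

```python
docs = retriever.get_relevant_documents("what is langchain")
```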
First let’s create an Azure vector store and upload some data to it.
We’ll use an embedding model from OpenAI to turn our documents into embeddings stored in the Azure AI Search vector store. We’ll also set the index name to `langchain-vector-demo`. This will create a new vector store associated with that index name.
Next we’ll load data into our newly created vector store. For this example we load all the text files from a folder named `qna`. We’ll split the text into 1000-token chunks with no overlap. Finally, the documents are added to our vector store as embeddings.
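The corresponding code is not included in this extraction; a rough sketch of the steps described above (folder name, index name, and credentials are illustrative) could look like:

```python
from langchain_community.document_loaders import DirectoryLoader, TextLoader
from langchain_community.vectorstores.azuresearch import AzureSearch
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import TokenTextSplitter

embeddings = OpenAIEmbeddings()

# Create (or connect to) the index that will back the new vector store.
vector_store = AzureSearch(
    azure_search_endpoint="https://<your-search-service-name>.search.windows.net",
    azure_search_key="<your-admin-key>",
    index_name="langchain-vector-demo",
    embedding_function=embeddings.embed_query,
)

# Load the text files, split them into 1000-token chunks with no overlap,
# and add the resulting documents to the vector store as embeddings.
loader = DirectoryLoader("qna/", glob="**/*.txt", loader_cls=TextLoader)
documents = loader.load()
splits = TokenTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(documents)
vector_store.add_documents(documents=splits)
```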
```
['YWY0NzY1MWYtMTU1Ni00YmEzLTlhNTQtZDQxNWFkMTlkNjMx', 'MTUzM2EyOGYtYWE0My00OTIyLWJkNWUtMjVjNTgwMzZlMjcx', 'ZGMyMjQ3N2EtNTQ5NC00ZjZhLWIyMzctYjRlZDYzMWUxNGQ4', 'OWM5MWQ3YzUtZjFkZS00MGI2LTg1OGMtMmRlYzUwMDc2MzZi', 'ZmFiYWVkOGQtNTcwYi00YTVmLWE3ZDEtMWQ3MTAxYjI2NTJj', 'NTUwM2ExMjItNTk4Zi00OTg0LTg1ZDItZTZlMGYyMjJiNTIy']
```
Next we’ll create a retriever similar to the one we created above but we’re using the index name associated with our new vector store. In this case that’s `langchain-vector-demo`.
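A sketch, assuming the environment variables set earlier:

```python
retriever = AzureAISearchRetriever(
    content_key="content", top_k=1, index_name="langchain-vector-demo"
)
```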
Now we can retrieve the data that is relevant to our query from the documents we uploaded.
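The exact call is not shown here; it would be along the lines of:

```python
retriever.get_relevant_documents("What is Azure OpenAI?")
```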
```
[Document(page_content='\n# What is Azure OpenAI?\n\nThe Azure OpenAI service provides REST API access to OpenAI\'s powerful language models including the GPT-3, Codex and Embeddings model series. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or our web-based interface in the Azure OpenAI Studio.\n\n### Features overview\n\n| Feature | Azure OpenAI |\n| --- | --- |\n| Models available | GPT-3 base series <br> Codex series <br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|\n| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman* <br> Davinci* <br> \\* available by request. Please open a support request|\n| Price | [Available here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) |\n| Virtual network support | Yes | \n| Managed Identity| Yes, via Azure Active Directory | \n| UI experience | **Azure Portal** for account & resource management, <br> **Azure OpenAI Service Studio** for model exploration and fine tuning |\n| Regional availability | East US <br> South Central US <br> West Europe |\n| Content filtering | Prompts and completions are evaluated against our content policy with automated systems. High severity content will be filtered. |\n\n## Responsible AI\n\nAt Microsoft, we\'re committed to the advancement of AI driven by principles that put people first. Generative models such as the ones available in the Azure OpenAI service have significant potential benefits, but without careful design and thoughtful mitigations, such models have the potential to generate incorrect or even harmful content. Microsoft has made significant investments to help guard against abuse and unintended harm, which includes requiring applicants to show well-defined use cases, incorporating Microsoft’s <a href="https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6" target="_blank">principles for responsible AI use</a>, building content filters to support customers, and providing responsible AI implementation guidance to onboarded customers.\n\n## How do I get access to Azure OpenAI?\n\nHow do I get access to Azure OpenAI Service?\n\nAccess is currently limited as we navigate high demand, upcoming product improvements, and <a href="https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6" target="_blank">Microsoft’s commitment to responsible AI</a>. For now, we\'re working with customers with an existing partnership with Microsoft, lower risk use cases, and those committed to incorporating mitigations. In addition to applying for initial access, all solutions using the Azure OpenAI service are required to go through a use case review before they can be released for production use.\n\nMore specific information is included in the application form. We appreciate your patience as we work to responsibly enable broader access to the Azure OpenAI service.\n\nApply here for initial access or for a production review:\n\n<a href="https://aka.ms/oaiapply" target="_blank">Apply now</a>\n\nAll solutions using the Azure OpenAI service are also required to go through a use case review before they can be released for production use, and are evaluated on a case-by-case basis. 
In general, the more sensitive the scenario the more important risk mitigation measures will be for approval.\n\n## Comparing Azure OpenAI and OpenAI\n\nAzure OpenAI Service gives customers advanced language AI with OpenAI GPT-3, Codex, and DALL-E models with the security and enterprise promise of Azure. Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other.\n\nWith Azure OpenAI, customers get the security capabilities of Microsoft Azure while running the same models as OpenAI. Azure OpenAI offers private networking, regional availability, and responsible AI content filtering. \n\n## Key concepts\n\n### Prompts & Completions\n\nThe completions endpoint is the core component of the API service. This API provides access to the model\'s text-in, text-out interface. Users simply need to provide an input **prompt** containing the English text command, and the model will generate a text **completion**.\n\nHere\'s an example of a simple prompt and completion:\n\n>**Prompt', metadata={'@search.score': 2.3721094, 'id': 'MDEyNzU0ZDEtOTlmMy00YjE0LWE2YzMtNWI2ZGYxZjBkYzIx', 'content_vector': [3.636302e-05, -0.014039703, -0.0011298007, -0.005913462, -0.016717235, 0.0152605465, -0.003933059, 0.0037596438, -0.026900182, -0.035265736, 0.035598695, 0.0051747127, -0.030132644, -0.014116006, -0.025956802, 0.004467178, 0.022696596, -0.008871927, 0.013366852, -0.0060591307, -0.017272165, 0.00086967775, -0.01308245, -0.0144559, 0.00079510914, 0.004588569, 0.015759982, -0.029882925, 0.0006828228, -0.012666253, 0.018118432, 0.0032931566, -0.013137943, 0.0011601484, 0.02465272, 0.01996357, 0.016606249, -0.009489286, 0.015690617, 0.00049466715, 0.016606249, 0.028662082, 0.019325402, 0.0052891667, -0.015343786, 0.010522841, -0.009385237, -0.01054365, 0.00014501855, 0.01768836, 0.001506979, 0.04580939, 0.0037908584, -0.00047515793, -0.015080195, -0.022016807, -0.02938349, 0.03226912, 0.012943719, 0.013054704, 0.011743684, -0.0012052364, -0.022238778, 0.004588569, -0.008386364, -0.0002640248, -0.010085834, 0.015038575, 0.0128119225, -0.01893695, 0.029438982, 0.015440899, 0.018770473, -0.008566716, 0.032074895, -0.01099453, -0.015399279, -0.021656103, -0.016966954, 0.0090245325, 0.011986466, -0.015440899, -0.009628017, 0.02289082, 0.019311529, 0.017868713, 0.007172457, 0.007845309, 0.015676744, -0.011022277, 0.011722875, 0.008760941, 0.0127980495, 0.026456239, -0.011882417, 0.015981954, -0.008518159, 0.011639635, -0.005334255, -0.01832653, -0.016897587, 0.019311529, -0.028634336, -0.012492838, -0.029855179, 0.021517372, 0.023806453, -0.008219886, 0.04164742, 0.04134221, -0.0060140425, 0.002322031, -0.016800474, -0.024014551, -0.024916312, -0.0011193958, 0.010939037, -0.018007446, -0.022308145, 0.016814347, 0.0045920373, 0.0418139, -0.0013457028, 0.011868543, -0.0019318465, -0.0047411746, 0.0019769345, -0.0114870295, 0.0144836465, -0.013762238, 0.004293763, 0.0063331267, 0.027385745, -0.0028648209, -0.02254399, 0.046641782, -0.034183625, 0.0056602755, -0.015121815, -0.01359576, 0.009614144, 0.012624634, -0.00763721, -0.007214077, 0.0043804706, 0.02125378, 0.009655764, 0.0034318888, 0.009891609, 0.031159261, 0.016675616, -0.029022785, -0.025138283, 0.011355234, -0.0016136294, -0.0047307694, 0.007692703, 0.020615611, -0.040731788, -0.012666253, 0.016842095, 0.030965036, -0.023737086, -0.014927589, 0.008226822, 0.017910333, 0.015857095, -0.022488497, -0.012492838, -0.02395906, -0.0059238668, 0.022086173, -0.03451658, 0.015107941, -0.0010691053, 0.007491541, 
… (remaining content_vector embedding values omitted) …
-0.008303124, 0.0036209116, 0.03798489, 0.011882417, -0.021864202, 0.0026237736, 0.01850688, -0.0017237482, -0.016578503, 0.017799348, 0.004227865, 0.022058427, -0.04483826, 0.017993571, -0.015954208, -0.014747238, 0.0114870295, 0.015413152, 0.04764065, -0.0015806805, -0.0076441467, 0.012173754, -0.02344575, -0.023764834, -0.113482974, -0.017355403, 0.005968955, 0.002028959, -0.013283612, 0.004269485, -0.0047550476, 0.036986016, -0.023265397, 0.02844011, -0.024944058, -0.014171499, 0.0054903287, -0.0144975195, -0.008296188, -0.028315252, -0.013380725, -0.01949188, -0.012527522, 0.0063400636, 0.016925333, -0.01138298, 0.023404129, -0.0045955055, -0.032130387, -0.017286038, -0.03382292, 0.008400237, -0.0055111386, -0.010730939, 0.026220394, -0.02215554, -0.0067909434, -0.011480093, -0.0013474369, -0.0036174431, -0.011105516, -0.020837583, 0.00972513, -0.030104896, -0.017757727, -0.017466389, 0.0056810854, -0.011022277, -0.009482349, 0.00804647, -0.0055909096, 0.0024330167, -0.016661743, -0.014344914, -0.01596808, 0.0029029723, -0.015551885, -0.007859182, 0.015107941, 0.027482858, 0.007158584, 0.009121645, -0.016467517, -0.024583353, -0.0007955427, -0.0063053803, -0.010266186, 0.006062599, 0.007158584, 0.014955336, -0.025415746, -0.014802731, -0.014553012, -0.012444282, -0.005968955, 0.030410107, 0.010980657, 0.0047966675, -0.024125537, -0.004623252, 0.00047559149, -0.02938349, 0.013866288, 0.011223438, -0.022863073, -0.013228119, -0.017147304, -0.029744193, -0.0028596183, 0.0033711935, 0.0025058512, -0.010314742, 0.0014688276, -0.046808258, 0.015551885, 0.027288632, 0.019921951, 0.00722795, -0.0021468815, -0.011688191, -0.030659826, -0.0052302056, 0.008532033, 0.03890052, 0.004928463, -0.0062498874, -0.075137384, 0.012028085, -0.0021676912, -0.022904694, 0.004609379, -0.01441428, 0.0032550052, -0.005656807, -0.009988721, 0.017050192, -0.012485902, 0.0069747637, -0.0028284036, 0.0050359806, -0.0030399703, -0.0067458553, 0.02530476, -0.022044554, 0.008060344, -0.0077135125, -0.0032879543, -0.019158922, 0.00841411, 0.0027486326, -0.0036070384, -0.010640763, -0.030410107, 0.029716447, -0.007893865, -0.011147136, 0.01095291, -0.018354276, 0.0031735, 0.034433343, -0.018146178, 0.010717066, 0.014580758, 0.018104557, 0.0021174008, 0.017785473, -0.041064743, -0.030632079, 0.027718702, -0.026858563, -0.010453475, -0.0116604455, 0.00014350117, -0.014594632, 0.012208438, 0.0062533556, 0.031741936, 0.0047758576, -0.03976066, -0.045948118, -0.005882247, -0.002391397, 0.03701376, 0.0027625058, -0.001801785, 0.01441428, 0.04875051, 0.002686203, 0.0060383207, -0.0060348525, 0.014705618, -0.0050671953, -0.014650125, -0.019644486, 0.0046059103, 0.0009164999, -0.0177716, -0.008747068, 0.00033230707, -0.008053407, 0.025929056, -0.002627242, 0.028065532, -0.008795625, -0.01988033, 0.02056012, 0.021142794, -0.00974594, -0.006187458, 0.013672062, 0.028634336, -0.0067944117, -0.02594293, -0.019436387, -0.025374128, -0.022211032, 0.024458494, -0.0033018275, 0.0061735846, 0.03820686, 0.0027503667, -0.008344744, -0.0029862116, -0.0030382362, 0.016592376, 0.016703362, 0.023376383, -0.005219801, 0.004907653, -0.023043426, -0.033046022, 0.001496574, -0.025831943, -0.033739682, 0.024944058, 0.02215554, 0.01664787, 0.00030022525, 0.0014558214, 0.013887097, -0.012215374, 0.042951502, -0.0065655033, -0.012596888, -0.028204264, 0.008858054, 0.0077343225, 0.01824329, 0.019200543, -0.024597228, 0.03859531, 0.010592206, 0.019810963, -0.015177308, 0.023848072, 0.017452516, -0.029771939, -0.012319423, -0.019630613, -0.021753216, 
-0.01781322, -0.01863174, -0.04478277, 0.02465272, 0.006378215, 0.10454862, 0.0074846046, -0.0053203814, -0.004057918, -0.015940335, 0.0025457367, 0.0029185796, -0.003525533, -0.011154072, -0.011771431, -0.008143582, -0.0017237482, -0.015690617, -0.01574611, -0.011480093, -0.027122153, -0.0022856137, 0.0021382107, 0.016259419, 0.006992105, 0.024458494, -0.0051677763, 0.014733364, -0.026456239, -0.04211911, 0.019269908, 0.010432664, -0.0048937798, -0.009662701, -0.031436726, 0.04727995, 0.006527352, -0.026664337, -0.010529777, -0.01871498, 0.013110197, -0.00054322346, -0.02749673, 0.007172457, 0.00025730496, 0.020005189, 0.0027607717, -0.02405617, -0.03490503, -0.011771431, 0.010127454, 0.008733194, -0.020435259, -0.014275548], 'metadata': '{"source": "qna/overview_openai.txt"}'})]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:33.941Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/azure_ai_search/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/azure_ai_search/",
"description": "[Microsoft Azure AI",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"azure_ai_search\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:33 GMT",
"etag": "W/\"6c3322a603b8dd93384a6310504bba7b\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::77462-1713753753806-20ef6769106c"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/azure_ai_search/",
"property": "og:url"
},
{
"content": "Azure AI Search | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "[Microsoft Azure AI",
"property": "og:description"
}
],
"title": "Azure AI Search | 🦜️🔗 LangChain"
} | This notebook shows how to use Azure AI Search (AAS) within LangChain.
Please note you will need 1. the name of your AAS service, 2. the name of your AAS index, 3. your API key.
Your API key can be either Admin or Query key, but as we only read data it is recommended to use a Query key.
Set Service Name, Index Name and API key as environment variables (alternatively, you can pass them as arguments to AzureAISearchRetriever). The search index name you use determines which documents are queried, so be sure to select the right one.
You may also use AzureCognitiveSearchRetriever; however, it will soon be deprecated. Please switch to AzureAISearchRetriever where possible.
content_key is the key in the retrieved result to set as the Document page_content. top_k is the number of results you’d like to retrieve. Setting it to None (the default) returns all results.
Now you can use it to retrieve documents from Azure AI Search by calling get_relevant_documents; it returns all documents relevant to the query.
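For example, a minimal sketch (the environment variable names are the ones this retriever conventionally reads; treat them, and the placeholder values, as assumptions to adjust for your deployment):

import os
from langchain_community.retrievers import AzureAISearchRetriever

os.environ["AZURE_AI_SEARCH_SERVICE_NAME"] = "<YOUR_SERVICE_NAME>"
os.environ["AZURE_AI_SEARCH_INDEX_NAME"] = "<YOUR_INDEX_NAME>"
os.environ["AZURE_AI_SEARCH_API_KEY"] = "<YOUR_QUERY_KEY>"

# content_key and top_k behave as described above; the service name, index name,
# and key could instead be passed as constructor arguments.
retriever = AzureAISearchRetriever(content_key="content", top_k=1)
retriever.get_relevant_documents("here is my query")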
First let’s create an Azure vector store and upload some data to it.
We’ll use an embedding model from OpenAI to turn our documents into embeddings stored in the Azure AI Search vector store. We’ll also set the index name to langchain-vector-demo. This will create a new vector store associated with that index name.
Next we’ll load data into our newly created vector store. For this example we load all the text files from a folder named qna. We’ll split the text into 1000-token chunks with no overlap. Finally, the documents are added to our vector store as embeddings.
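A minimal sketch of this ingestion step (the qna folder, the 1000-token chunking, and the langchain-vector-demo index name come from the description above; the endpoint/key placeholders and the loader wiring are assumptions):

from langchain_community.document_loaders import DirectoryLoader, TextLoader
from langchain_community.vectorstores.azuresearch import AzureSearch
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import TokenTextSplitter

embeddings = OpenAIEmbeddings()
vector_store = AzureSearch(
    azure_search_endpoint="<YOUR_SEARCH_ENDPOINT>",  # assumed placeholder
    azure_search_key="<YOUR_ADMIN_KEY>",  # assumed placeholder
    index_name="langchain-vector-demo",
    embedding_function=embeddings.embed_query,
)

loader = DirectoryLoader("qna/", glob="*.txt", loader_cls=TextLoader)
documents = loader.load()
splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=0)
chunks = splitter.split_documents(documents)
vector_store.add_documents(documents=chunks)  # returns the chunk IDs shown below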
['YWY0NzY1MWYtMTU1Ni00YmEzLTlhNTQtZDQxNWFkMTlkNjMx',
'MTUzM2EyOGYtYWE0My00OTIyLWJkNWUtMjVjNTgwMzZlMjcx',
'ZGMyMjQ3N2EtNTQ5NC00ZjZhLWIyMzctYjRlZDYzMWUxNGQ4',
'OWM5MWQ3YzUtZjFkZS00MGI2LTg1OGMtMmRlYzUwMDc2MzZi',
'ZmFiYWVkOGQtNTcwYi00YTVmLWE3ZDEtMWQ3MTAxYjI2NTJj',
'NTUwM2ExMjItNTk4Zi00OTg0LTg1ZDItZTZlMGYyMjJiNTIy']
Next we’ll create a retriever similar to the one we created above but we’re using the index name associated with our new vector store. In this case that’s langchain-vector-demo.
Now we can retrieve the data that is relevant to our query from the documents we uploaded.
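A sketch of that retrieval (the query string is illustrative; the Document shown below is the kind of result returned in the original run):

from langchain_community.retrievers import AzureAISearchRetriever

retriever = AzureAISearchRetriever(
    content_key="content", top_k=1, index_name="langchain-vector-demo"
)
retriever.get_relevant_documents("What is Azure OpenAI?")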
[Document(page_content='\n# What is Azure OpenAI?\n\nThe Azure OpenAI service provides REST API access to OpenAI\'s powerful language models including the GPT-3, Codex and Embeddings model series. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or our web-based interface in the Azure OpenAI Studio.\n\n### Features overview\n\n| Feature | Azure OpenAI |\n| --- | --- |\n| Models available | GPT-3 base series <br> Codex series <br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|\n| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman* <br> Davinci* <br> \\* available by request. Please open a support request|\n| Price | [Available here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) |\n| Virtual network support | Yes | \n| Managed Identity| Yes, via Azure Active Directory | \n| UI experience | **Azure Portal** for account & resource management, <br> **Azure OpenAI Service Studio** for model exploration and fine tuning |\n| Regional availability | East US <br> South Central US <br> West Europe |\n| Content filtering | Prompts and completions are evaluated against our content policy with automated systems. High severity content will be filtered. |\n\n## Responsible AI\n\nAt Microsoft, we\'re committed to the advancement of AI driven by principles that put people first. Generative models such as the ones available in the Azure OpenAI service have significant potential benefits, but without careful design and thoughtful mitigations, such models have the potential to generate incorrect or even harmful content. Microsoft has made significant investments to help guard against abuse and unintended harm, which includes requiring applicants to show well-defined use cases, incorporating Microsoft’s <a href="https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6" target="_blank">principles for responsible AI use</a>, building content filters to support customers, and providing responsible AI implementation guidance to onboarded customers.\n\n## How do I get access to Azure OpenAI?\n\nHow do I get access to Azure OpenAI Service?\n\nAccess is currently limited as we navigate high demand, upcoming product improvements, and <a href="https://www.microsoft.com/ai/responsible-ai?activetab=pivot1:primaryr6" target="_blank">Microsoft’s commitment to responsible AI</a>. For now, we\'re working with customers with an existing partnership with Microsoft, lower risk use cases, and those committed to incorporating mitigations. In addition to applying for initial access, all solutions using the Azure OpenAI service are required to go through a use case review before they can be released for production use.\n\nMore specific information is included in the application form. We appreciate your patience as we work to responsibly enable broader access to the Azure OpenAI service.\n\nApply here for initial access or for a production review:\n\n<a href="https://aka.ms/oaiapply" target="_blank">Apply now</a>\n\nAll solutions using the Azure OpenAI service are also required to go through a use case review before they can be released for production use, and are evaluated on a case-by-case basis. 
In general, the more sensitive the scenario the more important risk mitigation measures will be for approval.\n\n## Comparing Azure OpenAI and OpenAI\n\nAzure OpenAI Service gives customers advanced language AI with OpenAI GPT-3, Codex, and DALL-E models with the security and enterprise promise of Azure. Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other.\n\nWith Azure OpenAI, customers get the security capabilities of Microsoft Azure while running the same models as OpenAI. Azure OpenAI offers private networking, regional availability, and responsible AI content filtering. \n\n## Key concepts\n\n### Prompts & Completions\n\nThe completions endpoint is the core component of the API service. This API provides access to the model\'s text-in, text-out interface. Users simply need to provide an input **prompt** containing the English text command, and the model will generate a text **completion**.\n\nHere\'s an example of a simple prompt and completion:\n\n>**Prompt', metadata={'@search.score': 2.3721094, 'id': 'MDEyNzU0ZDEtOTlmMy00YjE0LWE2YzMtNWI2ZGYxZjBkYzIx', 'content_vector': [3.636302e-05, -0.014039703, -0.0011298007, -0.005913462, -0.016717235, 0.0152605465, -0.003933059, 0.0037596438, -0.026900182, -0.035265736, 0.035598695, 0.0051747127, -0.030132644, -0.014116006, -0.025956802, 0.004467178, 0.022696596, -0.008871927, 0.013366852, -0.0060591307, -0.017272165, 0.00086967775, -0.01308245, -0.0144559, 0.00079510914, 0.004588569, 0.015759982, -0.029882925, 0.0006828228, -0.012666253, 0.018118432, 0.0032931566, -0.013137943, 0.0011601484, 0.02465272, 0.01996357, 0.016606249, -0.009489286, 0.015690617, 0.00049466715, 0.016606249, 0.028662082, 0.019325402, 0.0052891667, -0.015343786, 0.010522841, -0.009385237, -0.01054365, 0.00014501855, 0.01768836, 0.001506979, 0.04580939, 0.0037908584, -0.00047515793, -0.015080195, -0.022016807, -0.02938349, 0.03226912, 0.012943719, 0.013054704, 0.011743684, -0.0012052364, -0.022238778, 0.004588569, -0.008386364, -0.0002640248, -0.010085834, 0.015038575, 0.0128119225, -0.01893695, 0.029438982, 0.015440899, 0.018770473, -0.008566716, 0.032074895, -0.01099453, -0.015399279, -0.021656103, -0.016966954, 0.0090245325, 0.011986466, -0.015440899, -0.009628017, 0.02289082, 0.019311529, 0.017868713, 0.007172457, 0.007845309, 0.015676744, -0.011022277, 0.011722875, 0.008760941, 0.0127980495, 0.026456239, -0.011882417, 0.015981954, -0.008518159, 0.011639635, -0.005334255, -0.01832653, -0.016897587, 0.019311529, -0.028634336, -0.012492838, -0.029855179, 0.021517372, 0.023806453, -0.008219886, 0.04164742, 0.04134221, -0.0060140425, 0.002322031, -0.016800474, -0.024014551, -0.024916312, -0.0011193958, 0.010939037, -0.018007446, -0.022308145, 0.016814347, 0.0045920373, 0.0418139, -0.0013457028, 0.011868543, -0.0019318465, -0.0047411746, 0.0019769345, -0.0114870295, 0.0144836465, -0.013762238, 0.004293763, 0.0063331267, 0.027385745, -0.0028648209, -0.02254399, 0.046641782, -0.034183625, 0.0056602755, -0.015121815, -0.01359576, 0.009614144, 0.012624634, -0.00763721, -0.007214077, 0.0043804706, 0.02125378, 0.009655764, 0.0034318888, 0.009891609, 0.031159261, 0.016675616, -0.029022785, -0.025138283, 0.011355234, -0.0016136294, -0.0047307694, 0.007692703, 0.020615611, -0.040731788, -0.012666253, 0.016842095, 0.030965036, -0.023737086, -0.014927589, 0.008226822, 0.017910333, 0.015857095, -0.022488497, -0.012492838, -0.02395906, -0.0059238668, 0.022086173, -0.03451658, 0.015107941, -0.0010691053, 0.007491541, 
... (remaining content_vector values omitted) ...], 'metadata': '{"source": "qna/overview_openai.txt"}'})] |
https://python.langchain.com/docs/integrations/retrievers/self_query/weaviate_self_query/ | ## Weaviate
> [Weaviate](https://weaviate.io/) is an open-source vector database. It allows you to store data objects and vector embeddings from your favorite ML models, and scale seamlessly into billions of data objects.
In the notebook, we’ll demo the `SelfQueryRetriever` wrapped around a `Weaviate` vector store.
## Creating a Weaviate vector store[](#creating-a-weaviate-vector-store "Direct link to Creating a Weaviate vector store")
First we’ll want to create a Weaviate vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
**Note:** The self-query retriever requires you to have `lark` installed (`pip install lark`). We also need the `weaviate-client` package.
```
%pip install --upgrade --quiet lark weaviate-client
```
```
from langchain_community.vectorstores import Weaviate
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
```
```
docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
    ),
    Document(
        page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
        metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
    ),
    Document(
        page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
        metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
    ),
    Document(
        page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
        metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
    Document(
        page_content="Three men walk into the Zone, three men walk out of the Zone",
        metadata={
            "year": 1979,
            "director": "Andrei Tarkovsky",
            "genre": "science fiction",
            "rating": 9.9,
        },
    ),
]
vectorstore = Weaviate.from_documents(
    docs, embeddings, weaviate_url="http://127.0.0.1:8080"
)
```
## Creating our self-querying retriever[](#creating-our-self-querying-retriever "Direct link to Creating our self-querying retriever")
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating", description="A 1-10 rating for the movie", type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```
## Testing it out[](#testing-it-out "Direct link to Testing it out")
And now we can try actually using our retriever!
```
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
```
query='dinosaur' filter=None limit=None
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993}), Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'rating': None, 'year': 1995}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'genre': 'science fiction', 'rating': 9.9, 'year': 1979}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'genre': None, 'rating': 8.6, 'year': 2006})]
```
```
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
```
```
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
```
```
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'genre': None, 'rating': 8.3, 'year': 2019})]
```
## Filter k[](#filter-k "Direct link to Filter k")
We can also use the self query retriever to specify `k`: the number of documents to fetch.
We can do this by passing `enable_limit=True` to the constructor.
```
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    verbose=True,
)
```
```
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
```
```
query='dinosaur' filter=None limit=2
```
```
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993}), Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'rating': None, 'year': 1995})]
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:35.827Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/weaviate_self_query/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/self_query/weaviate_self_query/",
"description": "Weaviate is an open-source vector database. It",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3600",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"weaviate_self_query\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:35 GMT",
"etag": "W/\"9c3c7ff4594b867055ed63fcc62cde7f\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::zvcms-1713753755473-1dd30f38f706"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/self_query/weaviate_self_query/",
"property": "og:url"
},
{
"content": "Weaviate | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Weaviate is an open-source vector database. It",
"property": "og:description"
}
],
"title": "Weaviate | 🦜️🔗 LangChain"
} | Weaviate
Weaviate is an open-source vector database. It allows you to store data objects and vector embeddings from your favorite ML models, and scale seamlessly into billions of data objects.
In the notebook, we’ll demo the SelfQueryRetriever wrapped around a Weaviate vector store.
Creating a Weaviate vector store
First we’ll want to create a Weaviate vector store and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
Note: The self-query retriever requires you to have lark installed (pip install lark). We also need the weaviate-client package.
%pip install --upgrade --quiet lark weaviate-client
from langchain_community.vectorstores import Weaviate
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
docs = [
Document(
page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
),
Document(
page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
),
Document(
page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
),
Document(
page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
),
Document(
page_content="Toys come alive and have a blast doing so",
metadata={"year": 1995, "genre": "animated"},
),
Document(
page_content="Three men walk into the Zone, three men walk out of the Zone",
metadata={
"year": 1979,
"director": "Andrei Tarkovsky",
"genre": "science fiction",
"rating": 9.9,
},
),
]
vectorstore = Weaviate.from_documents(
docs, embeddings, weaviate_url="http://127.0.0.1:8080"
)
Creating our self-querying retriever
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI
metadata_field_info = [
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating", description="A 1-10 rating for the movie", type="float"
),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
Testing it out
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
query='dinosaur' filter=None limit=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'rating': None, 'year': 1995}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'genre': 'science fiction', 'rating': 9.9, 'year': 1979}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'genre': None, 'rating': 8.6, 'year': 2006})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'genre': None, 'rating': 8.3, 'year': 2019})]
Filter k
We can also use the self query retriever to specify k: the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True,
)
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
query='dinosaur' filter=None limit=2
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'rating': None, 'year': 1995})] |
https://python.langchain.com/docs/integrations/retrievers/bedrock/ | ## Bedrock (Knowledge Bases)
> [Knowledge bases for Amazon Bedrock](https://aws.amazon.com/bedrock/knowledge-bases/) is an Amazon Web Services (AWS) offering that lets you quickly build RAG applications by using your private data to customize foundation model (FM) responses.
> Implementing `RAG` requires organizations to perform several cumbersome steps to convert data into embeddings (vectors), store the embeddings in a specialized vector database, and build custom integrations into the database to search and retrieve text relevant to the user’s query. This can be time-consuming and inefficient.
> With `Knowledge Bases for Amazon Bedrock`, simply point to the location of your data in `Amazon S3`, and `Knowledge Bases for Amazon Bedrock` takes care of the entire ingestion workflow into your vector database. If you do not have an existing vector database, Amazon Bedrock creates an Amazon OpenSearch Serverless vector store for you. For retrievals, use the LangChain - Amazon Bedrock integration via the Retrieve API to retrieve relevant results for a user query from knowledge bases.
> A knowledge base can be configured through the [AWS Console](https://aws.amazon.com/console/) or by using the [AWS SDKs](https://aws.amazon.com/developer/tools/).
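As noted above, retrieval goes through the Bedrock Retrieve API. For reference, here is a minimal boto3 sketch of the equivalent low-level call (the region and knowledge base ID are placeholders):

```
import boto3

# Low-level Bedrock Agent Runtime call that the LangChain retriever builds on (sketch).
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")
response = client.retrieve(
    knowledgeBaseId="<YOUR_KNOWLEDGE_BASE_ID>",
    retrievalQuery={"text": "What did the president say about Ketanji Brown?"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 4}},
)
for result in response["retrievalResults"]:
    print(result["content"]["text"])
```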
## Using the Knowledge Bases Retriever[](#using-the-knowledge-bases-retriever "Direct link to Using the Knowledge Bases Retriever")
```
%pip install --upgrade --quiet boto3
```
```
from langchain_community.retrievers import AmazonKnowledgeBasesRetriever

retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="PUIJP4EQUA",
    retrieval_config={"vectorSearchConfiguration": {"numberOfResults": 4}},
)
```
```
query = "What did the president say about Ketanji Brown?"
retriever.get_relevant_documents(query=query)
```
### Using in a QA Chain[](#using-in-a-qa-chain "Direct link to Using in a QA Chain")
```
from botocore.client import Config
from langchain.chains import RetrievalQA
from langchain_community.llms import Bedrock

model_kwargs_claude = {"temperature": 0, "top_k": 10, "max_tokens_to_sample": 3000}
llm = Bedrock(model_id="anthropic.claude-v2", model_kwargs=model_kwargs_claude)
qa = RetrievalQA.from_chain_type(
    llm=llm, retriever=retriever, return_source_documents=True
)
qa(query)
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:42:36.596Z",
"loadedUrl": "https://python.langchain.com/docs/integrations/retrievers/bedrock/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/integrations/retrievers/bedrock/",
"description": "[Knowledge bases for Amazon",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3606",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"bedrock\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:42:36 GMT",
"etag": "W/\"2123702f2089ecb5266d9d20aacd308a\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::nhxcp-1713753756527-88d263617f4b"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/integrations/retrievers/bedrock/",
"property": "og:url"
},
{
"content": "Bedrock (Knowledge Bases) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "[Knowledge bases for Amazon",
"property": "og:description"
}
],
"title": "Bedrock (Knowledge Bases) | 🦜️🔗 LangChain"
} | Bedrock (Knowledge Bases)
Knowledge bases for Amazon Bedrock is an Amazon Web Services (AWS) offering that lets you quickly build RAG applications by using your private data to customize foundation model (FM) responses.
Implementing RAG requires organizations to perform several cumbersome steps to convert data into embeddings (vectors), store the embeddings in a specialized vector database, and build custom integrations into the database to search and retrieve text relevant to the user’s query. This can be time-consuming and inefficient.
With Knowledge Bases for Amazon Bedrock, simply point to the location of your data in Amazon S3, and Knowledge Bases for Amazon Bedrock takes care of the entire ingestion workflow into your vector database. If you do not have an existing vector database, Amazon Bedrock creates an Amazon OpenSearch Serverless vector store for you. For retrievals, use the Langchain - Amazon Bedrock integration via the Retrieve API to retrieve relevant results for a user query from knowledge bases.
Knowledge base can be configured through AWS Console or by using AWS SDKs.
Using the Knowledge Bases Retriever
%pip install --upgrade --quiet boto3
from langchain_community.retrievers import AmazonKnowledgeBasesRetriever
retriever = AmazonKnowledgeBasesRetriever(
knowledge_base_id="PUIJP4EQUA",
retrieval_config={"vectorSearchConfiguration": {"numberOfResults": 4}},
)
query = "What did the president say about Ketanji Brown?"
retriever.get_relevant_documents(query=query)
Using in a QA Chain
from botocore.client import Config
from langchain.chains import RetrievalQA
from langchain_community.llms import Bedrock
model_kwargs_claude = {"temperature": 0, "top_k": 10, "max_tokens_to_sample": 3000}
llm = Bedrock(model_id="anthropic.claude-v2", model_kwargs=model_kwargs_claude)
qa = RetrievalQA.from_chain_type(
llm=llm, retriever=retriever, return_source_documents=True
)
qa(query)
Help us out by providing feedback on this documentation page: |
## NeuralDB
NeuralDB is a CPU-friendly and fine-tunable retrieval engine developed by ThirdAI.
### **Initialization**[](#initialization "Direct link to initialization")
There are two initialization methods:

- From Scratch: basic model
- From Checkpoint: load a model that was previously saved
For all of the following initialization methods, the `thirdai_key` parameter can be omitted if the `THIRDAI_KEY` environment variable is set.
ThirdAI API keys can be obtained at [https://www.thirdai.com/try-bolt/](https://www.thirdai.com/try-bolt/)
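For example, setting the environment variable once lets you drop the `thirdai_key` argument from the calls below:

```
import os

# With THIRDAI_KEY set, the thirdai_key argument can be left out below.
os.environ["THIRDAI_KEY"] = "your-thirdai-key"
```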
```
from langchain.retrievers import NeuralDBRetriever

# From scratch
retriever = NeuralDBRetriever.from_scratch(thirdai_key="your-thirdai-key")

# From checkpoint
retriever = NeuralDBRetriever.from_checkpoint(
    # Path to a NeuralDB checkpoint. For example, if you call
    # retriever.save("/path/to/checkpoint.ndb") in one script, then you can
    # call NeuralDBRetriever.from_checkpoint("/path/to/checkpoint.ndb") in
    # another script to load the saved model.
    checkpoint="/path/to/checkpoint.ndb",
    thirdai_key="your-thirdai-key",
)
```
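The checkpoint loaded above is produced by saving a retriever you built earlier. A minimal sketch based on the `retriever.save(...)` call mentioned in the comment above (not verified against the current API):

```
# Build and populate a retriever, then persist it so another script can
# reload it with NeuralDBRetriever.from_checkpoint(...).
retriever = NeuralDBRetriever.from_scratch(thirdai_key="your-thirdai-key")
retriever.insert(sources=["/path/to/doc.pdf"])
retriever.save("/path/to/checkpoint.ndb")
```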
### **Inserting document sources**[](#inserting-document-sources "Direct link to inserting-document-sources")
```
retriever.insert(
    # If you have PDF, DOCX, or CSV files, you can directly pass the paths to the documents
    sources=["/path/to/doc.pdf", "/path/to/doc.docx", "/path/to/doc.csv"],
    # When True this means that the underlying model in the NeuralDB will
    # undergo unsupervised pretraining on the inserted files. Defaults to True.
    train=True,
    # Much faster insertion with a slight drop in performance. Defaults to True.
    fast_mode=True,
)

from thirdai import neural_db as ndb

retriever.insert(
    # If you have files in other formats, or prefer to configure how
    # your files are parsed, then you can pass in NeuralDB document objects
    # like this.
    sources=[
        ndb.PDF(
            "/path/to/doc.pdf",
            version="v2",
            chunk_size=100,
            metadata={"published": 2022},
        ),
        ndb.Unstructured("/path/to/deck.pptx"),
    ]
)
```
### **Retrieving documents**[](#retrieving-documents "Direct link to retrieving-documents")
To query the retriever, you can use the standard LangChain retriever method `get_relevant_documents`, which returns a list of LangChain Document objects. Each document object represents a chunk of text from the indexed files. For example, it may contain a paragraph from one of the indexed PDF files. In addition to the text, the document’s metadata field contains information such as the document’s ID, the source of this document (which file it came from), and the score of the document.
```
# This returns a list of LangChain Document objects
documents = retriever.get_relevant_documents("query", top_k=10)
```
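Because the results are plain `Document` objects, the text and the metadata described above (document ID, source file, score) can be inspected directly; exact metadata key names depend on the NeuralDB version:

```
for doc in documents:
    # Each chunk's text plus metadata such as the document id, source file,
    # and relevance score (key names may vary by version).
    print(doc.page_content[:200])
    print(doc.metadata)
```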
### **Fine tuning**[](#fine-tuning "Direct link to fine-tuning")
NeuralDBRetriever can be fine-tuned to user behavior and domain-specific knowledge. It can be fine-tuned in two ways:

1. Association: the retriever associates a source phrase with a target phrase. When the retriever sees the source phrase, it will also consider results that are relevant to the target phrase.
2. Upvoting: the retriever upweights the score of a document for a specific query. This is useful when you want to fine-tune the retriever to user behavior. For example, if a user searches "how is a car manufactured" and likes the returned document with id 52, then we can upvote the document with id 52 for the query "how is a car manufactured".
```
retriever.associate(source="source phrase", target="target phrase")

retriever.associate_batch(
    [
        ("source phrase 1", "target phrase 1"),
        ("source phrase 2", "target phrase 2"),
    ]
)

retriever.upvote(query="how is a car manufactured", document_id=52)

retriever.upvote_batch(
    [
        ("query 1", 52),
        ("query 2", 20),
    ]
)
```
* * *