url (string, 34-116 chars) | markdown (string, 0-150k chars, nullable) | screenshotUrl (null) | crawl (dict) | metadata (dict) | text (string, 0-147k chars)
---|---|---|---|---|---|
https://python.langchain.com/docs/modules/model_io/output_parsers/types/yaml/ | ## YAML parser
This output parser allows users to specify an arbitrary schema and query LLMs for outputs that conform to that schema, using YAML to format their response.
Keep in mind that large language models are leaky abstractions! You’ll have to use an LLM with sufficient capacity to generate well-formed YAML. In the OpenAI family, DaVinci can do this reliably, but Curie’s ability already drops off dramatically.
You can optionally use Pydantic to declare your data model.
```
from typing import List
from langchain.output_parsers import YamlOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI
```
```
model = ChatOpenAI(temperature=0)
```
```
# Define your desired data structure.
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")
```
```
# And a query intended to prompt a language model to populate the data structure.
joke_query = "Tell me a joke."

# Set up a parser + inject instructions into the prompt template.
parser = YamlOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
chain = prompt | model | parser
chain.invoke({"query": joke_query})
```
```
Joke(setup="Why don't scientists trust atoms?", punchline='Because they make up everything!')
```
Find the API documentation for [YamlOutputParser](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.yaml.YamlOutputParser.html#langchain.output_parsers.yaml.YamlOutputParser). | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:30.566Z",
"loadedUrl": "https://python.langchain.com/docs/modules/model_io/output_parsers/types/yaml/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/modules/model_io/output_parsers/types/yaml/",
"description": "This output parser allows users to specify an arbitrary schema and query",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3972",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"yaml\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:30 GMT",
"etag": "W/\"f4c590a2092e62564d97da12b6783a0d\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::zhqkj-1713753930514-ae4797000229"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/modules/model_io/output_parsers/types/yaml/",
"property": "og:url"
},
{
"content": "YAML parser | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This output parser allows users to specify an arbitrary schema and query",
"property": "og:description"
}
],
"title": "YAML parser | 🦜️🔗 LangChain"
} | YAML parser
This output parser allows users to specify an arbitrary schema and query LLMs for outputs that conform to that schema, using YAML to format their response.
Keep in mind that large language models are leaky abstractions! You’ll have to use an LLM with sufficient capacity to generate well-formed YAML. In the OpenAI family, DaVinci can do this reliably, but Curie’s ability already drops off dramatically.
You can optionally use Pydantic to declare your data model.
from typing import List
from langchain.output_parsers import YamlOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI
model = ChatOpenAI(temperature=0)
# Define your desired data structure.
class Joke(BaseModel):
setup: str = Field(description="question to set up a joke")
punchline: str = Field(description="answer to resolve the joke")
# And a query intended to prompt a language model to populate the data structure.
joke_query = "Tell me a joke."
# Set up a parser + inject instructions into the prompt template.
parser = YamlOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()},
)
chain = prompt | model | parser
chain.invoke({"query": joke_query})
Joke(setup="Why don't scientists trust atoms?", punchline='Because they make up everything!')
Find the API documentation for YamlOutputParser. |
https://python.langchain.com/docs/modules/model_io/prompts/ | ## Prompts
A prompt for a language model is a set of instructions or input provided by a user to guide the model's response, helping it understand the context and generate relevant and coherent language-based output, such as answering questions, completing sentences, or engaging in a conversation.
## [Quickstart](https://python.langchain.com/docs/modules/model_io/prompts/quick_start/)[](#quickstart "Direct link to quickstart")
This [quick start](https://python.langchain.com/docs/modules/model_io/prompts/quick_start/) provides a basic overview of how to work with prompts.
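For orientation, a minimal prompt template looks roughly like the sketch below (it uses `PromptTemplate`, which the quick start covers in detail):

```
from langchain_core.prompts import PromptTemplate

# A template with one input variable; formatting fills in the placeholder.
prompt = PromptTemplate.from_template("Tell me a joke about {topic}")
print(prompt.format(topic="bears"))  # -> Tell me a joke about bears
```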
## How-To Guides[](#how-to-guides "Direct link to How-To Guides")
We have many how-to guides for working with prompts. These include:
* [How to use few-shot examples with LLMs](https://python.langchain.com/docs/modules/model_io/prompts/few_shot_examples/)
* [How to use few-shot examples with chat models](https://python.langchain.com/docs/modules/model_io/prompts/few_shot_examples_chat/)
* [How to use example selectors](https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/)
* [How to partially format prompts](https://python.langchain.com/docs/modules/model_io/prompts/partial/)
* [How to compose prompts together](https://python.langchain.com/docs/modules/model_io/prompts/composition/)
## [Example Selector Types](https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/)[](#example-selector-types "Direct link to example-selector-types")
LangChain has a few different types of example selectors you can use off the shelf. You can explore those types [here](https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/). | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:31.032Z",
"loadedUrl": "https://python.langchain.com/docs/modules/model_io/prompts/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/modules/model_io/prompts/",
"description": "A prompt for a language model is a set of instructions or input provided by a user to",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "9111",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"prompts\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:30 GMT",
"etag": "W/\"c24d1f182a1437e7e1d3531927eac156\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::xhz4j-1713753930959-c9e561eee8f7"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/modules/model_io/prompts/",
"property": "og:url"
},
{
"content": "Prompts | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "A prompt for a language model is a set of instructions or input provided by a user to",
"property": "og:description"
}
],
"title": "Prompts | 🦜️🔗 LangChain"
} | Prompts
A prompt for a language model is a set of instructions or input provided by a user to guide the model's response, helping it understand the context and generate relevant and coherent language-based output, such as answering questions, completing sentences, or engaging in a conversation.
Quickstart
This quick start provides a basic overview of how to work with prompts.
How-To Guides
We have many how-to guides for working with prompts. These include:
How to use few-shot examples with LLMs
How to use few-shot examples with chat models
How to use example selectors
How to partially format prompts
How to compose prompts together
Example Selector Types
LangChain has a few different types of example selectors you can use off the shelf. You can explore those types here. |
https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/ngram_overlap/ | The `NGramOverlapExampleSelector` selects and orders examples based on which examples are most similar to the input, according to an ngram overlap score. The ngram overlap score is a float between 0.0 and 1.0, inclusive.
The selector allows for a threshold score to be set. Examples with an ngram overlap score less than or equal to the threshold are excluded. The threshold is set to -1.0 by default, so it will not exclude any examples, only reorder them. Setting the threshold to 0.0 will exclude examples that have no ngram overlap with the input.
```
from langchain_community.example_selector.ngram_overlap import (
    NGramOverlapExampleSelector,
)
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)

# Examples of a fictional translation task.
examples = [
    {"input": "See Spot run.", "output": "Ver correr a Spot."},
    {"input": "My dog barks.", "output": "Mi perro ladra."},
    {"input": "Spot can run.", "output": "Spot puede correr."},
]
```
```
example_selector = NGramOverlapExampleSelector(
    # The examples it has available to choose from.
    examples=examples,
    # The PromptTemplate being used to format the examples.
    example_prompt=example_prompt,
    # The threshold, at which selector stops.
    # It is set to -1.0 by default.
    threshold=-1.0,
    # For negative threshold:
    # Selector sorts examples by ngram overlap score, and excludes none.
    # For threshold greater than 1.0:
    # Selector excludes all examples, and returns an empty list.
    # For threshold equal to 0.0:
    # Selector sorts examples by ngram overlap score,
    # and excludes those with no ngram overlap with input.
)
dynamic_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the Spanish translation of every input",
    suffix="Input: {sentence}\nOutput:",
    input_variables=["sentence"],
)
```
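The output below comes from formatting the prompt for a new sentence; the call isn’t shown at this point on the page, but it matches the pattern used explicitly in the next cell:

```
# Formatting call assumed here (the same call appears further down the page).
print(dynamic_prompt.format(sentence="Spot can run fast."))
```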
```
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: See Spot run.
Output: Ver correr a Spot.
Input: My dog barks.
Output: Mi perro ladra.
Input: Spot can run fast.
Output:
```
```
# You can add examples to NGramOverlapExampleSelector as well.
new_example = {"input": "Spot plays fetch.", "output": "Spot juega a buscar."}
example_selector.add_example(new_example)
print(dynamic_prompt.format(sentence="Spot can run fast."))
```
```
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: See Spot run.
Output: Ver correr a Spot.
Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: My dog barks.
Output: Mi perro ladra.
Input: Spot can run fast.
Output:
```
```
# You can set a threshold at which examples are excluded.
# For example, setting threshold equal to 0.0
# excludes examples with no ngram overlaps with input.
# Since "My dog barks." has no ngram overlaps with "Spot can run fast."
# it is excluded.
example_selector.threshold = 0.0
print(dynamic_prompt.format(sentence="Spot can run fast."))
```
```
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: See Spot run.
Output: Ver correr a Spot.
Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: Spot can run fast.
Output:
```
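The cells that produce the final two outputs are missing from this page. Going by the comments above, they presumably set a small nonzero threshold (excluding low-overlap examples) and then a threshold greater than 1.0 (excluding everything); the exact values below are assumptions:

```
# Assumed cells (not shown on this page).
example_selector.threshold = 0.09  # small nonzero threshold: keep only high-overlap examples
print(dynamic_prompt.format(sentence="Spot can play fetch."))

example_selector.threshold = 1.0 + 1e-9  # threshold above 1.0: exclude all examples
print(dynamic_prompt.format(sentence="Spot can play fetch."))
```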
```
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: Spot can play fetch.
Output:
```
```
Give the Spanish translation of every input
Input: Spot can play fetch.
Output:
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:31.394Z",
"loadedUrl": "https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/ngram_overlap/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/ngram_overlap/",
"description": "The NGramOverlapExampleSelector selects and orders examples based on",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "693",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"ngram_overlap\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:31 GMT",
"etag": "W/\"cef5b2462b9412044d7fe9de604b9a24\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::xbglm-1713753931339-81eab280ff6a"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/ngram_overlap/",
"property": "og:url"
},
{
"content": "Select by n-gram overlap | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "The NGramOverlapExampleSelector selects and orders examples based on",
"property": "og:description"
}
],
"title": "Select by n-gram overlap | 🦜️🔗 LangChain"
} | The NGramOverlapExampleSelector selects and orders examples based on which examples are most similar to the input, according to an ngram overlap score. The ngram overlap score is a float between 0.0 and 1.0, inclusive.
The selector allows for a threshold score to be set. Examples with an ngram overlap score less than or equal to the threshold are excluded. The threshold is set to -1.0 by default, so it will not exclude any examples, only reorder them. Setting the threshold to 0.0 will exclude examples that have no ngram overlap with the input.
from langchain_community.example_selector.ngram_overlap import (
NGramOverlapExampleSelector,
)
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
# Examples of a fictional translation task.
examples = [
{"input": "See Spot run.", "output": "Ver correr a Spot."},
{"input": "My dog barks.", "output": "Mi perro ladra."},
{"input": "Spot can run.", "output": "Spot puede correr."},
]
example_selector = NGramOverlapExampleSelector(
# The examples it has available to choose from.
examples=examples,
# The PromptTemplate being used to format the examples.
example_prompt=example_prompt,
# The threshold, at which selector stops.
# It is set to -1.0 by default.
threshold=-1.0,
# For negative threshold:
# Selector sorts examples by ngram overlap score, and excludes none.
# For threshold greater than 1.0:
# Selector excludes all examples, and returns an empty list.
# For threshold equal to 0.0:
# Selector sorts examples by ngram overlap score,
# and excludes those with no ngram overlap with input.
)
dynamic_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the Spanish translation of every input",
suffix="Input: {sentence}\nOutput:",
input_variables=["sentence"],
)
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: See Spot run.
Output: Ver correr a Spot.
Input: My dog barks.
Output: Mi perro ladra.
Input: Spot can run fast.
Output:
# You can add examples to NGramOverlapExampleSelector as well.
new_example = {"input": "Spot plays fetch.", "output": "Spot juega a buscar."}
example_selector.add_example(new_example)
print(dynamic_prompt.format(sentence="Spot can run fast."))
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: See Spot run.
Output: Ver correr a Spot.
Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: My dog barks.
Output: Mi perro ladra.
Input: Spot can run fast.
Output:
# You can set a threshold at which examples are excluded.
# For example, setting threshold equal to 0.0
# excludes examples with no ngram overlaps with input.
# Since "My dog barks." has no ngram overlaps with "Spot can run fast."
# it is excluded.
example_selector.threshold = 0.0
print(dynamic_prompt.format(sentence="Spot can run fast."))
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: See Spot run.
Output: Ver correr a Spot.
Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: Spot can run fast.
Output:
Give the Spanish translation of every input
Input: Spot can run.
Output: Spot puede correr.
Input: Spot plays fetch.
Output: Spot juega a buscar.
Input: Spot can play fetch.
Output:
Give the Spanish translation of every input
Input: Spot can play fetch.
Output: |
https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/mmr/ | The `MaxMarginalRelevanceExampleSelector` selects examples based on a combination of which examples are most similar to the inputs, while also optimizing for diversity. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs, and then iteratively adding them while penalizing them for closeness to already selected examples.
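Conceptually, MMR trades off relevance against redundancy when picking each example. Below is a minimal sketch of that selection rule, assuming a generic `similarity` function and a weighting factor `lambda_mult`; it is an illustration only, not LangChain’s exact implementation.

```
# Conceptual MMR selection: pick the candidate with the best balance of
# relevance to the query and dissimilarity to already-selected items.
def mmr_select(query, candidates, similarity, k=2, lambda_mult=0.5):
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def score(c):
            relevance = similarity(query, c)
            redundancy = max((similarity(c, s) for s in selected), default=0.0)
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage with a trivial similarity on numbers (illustration only).
picked = mmr_select(5, [1, 4, 6, 9], lambda a, b: -abs(a - b), k=2)
```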
```
from langchain_community.vectorstores import FAISS
from langchain_core.example_selectors import (
    MaxMarginalRelevanceExampleSelector,
    SemanticSimilarityExampleSelector,
)
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate
from langchain_openai import OpenAIEmbeddings

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)

# Examples of a pretend task of creating antonyms.
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
    {"input": "sunny", "output": "gloomy"},
    {"input": "windy", "output": "calm"},
]
```
```
example_selector = MaxMarginalRelevanceExampleSelector.from_examples(
    # The list of examples available to select from.
    examples,
    # The embedding class used to produce embeddings which are used to measure semantic similarity.
    OpenAIEmbeddings(),
    # The VectorStore class that is used to store the embeddings and do a similarity search over.
    FAISS,
    # The number of examples to produce.
    k=2,
)
mmr_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
```
```
# Input is a feeling, so should select the happy/sad example as the first one
print(mmr_prompt.format(adjective="worried"))
```
```
Give the antonym of every input
Input: happy
Output: sad
Input: windy
Output: calm
Input: worried
Output:
```
```
# Let's compare this to what we would just get if we went solely off of similarity,
# by using SemanticSimilarityExampleSelector instead of MaxMarginalRelevanceExampleSelector.
example_selector = SemanticSimilarityExampleSelector.from_examples(
    # The list of examples available to select from.
    examples,
    # The embedding class used to produce embeddings which are used to measure semantic similarity.
    OpenAIEmbeddings(),
    # The VectorStore class that is used to store the embeddings and do a similarity search over.
    FAISS,
    # The number of examples to produce.
    k=2,
)
similar_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
print(similar_prompt.format(adjective="worried"))
```
```
Give the antonym of every input
Input: happy
Output: sad
Input: sunny
Output: gloomy
Input: worried
Output:
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:31.700Z",
"loadedUrl": "https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/mmr/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/mmr/",
"description": "The MaxMarginalRelevanceExampleSelector selects examples based on a",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3738",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"mmr\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:31 GMT",
"etag": "W/\"148ea208c6fa69c25f40a60065fa8ee3\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::k85gt-1713753931402-9e950c580c79"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/mmr/",
"property": "og:url"
},
{
"content": "Select by maximal marginal relevance (MMR) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "The MaxMarginalRelevanceExampleSelector selects examples based on a",
"property": "og:description"
}
],
"title": "Select by maximal marginal relevance (MMR) | 🦜️🔗 LangChain"
} | The MaxMarginalRelevanceExampleSelector selects examples based on a combination of which examples are most similar to the inputs, while also optimizing for diversity. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs, and then iteratively adding them while penalizing them for closeness to already selected examples.
from langchain_community.vectorstores import FAISS
from langchain_core.example_selectors import (
MaxMarginalRelevanceExampleSelector,
SemanticSimilarityExampleSelector,
)
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate
from langchain_openai import OpenAIEmbeddings
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
# Examples of a pretend task of creating antonyms.
examples = [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
{"input": "energetic", "output": "lethargic"},
{"input": "sunny", "output": "gloomy"},
{"input": "windy", "output": "calm"},
]
example_selector = MaxMarginalRelevanceExampleSelector.from_examples(
# The list of examples available to select from.
examples,
# The embedding class used to produce embeddings which are used to measure semantic similarity.
OpenAIEmbeddings(),
# The VectorStore class that is used to store the embeddings and do a similarity search over.
FAISS,
# The number of examples to produce.
k=2,
)
mmr_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
# Input is a feeling, so should select the happy/sad example as the first one
print(mmr_prompt.format(adjective="worried"))
Give the antonym of every input
Input: happy
Output: sad
Input: windy
Output: calm
Input: worried
Output:
# Let's compare this to what we would just get if we went solely off of similarity,
# by using SemanticSimilarityExampleSelector instead of MaxMarginalRelevanceExampleSelector.
example_selector = SemanticSimilarityExampleSelector.from_examples(
# The list of examples available to select from.
examples,
# The embedding class used to produce embeddings which are used to measure semantic similarity.
OpenAIEmbeddings(),
# The VectorStore class that is used to store the embeddings and do a similarity search over.
FAISS,
# The number of examples to produce.
k=2,
)
similar_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
print(similar_prompt.format(adjective="worried"))
Give the antonym of every input
Input: happy
Output: sad
Input: sunny
Output: gloomy
Input: worried
Output: |
https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/similarity/ | This object selects examples based on similarity to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs.
```
from langchain_chroma import Chroma
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate
from langchain_openai import OpenAIEmbeddings

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)

# Examples of a pretend task of creating antonyms.
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
    {"input": "sunny", "output": "gloomy"},
    {"input": "windy", "output": "calm"},
]
```
```
example_selector = SemanticSimilarityExampleSelector.from_examples(
    # The list of examples available to select from.
    examples,
    # The embedding class used to produce embeddings which are used to measure semantic similarity.
    OpenAIEmbeddings(),
    # The VectorStore class that is used to store the embeddings and do a similarity search over.
    Chroma,
    # The number of examples to produce.
    k=1,
)
similar_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
```
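The two outputs below presumably come from formatting the prompt for a feeling and then a measurement; the calls are not shown on this page, so the inputs below are inferred from the adjectives visible in the outputs themselves:

```
# Assumed calls (not shown on this page) that would produce the outputs below.
print(similar_prompt.format(adjective="worried"))
print(similar_prompt.format(adjective="large"))
```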
```
Give the antonym of every input
Input: happy
Output: sad
Input: worried
Output:
```
```
Give the antonym of every input
Input: tall
Output: short
Input: large
Output:
```
```
# You can add new examples to the SemanticSimilarityExampleSelector as well
similar_prompt.example_selector.add_example(
    {"input": "enthusiastic", "output": "apathetic"}
)
print(similar_prompt.format(adjective="passionate"))
```
```
Give the antonym of every input
Input: enthusiastic
Output: apathetic
Input: passionate
Output:
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:31.580Z",
"loadedUrl": "https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/similarity/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/similarity/",
"description": "This object selects examples based on similarity to the inputs. It does",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "7986",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"similarity\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:31 GMT",
"etag": "W/\"de47995404e8a1e1457523b2c75b56de\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::pdtx6-1713753931402-fdc80e07cbf9"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/similarity/",
"property": "og:url"
},
{
"content": "Select by similarity | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This object selects examples based on similarity to the inputs. It does",
"property": "og:description"
}
],
"title": "Select by similarity | 🦜️🔗 LangChain"
} | This object selects examples based on similarity to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs.
from langchain_chroma import Chroma
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate
from langchain_openai import OpenAIEmbeddings
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
# Examples of a pretend task of creating antonyms.
examples = [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
{"input": "energetic", "output": "lethargic"},
{"input": "sunny", "output": "gloomy"},
{"input": "windy", "output": "calm"},
]
example_selector = SemanticSimilarityExampleSelector.from_examples(
# The list of examples available to select from.
examples,
# The embedding class used to produce embeddings which are used to measure semantic similarity.
OpenAIEmbeddings(),
# The VectorStore class that is used to store the embeddings and do a similarity search over.
Chroma,
# The number of examples to produce.
k=1,
)
similar_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
Give the antonym of every input
Input: happy
Output: sad
Input: worried
Output:
Give the antonym of every input
Input: tall
Output: short
Input: large
Output:
# You can add new examples to the SemanticSimilarityExampleSelector as well
similar_prompt.example_selector.add_example(
{"input": "enthusiastic", "output": "apathetic"}
)
print(similar_prompt.format(adjective="passionate"))
Give the antonym of every input
Input: enthusiastic
Output: apathetic
Input: passionate
Output: |
https://python.langchain.com/docs/modules/model_io/prompts/composition/ | ## Composition
LangChain provides a user-friendly interface for composing different parts of prompts together. You can do this with either string prompts or chat prompts. Constructing prompts this way allows for easy reuse of components.
## String prompt composition[](#string-prompt-composition "Direct link to String prompt composition")
When working with string prompts, each template is joined together. You can work with either prompts directly or strings (the first element in the list needs to be a prompt).
```
from langchain_core.prompts import PromptTemplate
```
```
prompt = (
    PromptTemplate.from_template("Tell me a joke about {topic}")
    + ", make it funny"
    + "\n\nand in {language}"
)
```
```
PromptTemplate(input_variables=['language', 'topic'], template='Tell me a joke about {topic}, make it funny\n\nand in {language}')
```
```
prompt.format(topic="sports", language="spanish")
```
```
'Tell me a joke about sports, make it funny\n\nand in spanish'
```
You can also use it in an LLMChain, just like before.
```
from langchain.chains import LLMChain
from langchain_openai import ChatOpenAI
```
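Note that `model` is not defined on this page; the chain below assumes a chat model instance has already been created, for example:

```
# Assumed setup (not shown on this page), using the ChatOpenAI imported above.
model = ChatOpenAI(temperature=0)
```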
```
chain = LLMChain(llm=model, prompt=prompt)
```
```
chain.run(topic="sports", language="spanish")
```
```
'¿Por qué el futbolista llevaba un paraguas al partido?\n\nPorque pronosticaban lluvia de goles.'
```
## Chat prompt composition[](#chat-prompt-composition "Direct link to Chat prompt composition")
A chat prompt is made up of a list of messages. Purely for developer experience, we’ve added a convenient way to create these prompts. In this pipeline, each new element is a new message in the final prompt.
```
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
```
First, let’s initialize the base ChatPromptTemplate with a system message. It doesn’t have to start with a system message, but it’s often good practice.
```
prompt = SystemMessage(content="You are a nice pirate")
```
You can then easily create a pipeline combining it with other messages _or_ message templates. Use a `Message` when there are no variables to be formatted, and use a `MessageTemplate` when there are variables to be formatted. You can also use just a string (note: this will automatically get inferred as a `HumanMessagePromptTemplate`).
```
new_prompt = (
    prompt + HumanMessage(content="hi") + AIMessage(content="what?") + "{input}"
)
```
Under the hood, this creates an instance of the ChatPromptTemplate class, so you can use it just as you did before!
```
new_prompt.format_messages(input="i said hi")
```
```
[SystemMessage(content='You are a nice pirate', additional_kwargs={}),
 HumanMessage(content='hi', additional_kwargs={}, example=False),
 AIMessage(content='what?', additional_kwargs={}, example=False),
 HumanMessage(content='i said hi', additional_kwargs={}, example=False)]
```
You can also use it in an LLMChain, just like before.
```
from langchain.chains import LLMChain
from langchain_openai import ChatOpenAI
```
```
chain = LLMChain(llm=model, prompt=new_prompt)
```
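The page does not show the call that produced the response below; presumably the chain was run with the same input as before, for example:

```
# Assumed call (not shown on this page), reusing the earlier input.
chain.run("i said hi")
```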
```
'Oh, hello! How can I assist you today?'
```
## Using PipelinePrompt[](#using-pipelineprompt "Direct link to Using PipelinePrompt")
LangChain includes an abstraction [PipelinePromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.pipeline.PipelinePromptTemplate.html), which can be useful when you want to reuse parts of prompts. A PipelinePrompt consists of two main parts:
* Final prompt: The final prompt that is returned
* Pipeline prompts: A list of tuples, consisting of a string name and a prompt template. Each prompt template will be formatted and then passed to future prompt templates as a variable with the same name.
```
from langchain_core.prompts.pipeline import PipelinePromptTemplate
from langchain_core.prompts.prompt import PromptTemplate
```
```
full_template = """{introduction}
{example}
{start}"""
full_prompt = PromptTemplate.from_template(full_template)
```
```
introduction_template = """You are impersonating {person}."""
introduction_prompt = PromptTemplate.from_template(introduction_template)
```
```
example_template = """Here's an example of an interaction:
Q: {example_q}
A: {example_a}"""
example_prompt = PromptTemplate.from_template(example_template)
```
```
start_template = """Now, do this for real!
Q: {input}
A:"""
start_prompt = PromptTemplate.from_template(start_template)
```
```
input_prompts = [
    ("introduction", introduction_prompt),
    ("example", example_prompt),
    ("start", start_prompt),
]
pipeline_prompt = PipelinePromptTemplate(
    final_prompt=full_prompt, pipeline_prompts=input_prompts
)
```
```
pipeline_prompt.input_variables
```
```
['example_q', 'person', 'input', 'example_a']
```
```
print(
    pipeline_prompt.format(
        person="Elon Musk",
        example_q="What's your favorite car?",
        example_a="Tesla",
        input="What's your favorite social media site?",
    )
)
```
```
You are impersonating Elon Musk.
Here's an example of an interaction:
Q: What's your favorite car?
A: Tesla
Now, do this for real!
Q: What's your favorite social media site?
A:
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:31.882Z",
"loadedUrl": "https://python.langchain.com/docs/modules/model_io/prompts/composition/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/modules/model_io/prompts/composition/",
"description": "LangChain provides a user friendly interface for composing different",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "8927",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"composition\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:31 GMT",
"etag": "W/\"8d799b8f92b044b911601e937bb2f21a\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::s6nqz-1713753931358-2fb45a22ddda"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/modules/model_io/prompts/composition/",
"property": "og:url"
},
{
"content": "Composition | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "LangChain provides a user friendly interface for composing different",
"property": "og:description"
}
],
"title": "Composition | 🦜️🔗 LangChain"
} | Composition
LangChain provides a user-friendly interface for composing different parts of prompts together. You can do this with either string prompts or chat prompts. Constructing prompts this way allows for easy reuse of components.
String prompt composition
When working with string prompts, each template is joined together. You can work with either prompts directly or strings (the first element in the list needs to be a prompt).
from langchain_core.prompts import PromptTemplate
prompt = (
PromptTemplate.from_template("Tell me a joke about {topic}")
+ ", make it funny"
+ "\n\nand in {language}"
)
PromptTemplate(input_variables=['language', 'topic'], template='Tell me a joke about {topic}, make it funny\n\nand in {language}')
prompt.format(topic="sports", language="spanish")
'Tell me a joke about sports, make it funny\n\nand in spanish'
You can also use it in an LLMChain, just like before.
from langchain.chains import LLMChain
from langchain_openai import ChatOpenAI
chain = LLMChain(llm=model, prompt=prompt)
chain.run(topic="sports", language="spanish")
'¿Por qué el futbolista llevaba un paraguas al partido?\n\nPorque pronosticaban lluvia de goles.'
Chat prompt composition
A chat prompt is made up of a list of messages. Purely for developer experience, we’ve added a convenient way to create these prompts. In this pipeline, each new element is a new message in the final prompt.
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
First, let’s initialize the base ChatPromptTemplate with a system message. It doesn’t have to start with a system message, but it’s often good practice.
prompt = SystemMessage(content="You are a nice pirate")
You can then easily create a pipeline combining it with other messages or message templates. Use a Message when there are no variables to be formatted, and use a MessageTemplate when there are variables to be formatted. You can also use just a string (note: this will automatically get inferred as a HumanMessagePromptTemplate).
new_prompt = (
prompt + HumanMessage(content="hi") + AIMessage(content="what?") + "{input}"
)
Under the hood, this creates an instance of the ChatPromptTemplate class, so you can use it just as you did before!
new_prompt.format_messages(input="i said hi")
[SystemMessage(content='You are a nice pirate', additional_kwargs={}),
HumanMessage(content='hi', additional_kwargs={}, example=False),
AIMessage(content='what?', additional_kwargs={}, example=False),
HumanMessage(content='i said hi', additional_kwargs={}, example=False)]
You can also use it in an LLMChain, just like before.
from langchain.chains import LLMChain
from langchain_openai import ChatOpenAI
chain = LLMChain(llm=model, prompt=new_prompt)
'Oh, hello! How can I assist you today?'
Using PipelinePrompt
LangChain includes an abstraction PipelinePromptTemplate, which can be useful when you want to reuse parts of prompts. A PipelinePrompt consists of two main parts:
Final prompt: The final prompt that is returned
Pipeline prompts: A list of tuples, consisting of a string name and a prompt template. Each prompt template will be formatted and then passed to future prompt templates as a variable with the same name.
from langchain_core.prompts.pipeline import PipelinePromptTemplate
from langchain_core.prompts.prompt import PromptTemplate
full_template = """{introduction}
{example}
{start}"""
full_prompt = PromptTemplate.from_template(full_template)
introduction_template = """You are impersonating {person}."""
introduction_prompt = PromptTemplate.from_template(introduction_template)
example_template = """Here's an example of an interaction:
Q: {example_q}
A: {example_a}"""
example_prompt = PromptTemplate.from_template(example_template)
start_template = """Now, do this for real!
Q: {input}
A:"""
start_prompt = PromptTemplate.from_template(start_template)
input_prompts = [
("introduction", introduction_prompt),
("example", example_prompt),
("start", start_prompt),
]
pipeline_prompt = PipelinePromptTemplate(
final_prompt=full_prompt, pipeline_prompts=input_prompts
)
pipeline_prompt.input_variables
['example_q', 'person', 'input', 'example_a']
print(
pipeline_prompt.format(
person="Elon Musk",
example_q="What's your favorite car?",
example_a="Tesla",
input="What's your favorite social media site?",
)
)
You are impersonating Elon Musk.
Here's an example of an interaction:
Q: What's your favorite car?
A: Tesla
Now, do this for real!
Q: What's your favorite social media site?
A: |
https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/length_based/ | This example selector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more.
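As the commented-out default in the block below indicates, length is measured as a simple whitespace word count; here is a minimal sketch of that measure, for intuition:

```
import re

def get_text_length(text: str) -> int:
    # Count whitespace-separated words, mirroring the commented-out default below.
    return len(re.split("\n| ", text))

print(get_text_length("Input: happy\nOutput: sad"))  # -> 4
```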
```
from langchain_core.example_selectors import LengthBasedExampleSelector
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate

# Examples of a pretend task of creating antonyms.
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
    {"input": "sunny", "output": "gloomy"},
    {"input": "windy", "output": "calm"},
]
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
example_selector = LengthBasedExampleSelector(
    # The examples it has available to choose from.
    examples=examples,
    # The PromptTemplate being used to format the examples.
    example_prompt=example_prompt,
    # The maximum length that the formatted examples should be.
    # Length is measured by the get_text_length function below.
    max_length=25,
    # The function used to get the length of a string, which is used
    # to determine which examples to include. It is commented out because
    # it is provided as a default value if none is specified.
    # get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x))
)
dynamic_prompt = FewShotPromptTemplate(
    # We provide an ExampleSelector instead of examples.
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
```
```
# An example with small input, so it selects all examples.
print(dynamic_prompt.format(adjective="big"))
```
```
Give the antonym of every input
Input: happy
Output: sad
Input: tall
Output: short
Input: energetic
Output: lethargic
Input: sunny
Output: gloomy
Input: windy
Output: calm
Input: big
Output:
```
```
# An example with long input, so it selects only one example.
long_string = "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else"
print(dynamic_prompt.format(adjective=long_string))
```
```
Give the antonym of every input
Input: happy
Output: sad
Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else
Output:
```
```
# You can add an example to an example selector as well.
new_example = {"input": "big", "output": "small"}
dynamic_prompt.example_selector.add_example(new_example)
print(dynamic_prompt.format(adjective="enthusiastic"))
```
```
Give the antonym of every input
Input: happy
Output: sad
Input: tall
Output: short
Input: energetic
Output: lethargic
Input: sunny
Output: gloomy
Input: windy
Output: calm
Input: big
Output: small
Input: enthusiastic
Output:
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:32.141Z",
"loadedUrl": "https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/length_based/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/length_based/",
"description": "This example selector selects which examples to use based on length.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "5057",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"length_based\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:31 GMT",
"etag": "W/\"2ba8a3a1ff801b1aa43c3d39b8990ed1\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::hf2cn-1713753931589-427e926fd460"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/length_based/",
"property": "og:url"
},
{
"content": "Select by length | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This example selector selects which examples to use based on length.",
"property": "og:description"
}
],
"title": "Select by length | 🦜️🔗 LangChain"
} | This example selector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more.
from langchain_core.example_selectors import LengthBasedExampleSelector
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate
# Examples of a pretend task of creating antonyms.
examples = [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
{"input": "energetic", "output": "lethargic"},
{"input": "sunny", "output": "gloomy"},
{"input": "windy", "output": "calm"},
]
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
example_selector = LengthBasedExampleSelector(
# The examples it has available to choose from.
examples=examples,
# The PromptTemplate being used to format the examples.
example_prompt=example_prompt,
# The maximum length that the formatted examples should be.
# Length is measured by the get_text_length function below.
max_length=25,
# The function used to get the length of a string, which is used
# to determine which examples to include. It is commented out because
# it is provided as a default value if none is specified.
# get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x))
)
dynamic_prompt = FewShotPromptTemplate(
# We provide an ExampleSelector instead of examples.
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
# An example with small input, so it selects all examples.
print(dynamic_prompt.format(adjective="big"))
Give the antonym of every input
Input: happy
Output: sad
Input: tall
Output: short
Input: energetic
Output: lethargic
Input: sunny
Output: gloomy
Input: windy
Output: calm
Input: big
Output:
# An example with long input, so it selects only one example.
long_string = "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else"
print(dynamic_prompt.format(adjective=long_string))
Give the antonym of every input
Input: happy
Output: sad
Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else
Output:
# You can add an example to an example selector as well.
new_example = {"input": "big", "output": "small"}
dynamic_prompt.example_selector.add_example(new_example)
print(dynamic_prompt.format(adjective="enthusiastic"))
Give the antonym of every input
Input: happy
Output: sad
Input: tall
Output: short
Input: energetic
Output: lethargic
Input: sunny
Output: gloomy
Input: windy
Output: calm
Input: big
Output: small
Input: enthusiastic
Output: |
https://python.langchain.com/docs/modules/model_io/prompts/few_shot_examples_chat/ | This notebook covers how to use few-shot examples in chat models. There does not appear to be solid consensus on how best to do few-shot prompting, and the optimal prompt compilation will likely vary by model. Because of this, we provide few-shot prompt templates like the [FewShotChatMessagePromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.few_shot.FewShotChatMessagePromptTemplate.html?highlight=fewshot#langchain_core.prompts.few_shot.FewShotChatMessagePromptTemplate) as a flexible starting point, and you can modify or replace them as you see fit.
The goal of few-shot prompt templates is to dynamically select examples based on an input, and then format the examples in a final prompt to provide to the model.
**Note:** The following code examples are for chat models. For similar few-shot prompt examples for completion models (LLMs), see the [few-shot prompt templates](https://python.langchain.com/docs/modules/model_io/prompts/few_shot_examples/) guide.
### Fixed Examples[](#fixed-examples "Direct link to Fixed Examples")
The most basic (and common) few-shot prompting technique is to use a fixed prompt example. This way you can select a chain, evaluate it, and avoid worrying about additional moving parts in production.
The basic components of the template are: - `examples`: A list of dictionary examples to include in the final prompt. - `example_prompt`: converts each example into 1 or more messages through its [`format_messages`](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html?highlight=format_messages#langchain_core.prompts.chat.ChatPromptTemplate.format_messages) method. A common example would be to convert each example into one human message and one AI message response, or a human message followed by a function call message.
Below is a simple demonstration. First, import the modules for this example:
```
from langchain_core.prompts import (
    ChatPromptTemplate,
    FewShotChatMessagePromptTemplate,
)
```
Then, define the examples you’d like to include.
```
examples = [
    {"input": "2+2", "output": "4"},
    {"input": "2+3", "output": "5"},
]
```
Next, assemble them into the few-shot prompt template.
```
# This is a prompt template used to format each individual example.
example_prompt = ChatPromptTemplate.from_messages(
    [
        ("human", "{input}"),
        ("ai", "{output}"),
    ]
)
few_shot_prompt = FewShotChatMessagePromptTemplate(
    example_prompt=example_prompt,
    examples=examples,
)
print(few_shot_prompt.format())
```
```
Human: 2+2
AI: 4
Human: 2+3
AI: 5
```
Finally, assemble your final prompt and use it with a model.
```
final_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a wondrous wizard of math."),
        few_shot_prompt,
        ("human", "{input}"),
    ]
)
```
```
from langchain_community.chat_models import ChatAnthropic

chain = final_prompt | ChatAnthropic(temperature=0.0)
chain.invoke({"input": "What's the square of a triangle?"})
```
```
AIMessage(content=' Triangles do not have a "square". A square refers to a shape with 4 equal sides and 4 right angles. Triangles have 3 sides and 3 angles.\n\nThe area of a triangle can be calculated using the formula:\n\nA = 1/2 * b * h\n\nWhere:\n\nA is the area \nb is the base (the length of one of the sides)\nh is the height (the length from the base to the opposite vertex)\n\nSo the area depends on the specific dimensions of the triangle. There is no single "square of a triangle". The area can vary greatly depending on the base and height measurements.', additional_kwargs={}, example=False)
```
## Dynamic few-shot prompting[](#dynamic-few-shot-prompting "Direct link to Dynamic few-shot prompting")
Sometimes you may want to condition which examples are shown based on the input. For this, you can replace the `examples` with an `example_selector`. The other components remain the same as above! To review, the dynamic few-shot prompt template would look like:
* `example_selector`: responsible for selecting few-shot examples (and the order in which they are returned) for a given input. These implement the [BaseExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.base.BaseExampleSelector.html?highlight=baseexampleselector#langchain_core.example_selectors.base.BaseExampleSelector) interface. A common example is the vectorstore-backed [SemanticSimilarityExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.SemanticSimilarityExampleSelector.html?highlight=semanticsimilarityexampleselector#langchain_core.example_selectors.semantic_similarity.SemanticSimilarityExampleSelector)
* `example_prompt`: convert each example into 1 or more messages through its [`format_messages`](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html?highlight=chatprompttemplate#langchain_core.prompts.chat.ChatPromptTemplate.format_messages) method. A common example would be to convert each example into one human message and one AI message response, or a human message followed by a function call message.
These once again can be composed with other messages and chat templates to assemble your final prompt.
```
from langchain_chroma import Chroma
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_openai import OpenAIEmbeddings
```
Since we are using a vectorstore to select examples based on semantic similarity, we will want to first populate the store.
```
examples = [
    {"input": "2+2", "output": "4"},
    {"input": "2+3", "output": "5"},
    {"input": "2+4", "output": "6"},
    {"input": "What did the cow say to the moon?", "output": "nothing at all"},
    {
        "input": "Write me a poem about the moon",
        "output": "One for the moon, and one for me, who are we to talk about the moon?",
    },
]
to_vectorize = [" ".join(example.values()) for example in examples]
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_texts(to_vectorize, embeddings, metadatas=examples)
```
#### Create the `example_selector`[](#create-the-example_selector "Direct link to create-the-example_selector")
With a vectorstore created, you can create the `example_selector`. Here we will instruct it to only fetch the top 2 examples.
```
example_selector = SemanticSimilarityExampleSelector(
    vectorstore=vectorstore,
    k=2,
)

# The prompt template will load examples by passing the input to the `select_examples` method
example_selector.select_examples({"input": "horse"})
```
```
[{'input': 'What did the cow say to the moon?', 'output': 'nothing at all'},
 {'input': '2+4', 'output': '6'}]
```
#### Create prompt template[](#create-prompt-template "Direct link to Create prompt template")
Assemble the prompt template, using the `example_selector` created above.
```
from langchain_core.prompts import (
    ChatPromptTemplate,
    FewShotChatMessagePromptTemplate,
)

# Define the few-shot prompt.
few_shot_prompt = FewShotChatMessagePromptTemplate(
    # The input variables select the values to pass to the example_selector
    input_variables=["input"],
    example_selector=example_selector,
    # Define how each example will be formatted.
    # In this case, each example will become 2 messages:
    # 1 human, and 1 AI
    example_prompt=ChatPromptTemplate.from_messages(
        [("human", "{input}"), ("ai", "{output}")]
    ),
)
```
Below is an example of how this would be assembled.
```
print(few_shot_prompt.format(input="What's 3+3?"))
```
```
Human: 2+3
AI: 5
Human: 2+2
AI: 4
```
Assemble the final prompt template:
```
final_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a wondrous wizard of math."),
        few_shot_prompt,
        ("human", "{input}"),
    ]
)
```
```
print(few_shot_prompt.format(input="What's 3+3?"))
```
```
Human: 2+3AI: 5Human: 2+2AI: 4
```
#### Use with an LLM[](#use-with-an-llm "Direct link to Use with an LLM")
Now, you can connect your model to the few-shot prompt.
```
from langchain_community.chat_models import ChatAnthropicchain = final_prompt | ChatAnthropic(temperature=0.0)chain.invoke({"input": "What's 3+3?"})
```
```
AIMessage(content=' 3 + 3 = 6', additional_kwargs={}, example=False)
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:32.324Z",
"loadedUrl": "https://python.langchain.com/docs/modules/model_io/prompts/few_shot_examples_chat/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/modules/model_io/prompts/few_shot_examples_chat/",
"description": "This notebook covers how to use few-shot examples in chat models. There",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3738",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"few_shot_examples_chat\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:31 GMT",
"etag": "W/\"9765d0be084aa9776b93807c1248db7a\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::dhf8l-1713753931588-cb75342eae16"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/modules/model_io/prompts/few_shot_examples_chat/",
"property": "og:url"
},
{
"content": "Few-shot examples for chat models | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook covers how to use few-shot examples in chat models. There",
"property": "og:description"
}
],
"title": "Few-shot examples for chat models | 🦜️🔗 LangChain"
} | This notebook covers how to use few-shot examples in chat models. There does not appear to be solid consensus on how best to do few-shot prompting, and the optimal prompt compilation will likely vary by model. Because of this, we provide few-shot prompt templates like the FewShotChatMessagePromptTemplate as a flexible starting point, and you can modify or replace them as you see fit.
The goal of few-shot prompt templates is to dynamically select examples based on an input, and then format the examples in a final prompt to provide to the model.
Note: The following code examples are for chat models. For similar few-shot prompt examples for completion models (LLMs), see the few-shot prompt templates guide.
Fixed Examples
The most basic (and common) few-shot prompting technique is to use a fixed prompt example. This way you can select a chain, evaluate it, and avoid worrying about additional moving parts in production.
The basic components of the template are: - examples: A list of dictionary examples to include in the final prompt. - example_prompt: converts each example into 1 or more messages through its format_messages method. A common example would be to convert each example into one human message and one AI message response, or a human message followed by a function call message.
Below is a simple demonstration. First, import the modules for this example:
from langchain_core.prompts import (
ChatPromptTemplate,
FewShotChatMessagePromptTemplate,
)
Then, define the examples you’d like to include.
examples = [
{"input": "2+2", "output": "4"},
{"input": "2+3", "output": "5"},
]
Next, assemble them into the few-shot prompt template.
# This is a prompt template used to format each individual example.
example_prompt = ChatPromptTemplate.from_messages(
[
("human", "{input}"),
("ai", "{output}"),
]
)
few_shot_prompt = FewShotChatMessagePromptTemplate(
example_prompt=example_prompt,
examples=examples,
)
print(few_shot_prompt.format())
Human: 2+2
AI: 4
Human: 2+3
AI: 5
Finally, assemble your final prompt and use it with a model.
final_prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a wondrous wizard of math."),
few_shot_prompt,
("human", "{input}"),
]
)
from langchain_community.chat_models import ChatAnthropic
chain = final_prompt | ChatAnthropic(temperature=0.0)
chain.invoke({"input": "What's the square of a triangle?"})
AIMessage(content=' Triangles do not have a "square". A square refers to a shape with 4 equal sides and 4 right angles. Triangles have 3 sides and 3 angles.\n\nThe area of a triangle can be calculated using the formula:\n\nA = 1/2 * b * h\n\nWhere:\n\nA is the area \nb is the base (the length of one of the sides)\nh is the height (the length from the base to the opposite vertex)\n\nSo the area depends on the specific dimensions of the triangle. There is no single "square of a triangle". The area can vary greatly depending on the base and height measurements.', additional_kwargs={}, example=False)
Dynamic few-shot prompting
Sometimes you may want to condition which examples are shown based on the input. For this, you can replace the examples with an example_selector. The other components remain the same as above! To review, the dynamic few-shot prompt template would look like:
example_selector: responsible for selecting few-shot examples (and the order in which they are returned) for a given input. These implement the BaseExampleSelector interface. A common example is the vectorstore-backed SemanticSimilarityExampleSelector
example_prompt: converts each example into 1 or more messages through its format_messages method. A common example would be to convert each example into one human message and one AI message response, or a human message followed by a function call message.
These once again can be composed with other messages and chat templates to assemble your final prompt.
from langchain_chroma import Chroma
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_openai import OpenAIEmbeddings
Since we are using a vectorstore to select examples based on semantic similarity, we will want to first populate the store.
examples = [
{"input": "2+2", "output": "4"},
{"input": "2+3", "output": "5"},
{"input": "2+4", "output": "6"},
{"input": "What did the cow say to the moon?", "output": "nothing at all"},
{
"input": "Write me a poem about the moon",
"output": "One for the moon, and one for me, who are we to talk about the moon?",
},
]
to_vectorize = [" ".join(example.values()) for example in examples]
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_texts(to_vectorize, embeddings, metadatas=examples)
Create the example_selector
With a vectorstore created, you can create the example_selector. Here we will instruct it to fetch only the top 2 examples.
example_selector = SemanticSimilarityExampleSelector(
vectorstore=vectorstore,
k=2,
)
# The prompt template will load examples by passing the input to the `select_examples` method
example_selector.select_examples({"input": "horse"})
[{'input': 'What did the cow say to the moon?', 'output': 'nothing at all'},
{'input': '2+4', 'output': '6'}]
Create prompt template
Assemble the prompt template, using the example_selector created above.
from langchain_core.prompts import (
ChatPromptTemplate,
FewShotChatMessagePromptTemplate,
)
# Define the few-shot prompt.
few_shot_prompt = FewShotChatMessagePromptTemplate(
# The input variables select the values to pass to the example_selector
input_variables=["input"],
example_selector=example_selector,
# Define how each example will be formatted.
# In this case, each example will become 2 messages:
# 1 human, and 1 AI
example_prompt=ChatPromptTemplate.from_messages(
[("human", "{input}"), ("ai", "{output}")]
),
)
Below is an example of how this would be assembled.
print(few_shot_prompt.format(input="What's 3+3?"))
Human: 2+3
AI: 5
Human: 2+2
AI: 4
Assemble the final prompt template:
final_prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a wondrous wizard of math."),
few_shot_prompt,
("human", "{input}"),
]
)
print(few_shot_prompt.format(input="What's 3+3?"))
Human: 2+3
AI: 5
Human: 2+2
AI: 4
Use with an LLM
Now, you can connect your model to the few-shot prompt.
from langchain_community.chat_models import ChatAnthropic
chain = final_prompt | ChatAnthropic(temperature=0.0)
chain.invoke({"input": "What's 3+3?"})
AIMessage(content=' 3 + 3 = 6', additional_kwargs={}, example=False) |
https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/ | ## Example selectors
If you have a large number of examples, you may need to select which ones to include in the prompt. The Example Selector is the class responsible for doing so.
The base interface is defined as below:
```
class BaseExampleSelector(ABC): """Interface for selecting examples to include in prompts.""" @abstractmethod def select_examples(self, input_variables: Dict[str, str]) -> List[dict]: """Select which examples to use based on the inputs.""" @abstractmethod def add_example(self, example: Dict[str, str]) -> Any: """Add new example to store."""
```
The only method it needs to define is a `select_examples` method. This takes in the input variables and then returns a list of examples. It is up to each specific implementation as to how those examples are selected.
LangChain has a few different types of example selectors. For an overview of all these types, see the below table.
In this guide, we will walk through creating a custom example selector.
## Examples[](#examples "Direct link to Examples")
In order to use an example selector, we need to create a list of examples. These should generally be example inputs and outputs. For the purposes of this demo, let’s imagine we are selecting examples of how to translate English to Italian.
```
examples = [ {"input": "hi", "output": "ciao"}, {"input": "bye", "output": "arrivederci"}, {"input": "soccer", "output": "calcio"},]
```
## Custom Example Selector[](#custom-example-selector "Direct link to Custom Example Selector")
Let’s write an example selector that chooses what example to pick based on the length of the word.
```
from langchain_core.example_selectors.base import BaseExampleSelectorclass CustomExampleSelector(BaseExampleSelector): def __init__(self, examples): self.examples = examples def add_example(self, example): self.examples.append(example) def select_examples(self, input_variables): # This assumes knowledge that part of the input will be an 'input' key new_word = input_variables["input"] new_word_length = len(new_word) # Initialize variables to store the best match and its length difference best_match = None smallest_diff = float("inf") # Iterate through each example for example in self.examples: # Calculate the length difference with the first word of the example current_diff = abs(len(example["input"]) - new_word_length) # Update the best match if the current one is closer in length if current_diff < smallest_diff: smallest_diff = current_diff best_match = example return [best_match]
```
```
example_selector = CustomExampleSelector(examples)
```
```
example_selector.select_examples({"input": "okay"})
```
```
[{'input': 'bye', 'output': 'arrivederci'}]
```
```
example_selector.add_example({"input": "hand", "output": "mano"})
```
```
example_selector.select_examples({"input": "okay"})
```
```
[{'input': 'hand', 'output': 'mano'}]
```
## Use in a Prompt[](#use-in-a-prompt "Direct link to Use in a Prompt")
We can now use this example selector in a prompt
```
from langchain_core.prompts.few_shot import FewShotPromptTemplatefrom langchain_core.prompts.prompt import PromptTemplateexample_prompt = PromptTemplate.from_template("Input: {input} -> Output: {output}")
```
```
prompt = FewShotPromptTemplate( example_selector=example_selector, example_prompt=example_prompt, suffix="Input: {input} -> Output:", prefix="Translate the following words from English to Italian:", input_variables=["input"],)print(prompt.format(input="word"))
```
```
Translate the following words from English to Italian:Input: hand -> Output: manoInput: word -> Output:
```
## Example Selector Types[](#example-selector-types "Direct link to Example Selector Types")
| Name | Description |
| --- | --- |
| Similarity | Uses semantic similarity between inputs and examples to decide which examples to choose. |
| MMR | Uses Max Marginal Relevance between inputs and examples to decide which examples to choose. |
| Length | Selects examples based on how many can fit within a certain length |
| Ngram | Uses ngram overlap between inputs and examples to decide which examples to choose. |
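As a quick illustration of one of the built-in selectors above, here is a minimal sketch of the length-based selector, reusing the `examples` and `example_prompt` defined earlier. It assumes `LengthBasedExampleSelector` is importable from `langchain_core.example_selectors`; adjust the import path for your installed version if needed.

```
from langchain_core.example_selectors import LengthBasedExampleSelector  # assumed import path
from langchain_core.prompts.few_shot import FewShotPromptTemplate
from langchain_core.prompts.prompt import PromptTemplate

example_prompt = PromptTemplate.from_template("Input: {input} -> Output: {output}")

# Keeps adding examples while the rendered prompt stays under max_length
# (measured in words by default).
example_selector = LengthBasedExampleSelector(
    examples=examples,  # the English -> Italian examples defined earlier
    example_prompt=example_prompt,
    max_length=10,
)

prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Translate the following words from English to Italian:",
    suffix="Input: {input} -> Output:",
    input_variables=["input"],
)
print(prompt.format(input="cat"))
```

Because the selector stops once adding another example would exceed `max_length`, longer inputs leave room for fewer examples in the final prompt.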
* * *
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:32.615Z",
"loadedUrl": "https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/",
"description": "If you have a large number of examples, you may need to select which",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"example_selectors\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:32 GMT",
"etag": "W/\"8d6bb2df2a26125a977124cd45c93466\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::rcjd5-1713753931891-763ff3415fce"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/",
"property": "og:url"
},
{
"content": "Example selectors | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "If you have a large number of examples, you may need to select which",
"property": "og:description"
}
],
"title": "Example selectors | 🦜️🔗 LangChain"
} | Example selectors
If you have a large number of examples, you may need to select which ones to include in the prompt. The Example Selector is the class responsible for doing so.
The base interface is defined as below:
class BaseExampleSelector(ABC):
"""Interface for selecting examples to include in prompts."""
@abstractmethod
def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
"""Select which examples to use based on the inputs."""
@abstractmethod
def add_example(self, example: Dict[str, str]) -> Any:
"""Add new example to store."""
The only method it needs to define is a select_examples method. This takes in the input variables and then returns a list of examples. It is up to each specific implementation as to how those examples are selected.
LangChain has a few different types of example selectors. For an overview of all these types, see the below table.
In this guide, we will walk through creating a custom example selector.
Examples
In order to use an example selector, we need to create a list of examples. These should generally be example inputs and outputs. For the purposes of this demo, let’s imagine we are selecting examples of how to translate English to Italian.
examples = [
{"input": "hi", "output": "ciao"},
{"input": "bye", "output": "arrivaderci"},
{"input": "soccer", "output": "calcio"},
]
Custom Example Selector
Let’s write an example selector that chooses what example to pick based on the length of the word.
from langchain_core.example_selectors.base import BaseExampleSelector
class CustomExampleSelector(BaseExampleSelector):
def __init__(self, examples):
self.examples = examples
def add_example(self, example):
self.examples.append(example)
def select_examples(self, input_variables):
        # This assumes knowledge that part of the input will be an 'input' key
new_word = input_variables["input"]
new_word_length = len(new_word)
# Initialize variables to store the best match and its length difference
best_match = None
smallest_diff = float("inf")
# Iterate through each example
for example in self.examples:
# Calculate the length difference with the first word of the example
current_diff = abs(len(example["input"]) - new_word_length)
# Update the best match if the current one is closer in length
if current_diff < smallest_diff:
smallest_diff = current_diff
best_match = example
return [best_match]
example_selector = CustomExampleSelector(examples)
example_selector.select_examples({"input": "okay"})
[{'input': 'bye', 'output': 'arrivederci'}]
example_selector.add_example({"input": "hand", "output": "mano"})
example_selector.select_examples({"input": "okay"})
[{'input': 'hand', 'output': 'mano'}]
Use in a Prompt
We can now use this example selector in a prompt
from langchain_core.prompts.few_shot import FewShotPromptTemplate
from langchain_core.prompts.prompt import PromptTemplate
example_prompt = PromptTemplate.from_template("Input: {input} -> Output: {output}")
prompt = FewShotPromptTemplate(
example_selector=example_selector,
example_prompt=example_prompt,
suffix="Input: {input} -> Output:",
prefix="Translate the following words from English to Italain:",
input_variables=["input"],
)
print(prompt.format(input="word"))
Translate the following words from English to Italian:
Input: hand -> Output: mano
Input: word -> Output:
Example Selector Types
Name Description
Similarity Uses semantic similarity between inputs and examples to decide which examples to choose.
MMR Uses Max Marginal Relevance between inputs and examples to decide which examples to choose.
Length Selects examples based on how many can fit within a certain length
Ngram Uses ngram overlap between inputs and examples to decide which examples to choose.
|
https://python.langchain.com/docs/modules/model_io/prompts/quick_start/ | ## Quick reference
Prompt templates are predefined recipes for generating prompts for language models.
A template may include instructions, few-shot examples, and specific context and questions appropriate for a given task.
LangChain provides tooling to create and work with prompt templates.
LangChain strives to create model agnostic templates to make it easy to reuse existing templates across different language models.
Typically, language models expect the prompt to either be a string or else a list of chat messages.
## `PromptTemplate`[](#prompttemplate "Direct link to prompttemplate")
Use `PromptTemplate` to create a template for a string prompt.
By default, `PromptTemplate` uses [Python’s str.format](https://docs.python.org/3/library/stdtypes.html#str.format) syntax for templating.
```
from langchain_core.prompts import PromptTemplateprompt_template = PromptTemplate.from_template( "Tell me a {adjective} joke about {content}.")prompt_template.format(adjective="funny", content="chickens")
```
```
'Tell me a funny joke about chickens.'
```
The template supports any number of variables, including no variables:
```
from langchain_core.prompts import PromptTemplateprompt_template = PromptTemplate.from_template("Tell me a joke")prompt_template.format()
```
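Since there are no variables to substitute, this simply returns the string `'Tell me a joke'`.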
You can create custom prompt templates that format the prompt in any way you want. For more information, see [Prompt Template Composition](https://python.langchain.com/docs/modules/model_io/prompts/composition/).
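As a small taste of the composition features covered in that guide, string prompt templates can be concatenated with plain strings using `+`. This is a sketch based on the composition guide, not an exhaustive example:

```
from langchain_core.prompts import PromptTemplate

# Concatenating a template with plain strings builds a larger template
# (behavior described in the linked composition guide).
prompt = (
    PromptTemplate.from_template("Tell me a joke about {topic}")
    + ", make it funny"
    + "\n\nand in {language}"
)
prompt.format(topic="sports", language="spanish")
```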
## `ChatPromptTemplate`[](#chatprompttemplate "Direct link to chatprompttemplate")
The prompt to [chat models](https://python.langchain.com/docs/modules/model_io/chat/) is a list of [chat messages](https://python.langchain.com/docs/modules/model_io/chat/message_types/).
Each chat message is associated with content, and an additional parameter called `role`. For example, in the OpenAI [Chat Completions API](https://platform.openai.com/docs/guides/chat/introduction), a chat message can be associated with an AI assistant, a human or a system role.
Create a chat prompt template like this:
```
from langchain_core.prompts import ChatPromptTemplatechat_template = ChatPromptTemplate.from_messages( [ ("system", "You are a helpful AI bot. Your name is {name}."), ("human", "Hello, how are you doing?"), ("ai", "I'm doing well, thanks!"), ("human", "{user_input}"), ])messages = chat_template.format_messages(name="Bob", user_input="What is your name?")
```
Piping these formatted messages into LangChain’s `ChatOpenAI` chat model class is roughly equivalent to the following using the OpenAI client directly:
```
from openai import OpenAIclient = OpenAI()response = client.chat.completions.create( model="gpt-3.5-turbo", messages=[ {"role": "system", "content": "You are a helpful AI bot. Your name is Bob."}, {"role": "user", "content": "Hello, how are you doing?"}, {"role": "assistant", "content": "I'm doing well, thanks!"}, {"role": "user", "content": "What is your name?"}, ],)
```
The `ChatPromptTemplate.from_messages` static method accepts a variety of message representations and is a convenient way to format input to chat models with exactly the messages you want.
For example, in addition to using the 2-tuple representation of (type, content) used above, you could pass in an instance of `MessagePromptTemplate` or `BaseMessage`.
```
from langchain_core.messages import SystemMessagefrom langchain_core.prompts import HumanMessagePromptTemplatechat_template = ChatPromptTemplate.from_messages( [ SystemMessage( content=( "You are a helpful assistant that re-writes the user's text to " "sound more upbeat." ) ), HumanMessagePromptTemplate.from_template("{text}"), ])messages = chat_template.format_messages(text="I don't like eating tasty things")print(messages)
```
```
[SystemMessage(content="You are a helpful assistant that re-writes the user's text to sound more upbeat."), HumanMessage(content="I don't like eating tasty things")]
```
This provides you with a lot of flexibility in how you construct your chat prompts.
## Message Prompts[](#message-prompts "Direct link to Message Prompts")
LangChain provides different types of `MessagePromptTemplate`. The most commonly used are `AIMessagePromptTemplate`, `SystemMessagePromptTemplate` and `HumanMessagePromptTemplate`, which create an AI message, system message and human message respectively. You can read more about the [different types of messages here](https://python.langchain.com/docs/modules/model_io/chat/message_types/).
In cases where the chat model supports taking chat messages with an arbitrary role, you can use `ChatMessagePromptTemplate`, which allows the user to specify the role name.
```
from langchain_core.prompts import ChatMessagePromptTemplateprompt = "May the {subject} be with you"chat_message_prompt = ChatMessagePromptTemplate.from_template( role="Jedi", template=prompt)chat_message_prompt.format(subject="force")
```
```
ChatMessage(content='May the force be with you', role='Jedi')
```
## `MessagesPlaceholder`[](#messagesplaceholder "Direct link to messagesplaceholder")
LangChain also provides `MessagesPlaceholder`, which gives you full control over which messages are rendered during formatting. This can be useful when you are uncertain of what role you should be using for your message prompt templates or when you wish to insert a list of messages during formatting.
```
from langchain_core.prompts import ( ChatPromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder,)human_prompt = "Summarize our conversation so far in {word_count} words."human_message_template = HumanMessagePromptTemplate.from_template(human_prompt)chat_prompt = ChatPromptTemplate.from_messages( [MessagesPlaceholder(variable_name="conversation"), human_message_template])
```
```
from langchain_core.messages import AIMessage, HumanMessagehuman_message = HumanMessage(content="What is the best way to learn programming?")ai_message = AIMessage( content="""\1. Choose a programming language: Decide on a programming language that you want to learn.2. Start with the basics: Familiarize yourself with the basic programming concepts such as variables, data types and control structures.3. Practice, practice, practice: The best way to learn programming is through hands-on experience\""")chat_prompt.format_prompt( conversation=[human_message, ai_message], word_count="10").to_messages()
```
```
[HumanMessage(content='What is the best way to learn programming?'), AIMessage(content='1. Choose a programming language: Decide on a programming language that you want to learn.\n\n2. Start with the basics: Familiarize yourself with the basic programming concepts such as variables, data types and control structures.\n\n3. Practice, practice, practice: The best way to learn programming is through hands-on experience'), HumanMessage(content='Summarize our conversation so far in 10 words.')]
```
The full list of message prompt template types includes:
* [AIMessagePromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.AIMessagePromptTemplate.html), for AI assistant messages;
* [SystemMessagePromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.SystemMessagePromptTemplate.html), for system messages;
* [HumanMessagePromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.HumanMessagePromptTemplate.html), for user messages;
* [ChatMessagePromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatMessagePromptTemplate.html), for messages with arbitrary roles;
* [MessagesPlaceholder](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.MessagesPlaceholder.html), which accommodates a list of messages.
## LCEL[](#lcel "Direct link to LCEL")
`PromptTemplate` and `ChatPromptTemplate` implement the [Runnable interface](https://python.langchain.com/docs/expression_language/interface/), the basic building block of the [LangChain Expression Language (LCEL)](https://python.langchain.com/docs/expression_language/). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_log` calls.
`PromptTemplate` accepts a dictionary (of the prompt variables) and returns a `StringPromptValue`. A `ChatPromptTemplate` accepts a dictionary and returns a `ChatPromptValue`.
```
prompt_template = PromptTemplate.from_template( "Tell me a {adjective} joke about {content}.")prompt_val = prompt_template.invoke({"adjective": "funny", "content": "chickens"})prompt_val
```
```
StringPromptValue(text='Tell me a funny joke about chickens.')
```
```
'Tell me a funny joke about chickens.'
```
```
[HumanMessage(content='Tell me a funny joke about chickens.')]
```
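The bare string and message-list outputs above were presumably produced by converting the prompt value with `to_string()` and `to_messages()`; the original calls are not shown, so this is an assumption based on the surrounding text:

```
prompt_val.to_string()
# -> 'Tell me a funny joke about chickens.'

prompt_val.to_messages()
# -> [HumanMessage(content='Tell me a funny joke about chickens.')]
```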
```
chat_template = ChatPromptTemplate.from_messages( [ SystemMessage( content=( "You are a helpful assistant that re-writes the user's text to " "sound more upbeat." ) ), HumanMessagePromptTemplate.from_template("{text}"), ])chat_val = chat_template.invoke({"text": "i dont like eating tasty things."})
```
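Likewise, the two outputs below appear to correspond to `chat_val.to_messages()` and `chat_val.to_string()`, respectively.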
```
[SystemMessage(content="You are a helpful assistant that re-writes the user's text to sound more upbeat."), HumanMessage(content='i dont like eating tasty things.')]
```
```
"System: You are a helpful assistant that re-writes the user's text to sound more upbeat.\nHuman: i dont like eating tasty things."
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:33.086Z",
"loadedUrl": "https://python.langchain.com/docs/modules/model_io/prompts/quick_start/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/modules/model_io/prompts/quick_start/",
"description": "quick-reference}",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "6669",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"quick_start\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:33 GMT",
"etag": "W/\"034ff86a8ca95d973e937e41b06a5fe9\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::dhf8l-1713753933020-ac43e4e1f1c4"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/modules/model_io/prompts/quick_start/",
"property": "og:url"
},
{
"content": "Quick reference | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "quick-reference}",
"property": "og:description"
}
],
"title": "Quick reference | 🦜️🔗 LangChain"
} | Quick reference
Prompt templates are predefined recipes for generating prompts for language models.
A template may include instructions, few-shot examples, and specific context and questions appropriate for a given task.
LangChain provides tooling to create and work with prompt templates.
LangChain strives to create model agnostic templates to make it easy to reuse existing templates across different language models.
Typically, language models expect the prompt to either be a string or else a list of chat messages.
PromptTemplate
Use PromptTemplate to create a template for a string prompt.
By default, PromptTemplate uses Python’s str.format syntax for templating.
from langchain_core.prompts import PromptTemplate
prompt_template = PromptTemplate.from_template(
"Tell me a {adjective} joke about {content}."
)
prompt_template.format(adjective="funny", content="chickens")
'Tell me a funny joke about chickens.'
The template supports any number of variables, including no variables:
from langchain_core.prompts import PromptTemplate
prompt_template = PromptTemplate.from_template("Tell me a joke")
prompt_template.format()
You can create custom prompt templates that format the prompt in any way you want. For more information, see Prompt Template Composition.
ChatPromptTemplate
The prompt to chat models is a list of chat messages.
Each chat message is associated with content, and an additional parameter called role. For example, in the OpenAI Chat Completions API, a chat message can be associated with an AI assistant, a human or a system role.
Create a chat prompt template like this:
from langchain_core.prompts import ChatPromptTemplate
chat_template = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful AI bot. Your name is {name}."),
("human", "Hello, how are you doing?"),
("ai", "I'm doing well, thanks!"),
("human", "{user_input}"),
]
)
messages = chat_template.format_messages(name="Bob", user_input="What is your name?")
Piping these formatted messages into LangChain’s ChatOpenAI chat model class is roughly equivalent to the following using the OpenAI client directly:
from openai import OpenAI
client = OpenAI()
response = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": "You are a helpful AI bot. Your name is Bob."},
{"role": "user", "content": "Hello, how are you doing?"},
{"role": "assistant", "content": "I'm doing well, thanks!"},
{"role": "user", "content": "What is your name?"},
],
)
The ChatPromptTemplate.from_messages static method accepts a variety of message representations and is a convenient way to format input to chat models with exactly the messages you want.
For example, in addition to using the 2-tuple representation of (type, content) used above, you could pass in an instance of MessagePromptTemplate or BaseMessage.
from langchain_core.messages import SystemMessage
from langchain_core.prompts import HumanMessagePromptTemplate
chat_template = ChatPromptTemplate.from_messages(
[
SystemMessage(
content=(
"You are a helpful assistant that re-writes the user's text to "
"sound more upbeat."
)
),
HumanMessagePromptTemplate.from_template("{text}"),
]
)
messages = chat_template.format_messages(text="I don't like eating tasty things")
print(messages)
[SystemMessage(content="You are a helpful assistant that re-writes the user's text to sound more upbeat."), HumanMessage(content="I don't like eating tasty things")]
This provides you with a lot of flexibility in how you construct your chat prompts.
Message Prompts
LangChain provides different types of MessagePromptTemplate. The most commonly used are AIMessagePromptTemplate, SystemMessagePromptTemplate and HumanMessagePromptTemplate, which create an AI message, system message and human message respectively. You can read more about the different types of messages here.
In cases where the chat model supports taking chat messages with an arbitrary role, you can use ChatMessagePromptTemplate, which allows the user to specify the role name.
from langchain_core.prompts import ChatMessagePromptTemplate
prompt = "May the {subject} be with you"
chat_message_prompt = ChatMessagePromptTemplate.from_template(
role="Jedi", template=prompt
)
chat_message_prompt.format(subject="force")
ChatMessage(content='May the force be with you', role='Jedi')
MessagesPlaceholder
LangChain also provides MessagesPlaceholder, which gives you full control over which messages are rendered during formatting. This can be useful when you are uncertain of what role you should be using for your message prompt templates or when you wish to insert a list of messages during formatting.
from langchain_core.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
)
human_prompt = "Summarize our conversation so far in {word_count} words."
human_message_template = HumanMessagePromptTemplate.from_template(human_prompt)
chat_prompt = ChatPromptTemplate.from_messages(
[MessagesPlaceholder(variable_name="conversation"), human_message_template]
)
from langchain_core.messages import AIMessage, HumanMessage
human_message = HumanMessage(content="What is the best way to learn programming?")
ai_message = AIMessage(
content="""\
1. Choose a programming language: Decide on a programming language that you want to learn.
2. Start with the basics: Familiarize yourself with the basic programming concepts such as variables, data types and control structures.
3. Practice, practice, practice: The best way to learn programming is through hands-on experience\
"""
)
chat_prompt.format_prompt(
conversation=[human_message, ai_message], word_count="10"
).to_messages()
[HumanMessage(content='What is the best way to learn programming?'),
AIMessage(content='1. Choose a programming language: Decide on a programming language that you want to learn.\n\n2. Start with the basics: Familiarize yourself with the basic programming concepts such as variables, data types and control structures.\n\n3. Practice, practice, practice: The best way to learn programming is through hands-on experience'),
HumanMessage(content='Summarize our conversation so far in 10 words.')]
The full list of message prompt template types includes:
AIMessagePromptTemplate, for AI assistant messages;
SystemMessagePromptTemplate, for system messages;
HumanMessagePromptTemplate, for user messages;
ChatMessagePromptTemplate, for messages with arbitrary roles;
MessagesPlaceholder, which accommodates a list of messages.
LCEL
PromptTemplate and ChatPromptTemplate implement the Runnable interface, the basic building block of the LangChain Expression Language (LCEL). This means they support invoke, ainvoke, stream, astream, batch, abatch, astream_log calls.
PromptTemplate accepts a dictionary (of the prompt variables) and returns a StringPromptValue. A ChatPromptTemplate accepts a dictionary and returns a ChatPromptValue.
prompt_template = PromptTemplate.from_template(
"Tell me a {adjective} joke about {content}."
)
prompt_val = prompt_template.invoke({"adjective": "funny", "content": "chickens"})
prompt_val
StringPromptValue(text='Tell me a funny joke about chickens.')
'Tell me a funny joke about chickens.'
[HumanMessage(content='Tell me a funny joke about chickens.')]
chat_template = ChatPromptTemplate.from_messages(
[
SystemMessage(
content=(
"You are a helpful assistant that re-writes the user's text to "
"sound more upbeat."
)
),
HumanMessagePromptTemplate.from_template("{text}"),
]
)
chat_val = chat_template.invoke({"text": "i dont like eating tasty things."})
[SystemMessage(content="You are a helpful assistant that re-writes the user's text to sound more upbeat."),
HumanMessage(content='i dont like eating tasty things.')]
"System: You are a helpful assistant that re-writes the user's text to sound more upbeat.\nHuman: i dont like eating tasty things." |
https://python.langchain.com/docs/modules/model_io/prompts/few_shot_examples/ | ## Few-shot prompt templates
In this tutorial, we’ll learn how to create a prompt template that uses few-shot examples. A few-shot prompt template can be constructed from either a set of examples, or from an Example Selector object.
### Use Case[](#use-case "Direct link to Use Case")
In this tutorial, we’ll configure few-shot examples for self-ask with search.
## Using an example set[](#using-an-example-set "Direct link to Using an example set")
### Create the example set[](#create-the-example-set "Direct link to Create the example set")
To get started, create a list of few-shot examples. Each example should be a dictionary with the keys being the input variables and the values being the values for those input variables.
```
from langchain_core.prompts.few_shot import FewShotPromptTemplatefrom langchain_core.prompts.prompt import PromptTemplateexamples = [ { "question": "Who lived longer, Muhammad Ali or Alan Turing?", "answer": """Are follow up questions needed here: Yes.Follow up: How old was Muhammad Ali when he died?Intermediate answer: Muhammad Ali was 74 years old when he died.Follow up: How old was Alan Turing when he died?Intermediate answer: Alan Turing was 41 years old when he died.So the final answer is: Muhammad Ali""", }, { "question": "When was the founder of craigslist born?", "answer": """Are follow up questions needed here: Yes.Follow up: Who was the founder of craigslist?Intermediate answer: Craigslist was founded by Craig Newmark.Follow up: When was Craig Newmark born?Intermediate answer: Craig Newmark was born on December 6, 1952.So the final answer is: December 6, 1952""", }, { "question": "Who was the maternal grandfather of George Washington?", "answer": """Are follow up questions needed here: Yes.Follow up: Who was the mother of George Washington?Intermediate answer: The mother of George Washington was Mary Ball Washington.Follow up: Who was the father of Mary Ball Washington?Intermediate answer: The father of Mary Ball Washington was Joseph Ball.So the final answer is: Joseph Ball""", }, { "question": "Are both the directors of Jaws and Casino Royale from the same country?", "answer": """Are follow up questions needed here: Yes.Follow up: Who is the director of Jaws?Intermediate Answer: The director of Jaws is Steven Spielberg.Follow up: Where is Steven Spielberg from?Intermediate Answer: The United States.Follow up: Who is the director of Casino Royale?Intermediate Answer: The director of Casino Royale is Martin Campbell.Follow up: Where is Martin Campbell from?Intermediate Answer: New Zealand.So the final answer is: No""", },]
```
### Create a formatter for the few-shot examples[](#create-a-formatter-for-the-few-shot-examples "Direct link to Create a formatter for the few-shot examples")
Configure a formatter that will format the few-shot examples into a string. This formatter should be a `PromptTemplate` object.
```
example_prompt = PromptTemplate( input_variables=["question", "answer"], template="Question: {question}\n{answer}")print(example_prompt.format(**examples[0]))
```
```
Question: Who lived longer, Muhammad Ali or Alan Turing?Are follow up questions needed here: Yes.Follow up: How old was Muhammad Ali when he died?Intermediate answer: Muhammad Ali was 74 years old when he died.Follow up: How old was Alan Turing when he died?Intermediate answer: Alan Turing was 41 years old when he died.So the final answer is: Muhammad Ali
```
### Feed examples and formatter to `FewShotPromptTemplate`[](#feed-examples-and-formatter-to-fewshotprompttemplate "Direct link to feed-examples-and-formatter-to-fewshotprompttemplate")
Finally, create a `FewShotPromptTemplate` object. This object takes in the few-shot examples and the formatter for the few-shot examples.
```
prompt = FewShotPromptTemplate( examples=examples, example_prompt=example_prompt, suffix="Question: {input}", input_variables=["input"],)print(prompt.format(input="Who was the father of Mary Ball Washington?"))
```
```
Question: Who lived longer, Muhammad Ali or Alan Turing?Are follow up questions needed here: Yes.Follow up: How old was Muhammad Ali when he died?Intermediate answer: Muhammad Ali was 74 years old when he died.Follow up: How old was Alan Turing when he died?Intermediate answer: Alan Turing was 41 years old when he died.So the final answer is: Muhammad AliQuestion: When was the founder of craigslist born?Are follow up questions needed here: Yes.Follow up: Who was the founder of craigslist?Intermediate answer: Craigslist was founded by Craig Newmark.Follow up: When was Craig Newmark born?Intermediate answer: Craig Newmark was born on December 6, 1952.So the final answer is: December 6, 1952Question: Who was the maternal grandfather of George Washington?Are follow up questions needed here: Yes.Follow up: Who was the mother of George Washington?Intermediate answer: The mother of George Washington was Mary Ball Washington.Follow up: Who was the father of Mary Ball Washington?Intermediate answer: The father of Mary Ball Washington was Joseph Ball.So the final answer is: Joseph BallQuestion: Are both the directors of Jaws and Casino Royale from the same country?Are follow up questions needed here: Yes.Follow up: Who is the director of Jaws?Intermediate Answer: The director of Jaws is Steven Spielberg.Follow up: Where is Steven Spielberg from?Intermediate Answer: The United States.Follow up: Who is the director of Casino Royale?Intermediate Answer: The director of Casino Royale is Martin Campbell.Follow up: Where is Martin Campbell from?Intermediate Answer: New Zealand.So the final answer is: NoQuestion: Who was the father of Mary Ball Washington?
```
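To actually run this few-shot prompt against a model, you can pipe it into an LLM with LCEL. Below is a minimal sketch, assuming the `langchain_openai` package is installed and an OpenAI API key is configured:

```
from langchain_openai import OpenAI

llm = OpenAI(temperature=0)
chain = prompt | llm
print(chain.invoke({"input": "Who was the father of Mary Ball Washington?"}))
```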
## Using an example selector[](#using-an-example-selector "Direct link to Using an example selector")
### Feed examples into `ExampleSelector`[](#feed-examples-into-exampleselector "Direct link to feed-examples-into-exampleselector")
We will reuse the example set and the formatter from the previous section. However, instead of feeding the examples directly into the `FewShotPromptTemplate` object, we will feed them into an `ExampleSelector` object.
In this tutorial, we will use the `SemanticSimilarityExampleSelector` class. This class selects few-shot examples based on their similarity to the input. It uses an embedding model to compute the similarity between the input and the few-shot examples, as well as a vector store to perform the nearest neighbor search.
```
from langchain_chroma import Chromafrom langchain_core.example_selectors import SemanticSimilarityExampleSelectorfrom langchain_openai import OpenAIEmbeddingsexample_selector = SemanticSimilarityExampleSelector.from_examples( # This is the list of examples available to select from. examples, # This is the embedding class used to produce embeddings which are used to measure semantic similarity. OpenAIEmbeddings(), # This is the VectorStore class that is used to store the embeddings and do a similarity search over. Chroma, # This is the number of examples to produce. k=1,)# Select the most similar example to the input.question = "Who was the father of Mary Ball Washington?"selected_examples = example_selector.select_examples({"question": question})print(f"Examples most similar to the input: {question}")for example in selected_examples: print("\n") for k, v in example.items(): print(f"{k}: {v}")
```
```
Examples most similar to the input: Who was the father of Mary Ball Washington?answer: Are follow up questions needed here: Yes.Follow up: Who was the mother of George Washington?Intermediate answer: The mother of George Washington was Mary Ball Washington.Follow up: Who was the father of Mary Ball Washington?Intermediate answer: The father of Mary Ball Washington was Joseph Ball.So the final answer is: Joseph Ballquestion: Who was the maternal grandfather of George Washington?
```
### Feed example selector into `FewShotPromptTemplate`[](#feed-example-selector-into-fewshotprompttemplate "Direct link to feed-example-selector-into-fewshotprompttemplate")
Finally, create a `FewShotPromptTemplate` object. This object takes in the example selector and the formatter for the few-shot examples.
```
prompt = FewShotPromptTemplate( example_selector=example_selector, example_prompt=example_prompt, suffix="Question: {input}", input_variables=["input"],)print(prompt.format(input="Who was the father of Mary Ball Washington?"))
```
```
Question: Who was the maternal grandfather of George Washington?Are follow up questions needed here: Yes.Follow up: Who was the mother of George Washington?Intermediate answer: The mother of George Washington was Mary Ball Washington.Follow up: Who was the father of Mary Ball Washington?Intermediate answer: The father of Mary Ball Washington was Joseph Ball.So the final answer is: Joseph BallQuestion: Who was the father of Mary Ball Washington?
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:33.401Z",
"loadedUrl": "https://python.langchain.com/docs/modules/model_io/prompts/few_shot_examples/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/modules/model_io/prompts/few_shot_examples/",
"description": "In this tutorial, we’ll learn how to create a prompt template that uses",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3740",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"few_shot_examples\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:32 GMT",
"etag": "W/\"bed5126508285cf6da58adcf2427735d\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::kt9bz-1713753932975-9d4d4c70e884"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/modules/model_io/prompts/few_shot_examples/",
"property": "og:url"
},
{
"content": "Few-shot prompt templates | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "In this tutorial, we’ll learn how to create a prompt template that uses",
"property": "og:description"
}
],
"title": "Few-shot prompt templates | 🦜️🔗 LangChain"
} | Few-shot prompt templates
In this tutorial, we’ll learn how to create a prompt template that uses few-shot examples. A few-shot prompt template can be constructed from either a set of examples, or from an Example Selector object.
Use Case
In this tutorial, we’ll configure few-shot examples for self-ask with search.
Using an example set
Create the example set
To get started, create a list of few-shot examples. Each example should be a dictionary with the keys being the input variables and the values being the values for those input variables.
from langchain_core.prompts.few_shot import FewShotPromptTemplate
from langchain_core.prompts.prompt import PromptTemplate
examples = [
{
"question": "Who lived longer, Muhammad Ali or Alan Turing?",
"answer": """
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
""",
},
{
"question": "When was the founder of craigslist born?",
"answer": """
Are follow up questions needed here: Yes.
Follow up: Who was the founder of craigslist?
Intermediate answer: Craigslist was founded by Craig Newmark.
Follow up: When was Craig Newmark born?
Intermediate answer: Craig Newmark was born on December 6, 1952.
So the final answer is: December 6, 1952
""",
},
{
"question": "Who was the maternal grandfather of George Washington?",
"answer": """
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
""",
},
{
"question": "Are both the directors of Jaws and Casino Royale from the same country?",
"answer": """
Are follow up questions needed here: Yes.
Follow up: Who is the director of Jaws?
Intermediate Answer: The director of Jaws is Steven Spielberg.
Follow up: Where is Steven Spielberg from?
Intermediate Answer: The United States.
Follow up: Who is the director of Casino Royale?
Intermediate Answer: The director of Casino Royale is Martin Campbell.
Follow up: Where is Martin Campbell from?
Intermediate Answer: New Zealand.
So the final answer is: No
""",
},
]
Create a formatter for the few-shot examples
Configure a formatter that will format the few-shot examples into a string. This formatter should be a PromptTemplate object.
example_prompt = PromptTemplate(
input_variables=["question", "answer"], template="Question: {question}\n{answer}"
)
print(example_prompt.format(**examples[0]))
Question: Who lived longer, Muhammad Ali or Alan Turing?
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
Feed examples and formatter to FewShotPromptTemplate
Finally, create a FewShotPromptTemplate object. This object takes in the few-shot examples and the formatter for the few-shot examples.
prompt = FewShotPromptTemplate(
examples=examples,
example_prompt=example_prompt,
suffix="Question: {input}",
input_variables=["input"],
)
print(prompt.format(input="Who was the father of Mary Ball Washington?"))
Question: Who lived longer, Muhammad Ali or Alan Turing?
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate answer: Alan Turing was 41 years old when he died.
So the final answer is: Muhammad Ali
Question: When was the founder of craigslist born?
Are follow up questions needed here: Yes.
Follow up: Who was the founder of craigslist?
Intermediate answer: Craigslist was founded by Craig Newmark.
Follow up: When was Craig Newmark born?
Intermediate answer: Craig Newmark was born on December 6, 1952.
So the final answer is: December 6, 1952
Question: Who was the maternal grandfather of George Washington?
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
Question: Are both the directors of Jaws and Casino Royale from the same country?
Are follow up questions needed here: Yes.
Follow up: Who is the director of Jaws?
Intermediate Answer: The director of Jaws is Steven Spielberg.
Follow up: Where is Steven Spielberg from?
Intermediate Answer: The United States.
Follow up: Who is the director of Casino Royale?
Intermediate Answer: The director of Casino Royale is Martin Campbell.
Follow up: Where is Martin Campbell from?
Intermediate Answer: New Zealand.
So the final answer is: No
Question: Who was the father of Mary Ball Washington?
Using an example selector
Feed examples into ExampleSelector
We will reuse the example set and the formatter from the previous section. However, instead of feeding the examples directly into the FewShotPromptTemplate object, we will feed them into an ExampleSelector object.
In this tutorial, we will use the SemanticSimilarityExampleSelector class. This class selects few-shot examples based on their similarity to the input. It uses an embedding model to compute the similarity between the input and the few-shot examples, as well as a vector store to perform the nearest neighbor search.
from langchain_chroma import Chroma
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_openai import OpenAIEmbeddings
example_selector = SemanticSimilarityExampleSelector.from_examples(
# This is the list of examples available to select from.
examples,
# This is the embedding class used to produce embeddings which are used to measure semantic similarity.
OpenAIEmbeddings(),
# This is the VectorStore class that is used to store the embeddings and do a similarity search over.
Chroma,
# This is the number of examples to produce.
k=1,
)
# Select the most similar example to the input.
question = "Who was the father of Mary Ball Washington?"
selected_examples = example_selector.select_examples({"question": question})
print(f"Examples most similar to the input: {question}")
for example in selected_examples:
print("\n")
for k, v in example.items():
print(f"{k}: {v}")
Examples most similar to the input: Who was the father of Mary Ball Washington?
answer:
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
question: Who was the maternal grandfather of George Washington?
Feed example selector into FewShotPromptTemplate
Finally, create a FewShotPromptTemplate object. This object takes in the example selector and the formatter for the few-shot examples.
prompt = FewShotPromptTemplate(
example_selector=example_selector,
example_prompt=example_prompt,
suffix="Question: {input}",
input_variables=["input"],
)
print(prompt.format(input="Who was the father of Mary Ball Washington?"))
Question: Who was the maternal grandfather of George Washington?
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer is: Joseph Ball
Question: Who was the father of Mary Ball Washington? |
https://python.langchain.com/docs/modules/model_io/prompts/partial/ | ## Partial prompt templates
Like other methods, it can make sense to “partial” a prompt template - e.g. pass in a subset of the required values, so as to create a new prompt template which expects only the remaining subset of values.
LangChain supports this in two ways: 1. Partial formatting with string values. 2. Partial formatting with functions that return string values.
These two different ways support different use cases. In the examples below, we go over the motivations for both use cases as well as how to do it in LangChain.
## Partial with strings[](#partial-with-strings "Direct link to Partial with strings")
One common use case for wanting to partial a prompt template is if you get some of the variables before others. For example, suppose you have a prompt template that requires two variables, `foo` and `bar`. If you get the `foo` value early on in the chain, but the `bar` value later, it can be annoying to wait until you have both variables in the same place to pass them to the prompt template. Instead, you can partial the prompt template with the `foo` value, and then pass the partialed prompt template along and just use that. Below is an example of doing this:
```
from langchain_core.prompts import PromptTemplateprompt = PromptTemplate.from_template("{foo}{bar}")partial_prompt = prompt.partial(foo="foo")print(partial_prompt.format(bar="baz"))
```
You can also just initialize the prompt with the partialed variables.
```
prompt = PromptTemplate( template="{foo}{bar}", input_variables=["bar"], partial_variables={"foo": "foo"})print(prompt.format(bar="baz"))
```
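In both cases, formatting produces the string `foobaz`.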
## Partial with functions[](#partial-with-functions "Direct link to Partial with functions")
The other common use is to partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. You can’t hard code it in the prompt, and passing it along with the other input variables is a bit annoying. In this case, it’s very handy to be able to partial the prompt with a function that always returns the current date.
```
from datetime import datetime


def _get_datetime():
    now = datetime.now()
    return now.strftime("%m/%d/%Y, %H:%M:%S")
```
```
prompt = PromptTemplate(
    template="Tell me a {adjective} joke about the day {date}",
    input_variables=["adjective", "date"],
)
partial_prompt = prompt.partial(date=_get_datetime)
print(partial_prompt.format(adjective="funny"))
```
```
Tell me a funny joke about the day 12/27/2023, 10:45:22
```
You can also just initialize the prompt with the partialed variables, which often makes more sense in this workflow.
```
prompt = PromptTemplate(
    template="Tell me a {adjective} joke about the day {date}",
    input_variables=["adjective"],
    partial_variables={"date": _get_datetime},
)
print(prompt.format(adjective="funny"))
```
```
Tell me a funny joke about the day 12/27/2023, 10:45:36
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:33.783Z",
"loadedUrl": "https://python.langchain.com/docs/modules/model_io/prompts/partial/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/modules/model_io/prompts/partial/",
"description": "Like other methods, it can make sense to “partial” a prompt template -",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3740",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"partial\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:33 GMT",
"etag": "W/\"f127a925d74864dce785d869a99aaf19\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::jrbzs-1713753933078-88bc27183dae"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/modules/model_io/prompts/partial/",
"property": "og:url"
},
{
"content": "Partial prompt templates | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Like other methods, it can make sense to “partial” a prompt template -",
"property": "og:description"
}
],
"title": "Partial prompt templates | 🦜️🔗 LangChain"
} | Partial prompt templates
Like other methods, it can make sense to “partial” a prompt template - e.g. pass in a subset of the required values, so as to create a new prompt template which expects only the remaining subset of values.
LangChain supports this in two ways: 1. Partial formatting with string values. 2. Partial formatting with functions that return string values.
These two different ways support different use cases. In the examples below, we go over the motivations for both use cases as well as how to do it in LangChain.
Partial with strings
One common use case for wanting to partial a prompt template is if you get some of the variables before others. For example, suppose you have a prompt template that requires two variables, foo and bar. If you get the foo value early on in the chain, but the bar value later, it can be annoying to wait until you have both variables in the same place to pass them to the prompt template. Instead, you can partial the prompt template with the foo value, and then pass the partialed prompt template along and just use that. Below is an example of doing this:
from langchain_core.prompts import PromptTemplate
prompt = PromptTemplate.from_template("{foo}{bar}")
partial_prompt = prompt.partial(foo="foo")
print(partial_prompt.format(bar="baz"))
You can also just initialize the prompt with the partialed variables.
prompt = PromptTemplate(
template="{foo}{bar}", input_variables=["bar"], partial_variables={"foo": "foo"}
)
print(prompt.format(bar="baz"))
Partial with functions
The other common use is to partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to have the current date. You can’t hard code it in the prompt, and passing it along with the other input variables is a bit annoying. In this case, it’s very handy to be able to partial the prompt with a function that always returns the current date.
from datetime import datetime
def _get_datetime():
now = datetime.now()
return now.strftime("%m/%d/%Y, %H:%M:%S")
prompt = PromptTemplate(
template="Tell me a {adjective} joke about the day {date}",
input_variables=["adjective", "date"],
)
partial_prompt = prompt.partial(date=_get_datetime)
print(partial_prompt.format(adjective="funny"))
Tell me a funny joke about the day 12/27/2023, 10:45:22
You can also just initialize the prompt with the partialed variables, which often makes more sense in this workflow.
prompt = PromptTemplate(
template="Tell me a {adjective} joke about the day {date}",
input_variables=["adjective"],
partial_variables={"date": _get_datetime},
)
print(prompt.format(adjective="funny"))
Tell me a funny joke about the day 12/27/2023, 10:45:36 |
https://python.langchain.com/docs/modules/tools/ | ## Tools
Tools are interfaces that an agent, chain, or LLM can use to interact with the world. They combine a few things:
1. The name of the tool
2. A description of what the tool is
3. JSON schema of what the inputs to the tool are
4. The function to call
5. Whether the result of a tool should be returned directly to the user
It is useful to have all this information because this information can be used to build action-taking systems! The name, description, and JSON schema can be used to prompt the LLM so it knows how to specify what action to take, and then the function to call is equivalent to taking that action.
The simpler the input to a tool is, the easier it is for an LLM to be able to use it. Many agents will only work with tools that have a single string input. For a list of agent types and which ones work with more complicated inputs, please see [this documentation](https://python.langchain.com/docs/modules/agents/agent_types/)
Importantly, the name, description, and JSON schema (if used) are all used in the prompt. Therefore, it is really important that they are clear and describe exactly how the tool should be used. You may need to change the default name, description, or JSON schema if the LLM is not understanding how to use the tool.
Let’s take a look at how to work with tools. To do this, we’ll work with a built-in tool.
```
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper
```
Now we initialize the tool. This is where we can configure it as we please
```
api_wrapper = WikipediaAPIWrapper(top_k_results=1, doc_content_chars_max=100)
tool = WikipediaQueryRun(api_wrapper=api_wrapper)
```
This is the default name
This is the default description
```
'A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, facts, historical events, or other subjects. Input should be a search query.'
```
This is the default JSON schema of the inputs
```
{'query': {'title': 'Query', 'type': 'string'}}
```
We can see if the tool should return directly to the user
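The attribute lookups behind the values above are not shown in this capture; a minimal sketch of inspecting them on the `tool` defined earlier (outputs omitted):

```
print(tool.name)  # the default tool name
print(tool.description)  # the default description shown above
print(tool.args)  # the default JSON schema of the inputs
print(tool.return_direct)  # whether results go straight back to the user
```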
We can call this tool with a dictionary input
```
tool.run({"query": "langchain"})
```
```
'Page: LangChain\nSummary: LangChain is a framework designed to simplify the creation of applications '
```
We can also call this tool with a single string input. We can do this because this tool expects only a single input. If it required multiple inputs, we would not be able to do that.
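The call itself was dropped from this capture; presumably it is the single-string form of `run`:

```
tool.run("langchain")
```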
```
'Page: LangChain\nSummary: LangChain is a framework designed to simplify the creation of applications '
```
We can also modify the built-in name, description, and JSON schema of the arguments.
When defining the JSON schema of the arguments, it is important that the inputs remain the same as the function, so you shouldn’t change that. But you can define custom descriptions for each input easily.
```
from langchain_core.pydantic_v1 import BaseModel, Field


class WikiInputs(BaseModel):
    """Inputs to the wikipedia tool."""

    query: str = Field(
        description="query to look up in Wikipedia, should be 3 or less words"
    )
```
```
tool = WikipediaQueryRun(
    name="wiki-tool",
    description="look up things in wikipedia",
    args_schema=WikiInputs,
    api_wrapper=api_wrapper,
    return_direct=True,
)
```
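As above, the accessor calls that produced the outputs below were dropped from this capture; they would presumably be separate cells along these lines:

```
tool.description  # -> 'look up things in wikipedia'
tool.args  # -> the customized args schema
tool.run("langchain")
```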
```
'look up things in wikipedia'
```
```
{'query': {'title': 'Query', 'description': 'query to look up in Wikipedia, should be 3 or less words', 'type': 'string'}}
```
```
'Page: LangChain\nSummary: LangChain is a framework designed to simplify the creation of applications '
```
## More Topics[](#more-topics "Direct link to More Topics")
This was a quick introduction to tools in LangChain, but there is a lot more to learn
**[Built-In Tools](https://python.langchain.com/docs/integrations/tools/)**: For a list of all built-in tools, see [this page](https://python.langchain.com/docs/integrations/tools/)
**[Custom Tools](https://python.langchain.com/docs/modules/tools/custom_tools/)**: Although built-in tools are useful, it’s highly likely that you’ll have to define your own tools. See [this guide](https://python.langchain.com/docs/modules/tools/custom_tools/) for instructions on how to do so.
**[Toolkits](https://python.langchain.com/docs/modules/tools/toolkits/)**: Toolkits are collections of tools that work well together. For a more in depth description as well as a list of all built-in toolkits, see [this page](https://python.langchain.com/docs/modules/tools/toolkits/)
**[Tools as OpenAI Functions](https://python.langchain.com/docs/modules/tools/tools_as_openai_functions/)**: Tools are very similar to OpenAI Functions, and can easily be converted to that format. See [this notebook](https://python.langchain.com/docs/modules/tools/tools_as_openai_functions/) for instructions on how to do that. | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:33.997Z",
"loadedUrl": "https://python.langchain.com/docs/modules/tools/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/modules/tools/",
"description": "Tools are interfaces that an agent, chain, or LLM can use to interact",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "8879",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"tools\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:33 GMT",
"etag": "W/\"386ecc4de6a13da178d6f6c6746bc01b\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::gwsts-1713753933758-d33dc6f916c4"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/modules/tools/",
"property": "og:url"
},
{
"content": "Tools | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Tools are interfaces that an agent, chain, or LLM can use to interact",
"property": "og:description"
}
],
"title": "Tools | 🦜️🔗 LangChain"
} | Tools
Tools are interfaces that an agent, chain, or LLM can use to interact with the world. They combine a few things:
The name of the tool
A description of what the tool is
JSON schema of what the inputs to the tool are
The function to call
Whether the result of a tool should be returned directly to the user
It is useful to have all this information because this information can be used to build action-taking systems! The name, description, and JSON schema can be used to prompt the LLM so it knows how to specify what action to take, and then the function to call is equivalent to taking that action.
The simpler the input to a tool is, the easier it is for an LLM to be able to use it. Many agents will only work with tools that have a single string input. For a list of agent types and which ones work with more complicated inputs, please see this documentation
Importantly, the name, description, and JSON schema (if used) are all used in the prompt. Therefore, it is really important that they are clear and describe exactly how the tool should be used. You may need to change the default name, description, or JSON schema if the LLM is not understanding how to use the tool.
Let’s take a look at how to work with tools. To do this, we’ll work with a built in tool.
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper
Now we initialize the tool. This is where we can configure it as we please
api_wrapper = WikipediaAPIWrapper(top_k_results=1, doc_content_chars_max=100)
tool = WikipediaQueryRun(api_wrapper=api_wrapper)
This is the default name
This is the default description
'A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, facts, historical events, or other subjects. Input should be a search query.'
This is the default JSON schema of the inputs
{'query': {'title': 'Query', 'type': 'string'}}
We can see if the tool should return directly to the user
We can call this tool with a dictionary input
tool.run({"query": "langchain"})
'Page: LangChain\nSummary: LangChain is a framework designed to simplify the creation of applications '
We can also call this tool with a single string input. We can do this because this tool expects only a single input. If it required multiple inputs, we would not be able to do that.
'Page: LangChain\nSummary: LangChain is a framework designed to simplify the creation of applications '
We can also modify the built in name, description, and JSON schema of the arguments.
When defining the JSON schema of the arguments, it is important that the inputs remain the same as the function, so you shouldn’t change that. But you can define custom descriptions for each input easily.
from langchain_core.pydantic_v1 import BaseModel, Field
class WikiInputs(BaseModel):
"""Inputs to the wikipedia tool."""
query: str = Field(
description="query to look up in Wikipedia, should be 3 or less words"
)
tool = WikipediaQueryRun(
name="wiki-tool",
description="look up things in wikipedia",
args_schema=WikiInputs,
api_wrapper=api_wrapper,
return_direct=True,
)
'look up things in wikipedia'
{'query': {'title': 'Query',
'description': 'query to look up in Wikipedia, should be 3 or less words',
'type': 'string'}}
'Page: LangChain\nSummary: LangChain is a framework designed to simplify the creation of applications '
More Topics
This was a quick introduction to tools in LangChain, but there is a lot more to learn
Built-In Tools: For a list of all built-in tools, see this page
Custom Tools: Although built-in tools are useful, it’s highly likely that you’ll have to define your own tools. See this guide for instructions on how to do so.
Toolkits: Toolkits are collections of tools that work well together. For a more in depth description as well as a list of all built-in toolkits, see this page
Tools as OpenAI Functions: Tools are very similar to OpenAI Functions, and can easily be converted to that format. See this notebook for instructions on how to do that. |
https://python.langchain.com/docs/modules/tools/toolkits/ | Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods. For a complete list of available ready-made toolkits, visit [Integrations](https://python.langchain.com/docs/integrations/toolkits/).
All Toolkits expose a `get_tools` method which returns a list of tools. You can therefore do:
```
# Initialize a toolkit
toolkit = ExampleToolkit(...)

# Get list of tools
tools = toolkit.get_tools()

# Create agent
agent = create_agent_method(llm, tools, prompt)
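
# A concrete sketch of the same pattern (illustrative, not from this page): it
# assumes langchain-community's SQL toolkit, a local SQLite file, and an OpenAI
# chat model are available.
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI

db = SQLDatabase.from_uri("sqlite:///example.db")  # hypothetical database path
llm = ChatOpenAI(model="gpt-3.5-turbo")
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
tools = toolkit.get_tools()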
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:34.198Z",
"loadedUrl": "https://python.langchain.com/docs/modules/tools/toolkits/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/modules/tools/toolkits/",
"description": "Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4817",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"toolkits\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:33 GMT",
"etag": "W/\"2d5ab0f2cffd0c645e1515dd1108f42d\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::r5b2z-1713753933971-fdf82361f765"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/modules/tools/toolkits/",
"property": "og:url"
},
{
"content": "Toolkits | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.",
"property": "og:description"
}
],
"title": "Toolkits | 🦜️🔗 LangChain"
} | Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods. For a complete list of available ready-made toolkits, visit Integrations.
All Toolkits expose a get_tools method which returns a list of tools. You can therefore do:
# Initialize a toolkit
toolkit = ExampleToolkit(...)
# Get list of tools
tools = toolkit.get_tools()
# Create agent
agent = create_agent_method(llm, tools, prompt) |
https://python.langchain.com/docs/modules/tools/custom_tools/ | ## Defining Custom Tools
When constructing your own agent, you will need to provide it with a list of Tools that it can use. Besides the actual function that is called, the Tool consists of several components:
* `name` (str), is required and must be unique within a set of tools provided to an agent
* `description` (str), is optional but recommended, as it is used by an agent to determine tool use
* `args_schema` (Pydantic BaseModel), is optional but recommended, can be used to provide more information (e.g., few-shot examples) or validation for expected parameters.
There are multiple ways to define a tool. In this guide, we will walk through how to do this for two functions:
1. A made up search function that always returns the string “LangChain”
2. A multiplier function that will multiply two numbers by each other
The biggest difference here is that the first function only requires one input, while the second one requires multiple. Many agents only work with functions that require single inputs, so it’s important to know how to work with those. For the most part, defining these custom tools is the same, but there are some differences.
```
# Import things that are needed generically
from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import BaseTool, StructuredTool, tool
```
This `@tool` decorator is the simplest way to define a custom tool. The decorator uses the function name as the tool name by default, but this can be overridden by passing a string as the first argument. Additionally, the decorator will use the function’s docstring as the tool’s description - so a docstring MUST be provided.
```
@tool
def search(query: str) -> str:
    """Look up things online."""
    return "LangChain"
```
```
print(search.name)
print(search.description)
print(search.args)
```
```
search
search(query: str) -> str - Look up things online.
{'query': {'title': 'Query', 'type': 'string'}}
```
```
@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b
```
```
print(multiply.name)
print(multiply.description)
print(multiply.args)
```
```
multiply
multiply(a: int, b: int) -> int - Multiply two numbers.
{'a': {'title': 'A', 'type': 'integer'}, 'b': {'title': 'B', 'type': 'integer'}}
```
You can also customize the tool name and JSON args by passing them into the tool decorator.
```
class SearchInput(BaseModel):
    query: str = Field(description="should be a search query")


@tool("search-tool", args_schema=SearchInput, return_direct=True)
def search(query: str) -> str:
    """Look up things online."""
    return "LangChain"
```
```
print(search.name)
print(search.description)
print(search.args)
print(search.return_direct)
```
```
search-tool
search-tool(query: str) -> str - Look up things online.
{'query': {'title': 'Query', 'description': 'should be a search query', 'type': 'string'}}
True
```
You can also explicitly define a custom tool by subclassing the BaseTool class. This provides maximal control over the tool definition, but is a bit more work.
```
from typing import Optional, Type

from langchain.callbacks.manager import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)


class SearchInput(BaseModel):
    query: str = Field(description="should be a search query")


class CalculatorInput(BaseModel):
    a: int = Field(description="first number")
    b: int = Field(description="second number")


class CustomSearchTool(BaseTool):
    name = "custom_search"
    description = "useful for when you need to answer questions about current events"
    args_schema: Type[BaseModel] = SearchInput

    def _run(
        self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool."""
        return "LangChain"

    async def _arun(
        self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("custom_search does not support async")


class CustomCalculatorTool(BaseTool):
    name = "Calculator"
    description = "useful for when you need to answer questions about math"
    args_schema: Type[BaseModel] = CalculatorInput
    return_direct: bool = True

    def _run(
        self, a: int, b: int, run_manager: Optional[CallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool."""
        return a * b

    async def _arun(
        self,
        a: int,
        b: int,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("Calculator does not support async")
```
```
search = CustomSearchTool()
print(search.name)
print(search.description)
print(search.args)
```
```
custom_search
useful for when you need to answer questions about current events
{'query': {'title': 'Query', 'description': 'should be a search query', 'type': 'string'}}
```
```
multiply = CustomCalculatorTool()
print(multiply.name)
print(multiply.description)
print(multiply.args)
print(multiply.return_direct)
```
```
Calculator
useful for when you need to answer questions about math
{'a': {'title': 'A', 'description': 'first number', 'type': 'integer'}, 'b': {'title': 'B', 'description': 'second number', 'type': 'integer'}}
True
```
You can also use a `StructuredTool` dataclass. This method is a mix of the previous two. It’s more convenient than inheriting from the BaseTool class, but provides more functionality than just using a decorator.
```
def search_function(query: str):
    return "LangChain"


search = StructuredTool.from_function(
    func=search_function,
    name="Search",
    description="useful for when you need to answer questions about current events",
    # coroutine= ... <- you can specify an async method if desired as well
)
```
```
print(search.name)
print(search.description)
print(search.args)
```
```
Search
Search(query: str) - useful for when you need to answer questions about current events
{'query': {'title': 'Query', 'type': 'string'}}
```
You can also define a custom `args_schema` to provide more information about inputs.
```
class CalculatorInput(BaseModel):
    a: int = Field(description="first number")
    b: int = Field(description="second number")


def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b


calculator = StructuredTool.from_function(
    func=multiply,
    name="Calculator",
    description="multiply numbers",
    args_schema=CalculatorInput,
    return_direct=True,
    # coroutine= ... <- you can specify an async method if desired as well
)
```
```
print(calculator.name)
print(calculator.description)
print(calculator.args)
```
```
Calculator
Calculator(a: int, b: int) -> int - multiply numbers
{'a': {'title': 'A', 'description': 'first number', 'type': 'integer'}, 'b': {'title': 'B', 'description': 'second number', 'type': 'integer'}}
```
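The `# coroutine= ...` comment above hints that `StructuredTool.from_function` can also be given an async implementation. A minimal sketch, reusing the `multiply` function and `CalculatorInput` schema (the `amultiply` name is introduced here for illustration):

```
async def amultiply(a: int, b: int) -> int:
    """Multiply two numbers asynchronously."""
    return a * b


async_calculator = StructuredTool.from_function(
    func=multiply,
    coroutine=amultiply,
    name="Calculator",
    description="multiply numbers",
    args_schema=CalculatorInput,
)
```

With a coroutine attached, `await async_calculator.arun({"a": 3, "b": 4})` can be used from async code, while `run` keeps working synchronously.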
When a tool encounters an error and the exception is not caught, the agent will stop executing. If you want the agent to continue execution, you can raise a `ToolException` and set `handle_tool_error` accordingly.
When `ToolException` is thrown, the agent will not stop working, but will handle the exception according to the `handle_tool_error` variable of the tool, and the processing result will be returned to the agent as an observation and printed in red.
You can set `handle_tool_error` to `True`, set it to a unified string value, or set it as a function. If it’s set as a function, the function should take a `ToolException` as a parameter and return a `str` value.
Please note that only raising a `ToolException` won’t be effective. You need to first set the `handle_tool_error` of the tool because its default value is `False`.
```
from langchain_core.tools import ToolException


def search_tool1(s: str):
    raise ToolException("The search tool1 is not available.")
```
First, let’s see what happens if we don’t set `handle_tool_error` - it will error.
```
search = StructuredTool.from_function(
    func=search_tool1,
    name="Search_tool1",
    description="A bad tool",
)
search.run("test")
```
```
ToolException: The search tool1 is not available.
```
Now, let’s set `handle_tool_error` to be True
```
search = StructuredTool.from_function(
    func=search_tool1,
    name="Search_tool1",
    description="A bad tool",
    handle_tool_error=True,
)
search.run("test")
```
```
'The search tool1 is not available.'
```
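As noted above, `handle_tool_error` can also be a fixed string; a minimal sketch of that variant with the same `search_tool1`:

```
search = StructuredTool.from_function(
    func=search_tool1,
    name="Search_tool1",
    description="A bad tool",
    handle_tool_error="There was an error with the search tool; please try a different query.",
)
search.run("test")
```

In this case `run` returns the fixed string instead of raising.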
We can also define a custom way to handle the tool error
```
def _handle_error(error: ToolException) -> str:
    return (
        "The following errors occurred during tool execution:"
        + error.args[0]
        + "Please try another tool."
    )


search = StructuredTool.from_function(
    func=search_tool1,
    name="Search_tool1",
    description="A bad tool",
    handle_tool_error=_handle_error,
)
search.run("test")
```
```
'The following errors occurred during tool execution:The search tool1 is not available.Please try another tool.'
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:34.240Z",
"loadedUrl": "https://python.langchain.com/docs/modules/tools/custom_tools/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/modules/tools/custom_tools/",
"description": "When constructing your own agent, you will need to provide it with a",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "7271",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"custom_tools\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:33 GMT",
"etag": "W/\"bced82bffeffe9f8c1f66609d2af29c4\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::9tn2v-1713753933767-42d515e9bd07"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/modules/tools/custom_tools/",
"property": "og:url"
},
{
"content": "Defining Custom Tools | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "When constructing your own agent, you will need to provide it with a",
"property": "og:description"
}
],
"title": "Defining Custom Tools | 🦜️🔗 LangChain"
} | Defining Custom Tools
When constructing your own agent, you will need to provide it with a list of Tools that it can use. Besides the actual function that is called, the Tool consists of several components:
name (str), is required and must be unique within a set of tools provided to an agent
description (str), is optional but recommended, as it is used by an agent to determine tool use
args_schema (Pydantic BaseModel), is optional but recommended, can be used to provide more information (e.g., few-shot examples) or validation for expected parameters.
There are multiple ways to define a tool. In this guide, we will walk through how to do this for two functions:
A made up search function that always returns the string “LangChain”
A multiplier function that will multiply two numbers by each other
The biggest difference here is that the first function only requires one input, while the second one requires multiple. Many agents only work with functions that require single inputs, so it’s important to know how to work with those. For the most part, defining these custom tools is the same, but there are some differences.
# Import things that are needed generically
from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import BaseTool, StructuredTool, tool
This @tool decorator is the simplest way to define a custom tool. The decorator uses the function name as the tool name by default, but this can be overridden by passing a string as the first argument. Additionally, the decorator will use the function’s docstring as the tool’s description - so a docstring MUST be provided.
@tool
def search(query: str) -> str:
"""Look up things online."""
return "LangChain"
print(search.name)
print(search.description)
print(search.args)
search
search(query: str) -> str - Look up things online.
{'query': {'title': 'Query', 'type': 'string'}}
@tool
def multiply(a: int, b: int) -> int:
"""Multiply two numbers."""
return a * b
print(multiply.name)
print(multiply.description)
print(multiply.args)
multiply
multiply(a: int, b: int) -> int - Multiply two numbers.
{'a': {'title': 'A', 'type': 'integer'}, 'b': {'title': 'B', 'type': 'integer'}}
You can also customize the tool name and JSON args by passing them into the tool decorator.
class SearchInput(BaseModel):
query: str = Field(description="should be a search query")
@tool("search-tool", args_schema=SearchInput, return_direct=True)
def search(query: str) -> str:
"""Look up things online."""
return "LangChain"
print(search.name)
print(search.description)
print(search.args)
print(search.return_direct)
search-tool
search-tool(query: str) -> str - Look up things online.
{'query': {'title': 'Query', 'description': 'should be a search query', 'type': 'string'}}
True
You can also explicitly define a custom tool by subclassing the BaseTool class. This provides maximal control over the tool definition, but is a bit more work.
from typing import Optional, Type
from langchain.callbacks.manager import (
AsyncCallbackManagerForToolRun,
CallbackManagerForToolRun,
)
class SearchInput(BaseModel):
query: str = Field(description="should be a search query")
class CalculatorInput(BaseModel):
a: int = Field(description="first number")
b: int = Field(description="second number")
class CustomSearchTool(BaseTool):
name = "custom_search"
description = "useful for when you need to answer questions about current events"
args_schema: Type[BaseModel] = SearchInput
def _run(
self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None
) -> str:
"""Use the tool."""
return "LangChain"
async def _arun(
self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None
) -> str:
"""Use the tool asynchronously."""
raise NotImplementedError("custom_search does not support async")
class CustomCalculatorTool(BaseTool):
name = "Calculator"
description = "useful for when you need to answer questions about math"
args_schema: Type[BaseModel] = CalculatorInput
return_direct: bool = True
def _run(
self, a: int, b: int, run_manager: Optional[CallbackManagerForToolRun] = None
) -> str:
"""Use the tool."""
return a * b
async def _arun(
self,
a: int,
b: int,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
) -> str:
"""Use the tool asynchronously."""
raise NotImplementedError("Calculator does not support async")
search = CustomSearchTool()
print(search.name)
print(search.description)
print(search.args)
custom_search
useful for when you need to answer questions about current events
{'query': {'title': 'Query', 'description': 'should be a search query', 'type': 'string'}}
multiply = CustomCalculatorTool()
print(multiply.name)
print(multiply.description)
print(multiply.args)
print(multiply.return_direct)
Calculator
useful for when you need to answer questions about math
{'a': {'title': 'A', 'description': 'first number', 'type': 'integer'}, 'b': {'title': 'B', 'description': 'second number', 'type': 'integer'}}
True
You can also use a StructuredTool dataclass. This method is a mix of the previous two. It’s more convenient than inheriting from the BaseTool class, but provides more functionality than just using a decorator.
def search_function(query: str):
return "LangChain"
search = StructuredTool.from_function(
func=search_function,
name="Search",
description="useful for when you need to answer questions about current events",
# coroutine= ... <- you can specify an async method if desired as well
)
print(search.name)
print(search.description)
print(search.args)
Search
Search(query: str) - useful for when you need to answer questions about current events
{'query': {'title': 'Query', 'type': 'string'}}
You can also define a custom args_schema to provide more information about inputs.
class CalculatorInput(BaseModel):
a: int = Field(description="first number")
b: int = Field(description="second number")
def multiply(a: int, b: int) -> int:
"""Multiply two numbers."""
return a * b
calculator = StructuredTool.from_function(
func=multiply,
name="Calculator",
description="multiply numbers",
args_schema=CalculatorInput,
return_direct=True,
# coroutine= ... <- you can specify an async method if desired as well
)
print(calculator.name)
print(calculator.description)
print(calculator.args)
Calculator
Calculator(a: int, b: int) -> int - multiply numbers
{'a': {'title': 'A', 'description': 'first number', 'type': 'integer'}, 'b': {'title': 'B', 'description': 'second number', 'type': 'integer'}}
When a tool encounters an error and the exception is not caught, the agent will stop executing. If you want the agent to continue execution, you can raise a ToolException and set handle_tool_error accordingly.
When ToolException is thrown, the agent will not stop working, but will handle the exception according to the handle_tool_error variable of the tool, and the processing result will be returned to the agent as an observation and printed in red.
You can set handle_tool_error to True, set it to a unified string value, or set it as a function. If it’s set as a function, the function should take a ToolException as a parameter and return a str value.
Please note that only raising a ToolException won’t be effective. You need to first set the handle_tool_error of the tool because its default value is False.
from langchain_core.tools import ToolException
def search_tool1(s: str):
raise ToolException("The search tool1 is not available.")
First, let’s see what happens if we don’t set handle_tool_error - it will error.
search = StructuredTool.from_function(
func=search_tool1,
name="Search_tool1",
description="A bad tool",
)
search.run("test")
ToolException: The search tool1 is not available.
Now, let’s set handle_tool_error to be True
search = StructuredTool.from_function(
func=search_tool1,
name="Search_tool1",
description="A bad tool",
handle_tool_error=True,
)
search.run("test")
'The search tool1 is not available.'
We can also define a custom way to handle the tool error
def _handle_error(error: ToolException) -> str:
return (
"The following errors occurred during tool execution:"
+ error.args[0]
+ "Please try another tool."
)
search = StructuredTool.from_function(
func=search_tool1,
name="Search_tool1",
description="A bad tool",
handle_tool_error=_handle_error,
)
search.run("test")
'The following errors occurred during tool execution:The search tool1 is not available.Please try another tool.' |
https://python.langchain.com/docs/modules/tools/tools_as_openai_functions/ | ## Tools as OpenAI Functions
This notebook goes over how to use LangChain tools as OpenAI functions.
```
%pip install -qU langchain-community langchain-openai
```
```
from langchain_community.tools import MoveFileTool
from langchain_core.messages import HumanMessage
from langchain_core.utils.function_calling import convert_to_openai_function
from langchain_openai import ChatOpenAI
```
```
model = ChatOpenAI(model="gpt-3.5-turbo")
```
```
tools = [MoveFileTool()]
functions = [convert_to_openai_function(t) for t in tools]
```
```
{'name': 'move_file',
 'description': 'Move or rename a file from one location to another',
 'parameters': {'type': 'object',
  'properties': {'source_path': {'description': 'Path of the file to move',
    'type': 'string'},
   'destination_path': {'description': 'New path for the moved file',
    'type': 'string'}},
  'required': ['source_path', 'destination_path']}}
```
```
message = model.invoke(
    [HumanMessage(content="move file foo to bar")], functions=functions
)
```
```
AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}', 'name': 'move_file'}})
```
```
message.additional_kwargs["function_call"]
```
```
{'name': 'move_file', 'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}'}
```
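The `arguments` field is a JSON-encoded string; a small sketch of turning it into a Python dict with the standard library:

```
import json

args = json.loads(message.additional_kwargs["function_call"]["arguments"])
print(args["source_path"], args["destination_path"])
```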
With OpenAI chat models we can also automatically bind and convert function-like objects with `bind_functions`
```
model_with_functions = model.bind_functions(tools)
model_with_functions.invoke([HumanMessage(content="move file foo to bar")])
```
```
AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}', 'name': 'move_file'}})
```
Or we can use the updated OpenAI API that uses `tools` and `tool_choice` instead of `functions` and `function_call` by using `ChatOpenAI.bind_tools`:
```
model_with_tools = model.bind_tools(tools)
model_with_tools.invoke([HumanMessage(content="move file foo to bar")])
```
```
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_btkY3xV71cEVAOHnNa5qwo44', 'function': {'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}', 'name': 'move_file'}, 'type': 'function'}]})
```
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:34.876Z",
"loadedUrl": "https://python.langchain.com/docs/modules/tools/tools_as_openai_functions/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/modules/tools/tools_as_openai_functions/",
"description": "This notebook goes over how to use LangChain tools as OpenAI functions.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "5108",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"tools_as_openai_functions\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:34 GMT",
"etag": "W/\"4b6c6efaaece2090a555a0411095c63c\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::pgznm-1713753934802-f3e9831030aa"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/modules/tools/tools_as_openai_functions/",
"property": "og:url"
},
{
"content": "Tools as OpenAI Functions | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This notebook goes over how to use LangChain tools as OpenAI functions.",
"property": "og:description"
}
],
"title": "Tools as OpenAI Functions | 🦜️🔗 LangChain"
} | Tools as OpenAI Functions
This notebook goes over how to use LangChain tools as OpenAI functions.
%pip install -qU langchain-community langchain-openai
from langchain_community.tools import MoveFileTool
from langchain_core.messages import HumanMessage
from langchain_core.utils.function_calling import convert_to_openai_function
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-3.5-turbo")
tools = [MoveFileTool()]
functions = [convert_to_openai_function(t) for t in tools]
{'name': 'move_file',
'description': 'Move or rename a file from one location to another',
'parameters': {'type': 'object',
'properties': {'source_path': {'description': 'Path of the file to move',
'type': 'string'},
'destination_path': {'description': 'New path for the moved file',
'type': 'string'}},
'required': ['source_path', 'destination_path']}}
message = model.invoke(
[HumanMessage(content="move file foo to bar")], functions=functions
)
AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}', 'name': 'move_file'}})
message.additional_kwargs["function_call"]
{'name': 'move_file',
'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}'}
With OpenAI chat models we can also automatically bind and convert function-like objects with bind_functions
model_with_functions = model.bind_functions(tools)
model_with_functions.invoke([HumanMessage(content="move file foo to bar")])
AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}', 'name': 'move_file'}})
Or we can use the updated OpenAI API that uses tools and tool_choice instead of functions and function_call by using ChatOpenAI.bind_tools:
model_with_tools = model.bind_tools(tools)
model_with_tools.invoke([HumanMessage(content="move file foo to bar")])
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_btkY3xV71cEVAOHnNa5qwo44', 'function': {'arguments': '{\n "source_path": "foo",\n "destination_path": "bar"\n}', 'name': 'move_file'}, 'type': 'function'}]})
https://python.langchain.com/docs/packages/ | ## 📕 Package Versioning
As of now, LangChain has an ad hoc release process: releases are cut with high frequency by a maintainer and published to [PyPI](https://pypi.org/). The different packages are versioned slightly differently.
## `langchain-core`[](#langchain-core "Direct link to langchain-core")
`langchain-core` is currently on version `0.1.x`.
As `langchain-core` contains the base abstractions and runtime for the whole LangChain ecosystem, we will communicate any breaking changes with advance notice and version bumps. The exception for this is anything marked with the `beta` decorator (you can see this in the API reference and will see warnings when using such functionality). The reason for beta features is that given the rate of change of the field, being able to move quickly is still a priority.
Minor version increases will occur for:
* Breaking changes for any public interfaces marked as `beta`.
Patch version increases will occur for:
* Bug fixes
* New features
* Any changes to private interfaces
* Any changes to `beta` features
## `langchain`[](#langchain "Direct link to langchain")
`langchain` is currently on version `0.1.x`
Minor version increases will occur for:
* Breaking changes for any public interfaces NOT marked as `beta`.
Patch version increases will occur for:
* Bug fixes
* New features
* Any changes to private interfaces
* Any changes to `beta` features
We are working on the `langchain` v0.2 release, which will have some breaking changes to legacy Chains and Agents. Additionally, we will remove `langchain-community` as a dependency and stop re-exporting integrations that have been moved to `langchain-community`.
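Until that release lands, one way to insulate an application from such breaking changes is to cap the minor version when installing (illustrative constraints, not an official recommendation):

```
%pip install -qU "langchain>=0.1,<0.2" "langchain-core>=0.1,<0.2"
```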
## `langchain-community`[](#langchain-community "Direct link to langchain-community")
`langchain-community` is currently on version `0.0.x`
All changes will be accompanied by a patch version increase.
## `langchain-experimental`[](#langchain-experimental "Direct link to langchain-experimental")
`langchain-experimental` is currently on version `0.0.x`
All changes will be accompanied by a patch version increase.
## Partner Packages[](#partner-packages "Direct link to Partner Packages")
Partner packages are versioned independently. | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:35.676Z",
"loadedUrl": "https://python.langchain.com/docs/packages/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/packages/",
"description": "As of now, LangChain has an ad hoc release process: releases are cut with high frequency by",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"packages\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:35 GMT",
"etag": "W/\"145be9cc2a63112e65885d9c9cf3d502\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::ktknr-1713753935526-531b69983f7a"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/packages/",
"property": "og:url"
},
{
"content": "📕 Package Versioning | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "As of now, LangChain has an ad hoc release process: releases are cut with high frequency by",
"property": "og:description"
}
],
"title": "📕 Package Versioning | 🦜️🔗 LangChain"
} | 📕 Package Versioning
As of now, LangChain has an ad hoc release process: releases are cut with high frequency by a maintainer and published to PyPI. The different packages are versioned slightly differently.
langchain-core
langchain-core is currently on version 0.1.x.
As langchain-core contains the base abstractions and runtime for the whole LangChain ecosystem, we will communicate any breaking changes with advance notice and version bumps. The exception for this is anything marked with the beta decorator (you can see this in the API reference and will see warnings when using such functionality). The reason for beta features is that given the rate of change of the field, being able to move quickly is still a priority.
Minor version increases will occur for:
Breaking changes for any public interfaces marked as beta.
Patch version increases will occur for:
Bug fixes
New features
Any changes to private interfaces
Any changes to beta features
langchain
langchain is currently on version 0.1.x
Minor version increases will occur for:
Breaking changes for any public interfaces NOT marked as beta.
Patch version increases will occur for:
Bug fixes
New features
Any changes to private interfaces
Any changes to beta features
We are working on the langchain v0.2 release, which will have some breaking changes to legacy Chains and Agents. Additionally, we will remove langchain-community as a dependency and stop re-exporting integrations that have been moved to langchain-community.
langchain-community is currently on version 0.0.x
All changes will be accompanied by a patch version increase.
langchain-experimental
langchain-experimental is currently on version 0.0.x
All changes will be accompanied by a patch version increase.
Partner Packages
Partner packages are versioned independently. |
https://python.langchain.com/docs/people/ |  | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:36.436Z",
"loadedUrl": "https://python.langchain.com/docs/people/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/people/",
"description": "There are some incredible humans from all over the world who have been instrumental in helping the LangChain community flourish 🌐!",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3742",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"people\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:36 GMT",
"etag": "W/\"98ab19b3cc02bb7b805b5c578dadc92c\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::kfn55-1713753936167-7add5ec1b9ea"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/people/",
"property": "og:url"
},
{
"content": "People | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "There are some incredible humans from all over the world who have been instrumental in helping the LangChain community flourish 🌐!",
"property": "og:description"
}
],
"title": "People | 🦜️🔗 LangChain"
} |
https://python.langchain.com/docs/security/ | ## Security
LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources.
## Best Practices[](#best-practices "Direct link to Best Practices")
When building such applications developers should remember to follow good security practices:
* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), etc. as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it’s safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It’s best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
Risks of not doing so include, but are not limited to:
* Data corruption or loss.
* Unauthorized access to confidential information.
* Compromised performance or availability of critical resources.
Example scenarios with mitigation strategies:
* A user may ask an agent with access to the file system to delete files that should not be deleted or read the content of files that contain sensitive information. To mitigate, limit the agent to only use a specific directory and only allow it to read or write files that are safe to read or write. Consider further sandboxing the agent by running it in a container.
* A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials.
If you're building applications that access external resources like file systems, APIs or databases, consider speaking with your company's security team to determine how to best design and secure your applications.
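For the file-system scenario above, a minimal sketch of scoping an agent's file tools to a single directory, assuming the `FileManagementToolkit` from `langchain-community` (the directory and tool selection are illustrative):

```
from langchain_community.agent_toolkits import FileManagementToolkit

# Only expose read-oriented tools, rooted in a dedicated working directory.
toolkit = FileManagementToolkit(
    root_dir="/tmp/agent_workspace",
    selected_tools=["read_file", "list_directory"],
)
tools = toolkit.get_tools()
```

Combining a restriction like this with container-level sandboxing gives the layered defense described above.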
## Reporting a Vulnerability[](#reporting-a-vulnerability "Direct link to Reporting a Vulnerability")
Please report security vulnerabilities by email to [security@langchain.dev](mailto:security@langchain.dev). This will ensure the issue is promptly triaged and acted upon as needed. | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:40.926Z",
"loadedUrl": "https://python.langchain.com/docs/security/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/security/",
"description": "LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "9110",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"security\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:40 GMT",
"etag": "W/\"4c52ac28fe6f14640020d70efd4da8e3\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::wbz5r-1713753940797-b1853f22b4cf"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/security/",
"property": "og:url"
},
{
"content": "Security | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources.",
"property": "og:description"
}
],
"title": "Security | 🦜️🔗 LangChain"
} | Security
LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources.
Best Practices
When building such applications developers should remember to follow good security practices:
Limit Permissions: Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), etc. as appropriate for your application.
Anticipate Potential Misuse: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it’s safest to assume that any LLM able to use those credentials may in fact delete data.
Defense in Depth: No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It’s best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
Risks of not doing so include, but are not limited to:
Data corruption or loss.
Unauthorized access to confidential information.
Compromised performance or availability of critical resources.
Example scenarios with mitigation strategies:
A user may ask an agent with access to the file system to delete files that should not be deleted or read the content of files that contain sensitive information. To mitigate, limit the agent to only use a specific directory and only allow it to read or write files that are safe to read or write. Consider further sandboxing the agent by running it in a container. A minimal sketch of such directory scoping appears after this list.
A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials.
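As an illustration of the directory-scoping mitigation above (this sketch is not part of the original guidance; the sandbox path and tool name are hypothetical), a read-only file tool that refuses to leave a sandbox directory could look like:
from pathlib import Path
from langchain_core.tools import tool
SANDBOX = Path("/srv/agent-data").resolve()  # hypothetical sandbox directory
@tool
def read_sandboxed_file(relative_path: str) -> str:
    """Read a file, but only from inside the sandbox directory."""
    target = (SANDBOX / relative_path).resolve()
    if target != SANDBOX and SANDBOX not in target.parents:
        return "Refused: path is outside the sandbox."
    return target.read_text()
Giving the agent only such narrowly scoped tools, rather than unrestricted file or shell access, is one concrete way to apply the read-only and sandboxing advice above.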
If you're building applications that access external resources like file systems, APIs or databases, consider speaking with your company's security team to determine how to best design and secure your applications.
Reporting a Vulnerability
Please report security vulnerabilities by email to security@langchain.dev. This will ensure the issue is promptly triaged and acted upon as needed. |
https://python.langchain.com/docs/templates/ | ## Templates
Highlighting a few different categories of templates
## ⭐ Popular[](#-popular "Direct link to ⭐ Popular")
These are some of the more popular templates to get started with.
* [Retrieval Augmented Generation Chatbot](https://python.langchain.com/docs/templates/rag-conversation/): Build a chatbot over your data. Defaults to OpenAI and PineconeVectorStore.
* [Extraction with OpenAI Functions](https://python.langchain.com/docs/templates/extraction-openai-functions/): Do extraction of structured data from unstructured data. Uses OpenAI function calling.
* [Local Retrieval Augmented Generation](https://python.langchain.com/docs/templates/rag-chroma-private/): Build a chatbot over your data. Uses only local tooling: Ollama, GPT4all, Chroma.
* [OpenAI Functions Agent](https://python.langchain.com/docs/templates/openai-functions-agent/): Build a chatbot that can take actions. Uses OpenAI function calling and Tavily.
* [XML Agent](https://python.langchain.com/docs/templates/xml-agent/): Build a chatbot that can take actions. Uses Anthropic and You.com.
## 📥 Advanced Retrieval[](#-advanced-retrieval "Direct link to 📥 Advanced Retrieval")
These templates cover advanced retrieval techniques, which can be used for chat and QA over databases or documents.
* [Reranking](https://python.langchain.com/docs/templates/rag-pinecone-rerank/): This retrieval technique uses Cohere's reranking endpoint to rerank documents from an initial retrieval step.
* [Anthropic Iterative Search](https://python.langchain.com/docs/templates/anthropic-iterative-search/): This retrieval technique uses iterative prompting to determine what to retrieve and whether the retriever documents are good enough.
* **Parent Document Retrieval** using [Neo4j](https://python.langchain.com/docs/templates/neo4j-parent/) or [MongoDB](https://python.langchain.com/docs/templates/mongo-parent-document-retrieval/): This retrieval technique stores embeddings for smaller chunks, but then returns larger chunks to pass to the model for generation.
* [Semi-Structured RAG](https://python.langchain.com/docs/templates/rag-semi-structured/): The template shows how to do retrieval over semi-structured data (e.g. data that involves both text and tables).
* [Temporal RAG](https://python.langchain.com/docs/templates/rag-timescale-hybrid-search-time/): The template shows how to do hybrid search over data with a time-based component using [Timescale Vector](https://www.timescale.com/ai?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral).
## 🔍Advanced Retrieval - Query Transformation[](#advanced-retrieval---query-transformation "Direct link to 🔍Advanced Retrieval - Query Transformation")
A selection of advanced retrieval methods that involve transforming the original user query, which can improve retrieval quality.
* [Hypothetical Document Embeddings](https://python.langchain.com/docs/templates/hyde/): A retrieval technique that generates a hypothetical document for a given query, and then uses the embedding of that document to do semantic search. [Paper](https://arxiv.org/abs/2212.10496).
* [Rewrite-Retrieve-Read](https://python.langchain.com/docs/templates/rewrite-retrieve-read/): A retrieval technique that rewrites a given query before passing it to a search engine. [Paper](https://arxiv.org/abs/2305.14283).
* [Step-back QA Prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting/): A retrieval technique that generates a "step-back" question and then retrieves documents relevant to both that question and the original question. [Paper](https://arxiv.org/abs//2310.06117).
* [RAG-Fusion](https://python.langchain.com/docs/templates/rag-fusion/): A retrieval technique that generates multiple queries and then reranks the retrieved documents using reciprocal rank fusion (a minimal sketch of the fusion step appears after this list). [Article](https://towardsdatascience.com/forget-rag-the-future-is-rag-fusion-1147298d8ad1).
* [Multi-Query Retriever](https://python.langchain.com/docs/templates/rag-pinecone-multi-query/): This retrieval technique uses an LLM to generate multiple queries and then fetches documents for all queries.
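For readers unfamiliar with reciprocal rank fusion (mentioned in the RAG-Fusion entry above), here is a minimal, illustrative sketch of the fusion step itself; it is not taken from the template, and the constant `k=60` is simply the value commonly used in the literature:

```
# Reciprocal rank fusion: combine several ranked lists of document IDs into one.
from collections import defaultdict

def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: three generated queries produced three (partially overlapping) result lists.
fused = reciprocal_rank_fusion([["a", "b", "c"], ["b", "a", "d"], ["c", "b", "e"]])
```

Documents that rank highly across several of the generated queries float to the top of the fused list.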
## 🧠Advanced Retrieval - Query Construction[](#advanced-retrieval---query-construction "Direct link to 🧠Advanced Retrieval - Query Construction")
A selection of advanced retrieval methods that involve constructing a query in a separate DSL from natural language, which enable natural language chat over various structured databases.
* [Elastic Query Generator](https://python.langchain.com/docs/templates/elastic-query-generator/): Generate elastic search queries from natural language.
* [Neo4j Cypher Generation](https://python.langchain.com/docs/templates/neo4j-cypher/): Generate cypher statements from natural language. Available with a ["full text" option](https://python.langchain.com/docs/templates/neo4j-cypher-ft/) as well.
* [Supabase Self Query](https://python.langchain.com/docs/templates/self-query-supabase/): Parse a natural language query into a semantic query as well as a metadata filter for Supabase.
## 🦙 OSS Models[](#-oss-models "Direct link to 🦙 OSS Models")
These templates use OSS models, which enable privacy for sensitive data.
* [Local Retrieval Augmented Generation](https://python.langchain.com/docs/templates/rag-chroma-private/): Build a chatbot over your data. Uses only local tooling: Ollama, GPT4all, Chroma.
* [SQL Question Answering (Replicate)](https://python.langchain.com/docs/templates/sql-llama2/): Question answering over a SQL database, using Llama2 hosted on [Replicate](https://replicate.com/).
* [SQL Question Answering (LlamaCpp)](https://python.langchain.com/docs/templates/sql-llamacpp/): Question answering over a SQL database, using Llama2 through [LlamaCpp](https://github.com/ggerganov/llama.cpp).
* [SQL Question Answering (Ollama)](https://python.langchain.com/docs/templates/sql-ollama/): Question answering over a SQL database, using Llama2 through [Ollama](https://github.com/jmorganca/ollama).
These templates extract data in a structured format based upon a user-specified schema.
* [Extraction Using OpenAI Functions](https://python.langchain.com/docs/templates/extraction-openai-functions/): Extract information from text using OpenAI Function Calling.
* [Extraction Using Anthropic Functions](https://python.langchain.com/docs/templates/extraction-anthropic-functions/): Extract information from text using a LangChain wrapper around the Anthropic endpoints intended to simulate function calling.
* [Extract BioTech Plate Data](https://python.langchain.com/docs/templates/plate-chain/): Extract microplate data from messy Excel spreadsheets into a more normalized format.
## ⛏️Summarization and tagging[](#️summarization-and-tagging "Direct link to ⛏️Summarization and tagging")
These templates summarize or categorize documents and text.
* [Summarization using Anthropic](https://python.langchain.com/docs/templates/summarize-anthropic/): Uses Anthropic's Claude2 to summarize long documents.
## 🤖 Agents[](#-agents "Direct link to 🤖 Agents")
These templates build chatbots that can take actions, helping to automate tasks.
* [OpenAI Functions Agent](https://python.langchain.com/docs/templates/openai-functions-agent/): Build a chatbot that can take actions. Uses OpenAI function calling and Tavily.
* [XML Agent](https://python.langchain.com/docs/templates/xml-agent/): Build a chatbot that can take actions. Uses Anthropic and You.com.
## 🚨 Safety and evaluation[](#-safety-and-evaluation "Direct link to 🚨 Safety and evaluation")
These templates enable moderation or evaluation of LLM outputs.
* [Guardrails Output Parser](https://python.langchain.com/docs/templates/guardrails-output-parser/): Use guardrails-ai to validate LLM output.
* [Chatbot Feedback](https://python.langchain.com/docs/templates/chat-bot-feedback/): Use LangSmith to evaluate chatbot responses. | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:41.571Z",
"loadedUrl": "https://python.langchain.com/docs/templates/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/",
"description": "Highlighting a few different categories of templates",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "5268",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"templates\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:41 GMT",
"etag": "W/\"c475dd6bd4b9734ea36cf93de92cb80c\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::hhtvz-1713753941386-82f8785aea02"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/",
"property": "og:url"
},
{
"content": "Templates | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Highlighting a few different categories of templates",
"property": "og:description"
}
],
"title": "Templates | 🦜️🔗 LangChain"
} | Templates
Highlighting a few different categories of templates
⭐ Popular
These are some of the more popular templates to get started with.
Retrieval Augmented Generation Chatbot: Build a chatbot over your data. Defaults to OpenAI and PineconeVectorStore.
Extraction with OpenAI Functions: Do extraction of structured data from unstructured data. Uses OpenAI function calling.
Local Retrieval Augmented Generation: Build a chatbot over your data. Uses only local tooling: Ollama, GPT4all, Chroma.
OpenAI Functions Agent: Build a chatbot that can take actions. Uses OpenAI function calling and Tavily.
XML Agent: Build a chatbot that can take actions. Uses Anthropic and You.com.
📥 Advanced Retrieval
These templates cover advanced retrieval techniques, which can be used for chat and QA over databases or documents.
Reranking: This retrieval technique uses Cohere's reranking endpoint to rerank documents from an initial retrieval step.
Anthropic Iterative Search: This retrieval technique uses iterative prompting to determine what to retrieve and whether the retriever documents are good enough.
Parent Document Retrieval using Neo4j or MongoDB: This retrieval technique stores embeddings for smaller chunks, but then returns larger chunks to pass to the model for generation.
Semi-Structured RAG: The template shows how to do retrieval over semi-structured data (e.g. data that involves both text and tables).
Temporal RAG: The template shows how to do hybrid search over data with a time-based component using Timescale Vector.
🔍Advanced Retrieval - Query Transformation
A selection of advanced retrieval methods that involve transforming the original user query, which can improve retrieval quality.
Hypothetical Document Embeddings: A retrieval technique that generates a hypothetical document for a given query, and then uses the embedding of that document to do semantic search. Paper.
Rewrite-Retrieve-Read: A retrieval technique that rewrites a given query before passing it to a search engine. Paper.
Step-back QA Prompting: A retrieval technique that generates a "step-back" question and then retrieves documents relevant to both that question and the original question. Paper.
RAG-Fusion: A retrieval technique that generates multiple queries and then reranks the retrieved documents using reciprocal rank fusion. Article.
Multi-Query Retriever: This retrieval technique uses an LLM to generate multiple queries and then fetches documents for all queries.
🧠Advanced Retrieval - Query Construction
A selection of advanced retrieval methods that involve constructing a query in a separate DSL from natural language, which enable natural language chat over various structured databases.
Elastic Query Generator: Generate elastic search queries from natural language.
Neo4j Cypher Generation: Generate cypher statements from natural language. Available with a "full text" option as well.
Supabase Self Query: Parse a natural language query into a semantic query as well as a metadata filter for Supabase.
🦙 OSS Models
These templates use OSS models, which enable privacy for sensitive data.
Local Retrieval Augmented Generation: Build a chatbot over your data. Uses only local tooling: Ollama, GPT4all, Chroma.
SQL Question Answering (Replicate): Question answering over a SQL database, using Llama2 hosted on Replicate.
SQL Question Answering (LlamaCpp): Question answering over a SQL database, using Llama2 through LlamaCpp.
SQL Question Answering (Ollama): Question answering over a SQL database, using Llama2 through Ollama.
These templates extract data in a structured format based upon a user-specified schema.
Extraction Using OpenAI Functions: Extract information from text using OpenAI Function Calling.
Extraction Using Anthropic Functions: Extract information from text using a LangChain wrapper around the Anthropic endpoints intended to simulate function calling.
Extract BioTech Plate Data: Extract microplate data from messy Excel spreadsheets into a more normalized format.
⛏️Summarization and tagging
These templates summarize or categorize documents and text.
Summarization using Anthropic: Uses Anthropic's Claude2 to summarize long documents.
🤖 Agents
These templates build chatbots that can take actions, helping to automate tasks.
OpenAI Functions Agent: Build a chatbot that can take actions. Uses OpenAI function calling and Tavily.
XML Agent: Build a chatbot that can take actions. Uses Anthropic and You.com.
🚨 Safety and evaluation
These templates enable moderation or evaluation of LLM outputs.
Guardrails Output Parser: Use guardrails-ai to validate LLM output.
Chatbot Feedback: Use LangSmith to evaluate chatbot responses. |
https://python.langchain.com/docs/templates/bedrock-jcvd/ | ## Bedrock JCVD 🕺🥋
## Overview[](#overview "Direct link to Overview")
LangChain template that uses [Anthropic's Claude on Amazon Bedrock](https://aws.amazon.com/bedrock/claude/) to behave like JCVD.
> I am the Fred Astaire of Chatbots! 🕺
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
### AWS Credentials[](#aws-credentials "Direct link to AWS Credentials")
This template uses [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html), the AWS SDK for Python, to call [Amazon Bedrock](https://aws.amazon.com/bedrock/). You **must** configure both AWS credentials _and_ an AWS Region in order to make requests.
> For information on how to do this, see [AWS Boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html) (Developer Guide > Credentials).
### Foundation Models[](#foundation-models "Direct link to Foundation Models")
By default, this template uses [Anthropic's Claude v2](https://aws.amazon.com/about-aws/whats-new/2023/08/claude-2-foundation-model-anthropic-amazon-bedrock/) (`anthropic.claude-v2`).
> To request access to a specific model, check out the [Amazon Bedrock User Guide](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html) (Model access)
To use a different model, set the environment variable `BEDROCK_JCVD_MODEL_ID`. A list of base models is available in the [Amazon Bedrock User Guide](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids-arns.html) (Use the API > API operations > Run inference > Base Model IDs).
> The full list of available models (including base and [custom models](https://docs.aws.amazon.com/bedrock/latest/userguide/custom-models.html)) is available in the [Amazon Bedrock Console](https://docs.aws.amazon.com/bedrock/latest/userguide/using-console.html) under **Foundation Models** or by calling [`aws bedrock list-foundation-models`](https://docs.aws.amazon.com/cli/latest/reference/bedrock/list-foundation-models.html).
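For example, switching to a different Claude model is just a matter of exporting the variable before starting the app (the model ID below is illustrative; use one that is enabled in your own Bedrock account):

```
export BEDROCK_JCVD_MODEL_ID=anthropic.claude-v2:1
```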
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package bedrock-jcvd
```
If you want to add this to an existing project, you can just run:
```
langchain app add bedrock-jcvd
```
And add the following code to your `server.py` file:
```
from bedrock_jcvd import chain as bedrock_jcvd_chain
add_routes(app, bedrock_jcvd_chain, path="/bedrock-jcvd")
```
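The snippet above assumes the FastAPI app object and `add_routes` already exist in `server.py` (the scaffold created by `langchain app new` provides them). For orientation, a minimal, illustrative `server.py` might look like this:

```
from fastapi import FastAPI
from langserve import add_routes

from bedrock_jcvd import chain as bedrock_jcvd_chain

app = FastAPI()
add_routes(app, bedrock_jcvd_chain, path="/bedrock-jcvd")
```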
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs).
We can also access the playground at [http://127.0.0.1:8000/bedrock-jcvd/playground](http://127.0.0.1:8000/bedrock-jcvd/playground) | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:41.746Z",
"loadedUrl": "https://python.langchain.com/docs/templates/bedrock-jcvd/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/bedrock-jcvd/",
"description": "Overview",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3747",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"bedrock-jcvd\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:41 GMT",
"etag": "W/\"2222159c7f693cd78c758393d21197f7\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::54c7l-1713753941696-89ae2c2b2a3d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/bedrock-jcvd/",
"property": "og:url"
},
{
"content": "Bedrock JCVD 🕺🥋 | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Overview",
"property": "og:description"
}
],
"title": "Bedrock JCVD 🕺🥋 | 🦜️🔗 LangChain"
} | Bedrock JCVD 🕺🥋
Overview
LangChain template that uses Anthropic's Claude on Amazon Bedrock to behave like JCVD.
I am the Fred Astaire of Chatbots! 🕺
Environment Setup
AWS Credentials
This template uses Boto3, the AWS SDK for Python, to call Amazon Bedrock. You must configure both AWS credentials and an AWS Region in order to make requests.
For information on how to do this, see AWS Boto3 documentation (Developer Guide > Credentials).
Foundation Models
By default, this template uses Anthropic's Claude v2 (anthropic.claude-v2).
To request access to a specific model, check out the Amazon Bedrock User Guide (Model access)
To use a different model, set the environment variable BEDROCK_JCVD_MODEL_ID. A list of base models is available in the Amazon Bedrock User Guide (Use the API > API operations > Run inference > Base Model IDs).
The full list of available models (including base and custom models) is available in the Amazon Bedrock Console under Foundation Models or by calling aws bedrock list-foundation-models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package bedrock-jcvd
If you want to add this to an existing project, you can just run:
langchain app add bedrock-jcvd
And add the following code to your server.py file:
from bedrock_jcvd import chain as bedrock_jcvd_chain
add_routes(app, bedrock_jcvd_chain, path="/bedrock-jcvd")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs.
We can also access the playground at http://127.0.0.1:8000/bedrock-jcvd/playground |
https://python.langchain.com/docs/templates/rag-gpt-crawler/ | ## rag-gpt-crawler
GPT-crawler will crawl websites to produce files for use in custom GPTs or other apps (RAG).
This template uses [gpt-crawler](https://github.com/BuilderIO/gpt-crawler) to build a RAG app
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Crawling[](#crawling "Direct link to Crawling")
Run GPT-crawler to extract content from a set of urls, using the config file in the GPT-crawler repo.
Here is example config for LangChain use-case docs:
```
export const config: Config = {
  url: "https://python.langchain.com/docs/use_cases/",
  match: "https://python.langchain.com/docs/use_cases/**",
  selector: ".docMainContainer_gTbr",
  maxPagesToCrawl: 10,
  outputFileName: "output.json",
};
```
Then, run this as described in the [gpt-crawler](https://github.com/BuilderIO/gpt-crawler) README:
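The run command itself comes from the gpt-crawler project; at the time of writing its README starts the crawl with npm, roughly:

```
npm start
```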
And copy the `output.json` file into the folder containing this README.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-gpt-crawler
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-gpt-crawler
```
And add the following code to your `server.py` file:
```
from rag_gpt_crawler import chain as rag_gpt_crawler
add_routes(app, rag_gpt_crawler, path="/rag-gpt-crawler")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/rag-gpt-crawler/playground](http://127.0.0.1:8000/rag-gpt-crawler/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-gpt-crawler")
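# Illustrative usage (not from the original page): the remote chain behaves like
# any other runnable; for this RAG template the input is typically just the question.
answer = runnable.invoke("What are common LangChain use cases?")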
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:41.886Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-gpt-crawler/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-gpt-crawler/",
"description": "GPT-crawler will crawl websites to produce files for use in custom GPTs or other apps (RAG).",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-gpt-crawler\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:41 GMT",
"etag": "W/\"f3928f083f5dd8a9276da905166dbfd0\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::7dlgj-1713753941716-53670da9d630"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-gpt-crawler/",
"property": "og:url"
},
{
"content": "rag-gpt-crawler | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "GPT-crawler will crawl websites to produce files for use in custom GPTs or other apps (RAG).",
"property": "og:description"
}
],
"title": "rag-gpt-crawler | 🦜️🔗 LangChain"
} | rag-gpt-crawler
GPT-crawler will crawl websites to produce files for use in custom GPTs or other apps (RAG).
This template uses gpt-crawler to build a RAG app
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Crawling
Run GPT-crawler to extract content from a set of urls, using the config file in the GPT-crawler repo.
Here is example config for LangChain use-case docs:
export const config: Config = {
url: "https://python.langchain.com/docs/use_cases/",
match: "https://python.langchain.com/docs/use_cases/**",
selector: ".docMainContainer_gTbr",
maxPagesToCrawl: 10,
outputFileName: "output.json",
};
Then, run this as described in the gpt-crawler README:
And copy the output.json file into the folder containing this README.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-gpt-crawler
If you want to add this to an existing project, you can just run:
langchain app add rag-gpt-crawler
And add the following code to your server.py file:
from rag_gpt_crawler import chain as rag_gpt_crawler
add_routes(app, rag_gpt_crawler, path="/rag-gpt-crawler")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/rag-gpt-crawler/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-gpt-crawler") |
https://python.langchain.com/docs/templates/rag-jaguardb/ | ## rag-jaguardb
This template performs RAG using JaguarDB and OpenAI.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
You should export two environment variables, one being your Jaguar URI, the other being your OpenAI API KEY. If you do not have JaguarDB set up, see the `Setup Jaguar` section at the bottom for instructions on how to do so.
```
export JAGUAR_API_KEY=...
export OPENAI_API_KEY=...
```
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-jaguardb
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-jaguardb
```
And add the following code to your `server.py` file:
```
from rag_jaguardb import chain as rag_jaguardb_chain
add_routes(app, rag_jaguardb_chain, path="/rag-jaguardb")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/rag-jaguardb/playground](http://127.0.0.1:8000/rag-jaguardb/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-jaguardb")
```
## JaguarDB Setup[](#jaguardb-setup "Direct link to JaguarDB Setup")
To utilize JaguarDB, you can use docker pull and docker run commands to quickly setup JaguarDB.
```
docker pull jaguardb/jaguardb
docker run -d -p 8888:8888 --name jaguardb jaguardb/jaguardb
```
To launch the JaguarDB client terminal to interact with JaguarDB server:
```
docker exec -it jaguardb /home/jaguar/jaguar/bin/jag
```
Another option is to download an already-built binary package of JaguarDB on Linux, and deploy the database on a single node or in a cluster of nodes. The streamlined process enables you to quickly start using JaguarDB and leverage its powerful features and functionalities. [here](http://www.jaguardb.com/download.html). | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:41.984Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-jaguardb/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-jaguardb/",
"description": "This template performs RAG using JaguarDB and OpenAI.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3742",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-jaguardb\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:41 GMT",
"etag": "W/\"5df1325a39a08e8d6a1ffe2edb683caa\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::tl469-1713753941829-bc0ed796067d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-jaguardb/",
"property": "og:url"
},
{
"content": "rag-jaguardb | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG using JaguarDB and OpenAI.",
"property": "og:description"
}
],
"title": "rag-jaguardb | 🦜️🔗 LangChain"
} | rag-jaguardb
This template performs RAG using JaguarDB and OpenAI.
Environment Setup
You should export two environment variables, one being your Jaguar URI, the other being your OpenAI API KEY. If you do not have JaguarDB set up, see the Setup Jaguar section at the bottom for instructions on how to do so.
export JAGUAR_API_KEY=...
export OPENAI_API_KEY=...
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-jaguardb
If you want to add this to an existing project, you can just run:
langchain app add rag-jaguardb
And add the following code to your server.py file:
from rag_jaguardb import chain as rag_jaguardb_chain
add_routes(app, rag_jaguardb_chain, path="/rag-jaguardb")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/rag-jaguardb/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-jaguardb")
JaguarDB Setup
To utilize JaguarDB, you can use docker pull and docker run commands to quickly setup JaguarDB.
docker pull jaguardb/jaguardb
docker run -d -p 8888:8888 --name jaguardb jaguardb/jaguardb
To launch the JaguarDB client terminal to interact with JaguarDB server:
docker exec -it jaguardb /home/jaguar/jaguar/bin/jag
Another option is to download an already-built binary package of JaguarDB on Linux, and deploy the database on a single node or in a cluster of nodes. The streamlined process enables you to quickly start using JaguarDB and leverage its powerful features and functionalities. here. |
https://python.langchain.com/docs/templates/anthropic-iterative-search/ | ## anthropic-iterative-search
This template will create a virtual research assistant with the ability to search Wikipedia to find answers to your questions.
It is heavily inspired by [this notebook](https://github.com/anthropics/anthropic-cookbook/blob/main/long_context/wikipedia-search-cookbook.ipynb).
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `ANTHROPIC_API_KEY` environment variable to access the Anthropic models.
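For example (the placeholder value is illustrative):

```
export ANTHROPIC_API_KEY=<your-anthropic-api-key>
```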
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package anthropic-iterative-search
```
If you want to add this to an existing project, you can just run:
```
langchain app add anthropic-iterative-search
```
And add the following code to your `server.py` file:
```
from anthropic_iterative_search import chain as anthropic_iterative_search_chain
add_routes(app, anthropic_iterative_search_chain, path="/anthropic-iterative-search")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/anthropic-iterative-search/playground](http://127.0.0.1:8000/anthropic-iterative-search/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/anthropic-iterative-search")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:42.137Z",
"loadedUrl": "https://python.langchain.com/docs/templates/anthropic-iterative-search/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/anthropic-iterative-search/",
"description": "This template will create a virtual research assistant with the ability to search Wikipedia to find answers to your questions.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3747",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"anthropic-iterative-search\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:41 GMT",
"etag": "W/\"07431ae56d13f94d7e427d9c2361b374\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::kt9bz-1713753941808-902c11f7adad"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/anthropic-iterative-search/",
"property": "og:url"
},
{
"content": "anthropic-iterative-search | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template will create a virtual research assistant with the ability to search Wikipedia to find answers to your questions.",
"property": "og:description"
}
],
"title": "anthropic-iterative-search | 🦜️🔗 LangChain"
} | anthropic-iterative-search
This template will create a virtual research assistant with the ability to search Wikipedia to find answers to your questions.
It is heavily inspired by this notebook.
Environment Setup
Set the ANTHROPIC_API_KEY environment variable to access the Anthropic models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package anthropic-iterative-search
If you want to add this to an existing project, you can just run:
langchain app add anthropic-iterative-search
And add the following code to your server.py file:
from anthropic_iterative_search import chain as anthropic_iterative_search_chain
add_routes(app, anthropic_iterative_search_chain, path="/anthropic-iterative-search")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server is running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/anthropic-iterative-search/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/anthropic-iterative-search") |
https://python.langchain.com/docs/templates/basic-critique-revise/ | ## basic-critique-revise
Iteratively generate schema candidates and revise them based on errors.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
This template uses OpenAI function calling, so you will need to set the `OPENAI_API_KEY` environment variable in order to use this template.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package basic-critique-revise
```
If you want to add this to an existing project, you can just run:
```
langchain app add basic-critique-revise
```
And add the following code to your `server.py` file:
```
from basic_critique_revise import chain as basic_critique_revise_chain
add_routes(app, basic_critique_revise_chain, path="/basic-critique-revise")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/basic-critique-revise/playground](http://127.0.0.1:8000/basic-critique-revise/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/basic-critique-revise")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:42.293Z",
"loadedUrl": "https://python.langchain.com/docs/templates/basic-critique-revise/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/basic-critique-revise/",
"description": "Iteratively generate schema candidates and revise them based on errors.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4935",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"basic-critique-revise\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:41 GMT",
"etag": "W/\"4751937a0d4618c5ce8fa87786141e11\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::qrh8j-1713753941913-d57d1cce0a9f"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/basic-critique-revise/",
"property": "og:url"
},
{
"content": "basic-critique-revise | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Iteratively generate schema candidates and revise them based on errors.",
"property": "og:description"
}
],
"title": "basic-critique-revise | 🦜️🔗 LangChain"
} | basic-critique-revise
Iteratively generate schema candidates and revise them based on errors.
Environment Setup
This template uses OpenAI function calling, so you will need to set the OPENAI_API_KEY environment variable in order to use this template.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U "langchain-cli[serve]"
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package basic-critique-revise
If you want to add this to an existing project, you can just run:
langchain app add basic-critique-revise
And add the following code to your server.py file:
from basic_critique_revise import chain as basic_critique_revise_chain
add_routes(app, basic_critique_revise_chain, path="/basic-critique-revise")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/basic-critique-revise/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/basic-critique-revise") |
https://python.langchain.com/docs/templates/rag-lancedb/ | ## rag-lancedb
This template performs RAG using LanceDB and OpenAI.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-lancedb
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-lancedb
```
And add the following code to your `server.py` file:
```
from rag_lancedb import chain as rag_lancedb_chain
add_routes(app, rag_lancedb_chain, path="/rag-lancedb")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/rag-lancedb/playground](http://127.0.0.1:8000/rag-lancedb/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-lancedb")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:42.472Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-lancedb/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-lancedb/",
"description": "This template performs RAG using LanceDB and OpenAI.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3742",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-lancedb\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:42 GMT",
"etag": "W/\"4de092ef8a3fb48f6a2106a4fcf8b086\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::kvzzb-1713753942170-ef531ed7f932"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-lancedb/",
"property": "og:url"
},
{
"content": "rag-lancedb | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG using LanceDB and OpenAI.",
"property": "og:description"
}
],
"title": "rag-lancedb | 🦜️🔗 LangChain"
} | rag-lancedb
This template performs RAG using LanceDB and OpenAI.
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-lancedb
If you want to add this to an existing project, you can just run:
langchain app add rag-lancedb
And add the following code to your server.py file:
from rag_lancedb import chain as rag_lancedb_chain
add_routes(app, rag_lancedb_chain, path="/rag-lancedb")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/rag-lancedb/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-lancedb") |
https://python.langchain.com/docs/templates/cassandra-synonym-caching/ | ## cassandra-synonym-caching
This template provides a simple chain template showcasing the usage of LLM Caching backed by Apache Cassandra® or Astra DB through CQL.
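Concretely, "LLM caching" here means registering a Cassandra-backed cache so that repeated prompts are served from the database instead of re-calling the model. A minimal illustrative sketch (not taken from this template's code, and assuming cassio is initialized with the Astra credentials described below) looks like:

```
import cassio
from langchain.globals import set_llm_cache
from langchain_community.cache import CassandraCache

cassio.init(token="AstraCS:...", database_id="<your-database-id>")
set_llm_cache(CassandraCache())  # identical LLM calls are now answered from the cache
```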
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
To set up your environment, you will need the following:
* an [Astra](https://astra.datastax.com/) Vector Database (free tier is fine!). **You need a [Database Administrator token](https://awesome-astra.github.io/docs/pages/astra/create-token/#c-procedure)**, in particular the string starting with `AstraCS:...`;
* likewise, get your [Database ID](https://awesome-astra.github.io/docs/pages/astra/faq/#where-should-i-find-a-database-identifier) ready, you will have to enter it below;
* an **OpenAI API Key**. (More info [here](https://cassio.org/start_here/#llm-access), note that out-of-the-box this demo supports OpenAI unless you tinker with the code.)
_Note:_ you can alternatively use a regular Cassandra cluster: to do so, make sure you provide the `USE_CASSANDRA_CLUSTER` entry as shown in `.env.template` and the subsequent environment variables to specify how to connect to it.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package cassandra-synonym-caching
```
If you want to add this to an existing project, you can just run:
```
langchain app add cassandra-synonym-caching
```
And add the following code to your `server.py` file:
```
from cassandra_synonym_caching import chain as cassandra_synonym_caching_chain
add_routes(app, cassandra_synonym_caching_chain, path="/cassandra-synonym-caching")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/cassandra-synonym-caching/playground](http://127.0.0.1:8000/cassandra-synonym-caching/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/cassandra-synonym-caching")
```
## Reference[](#reference "Direct link to Reference")
Stand-alone LangServe template repo: [here](https://github.com/hemidactylus/langserve_cassandra_synonym_caching). | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:42.814Z",
"loadedUrl": "https://python.langchain.com/docs/templates/cassandra-synonym-caching/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/cassandra-synonym-caching/",
"description": "This template provides a simple chain template showcasing the usage of LLM Caching backed by Apache Cassandra® or Astra DB through CQL.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4935",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"cassandra-synonym-caching\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:42 GMT",
"etag": "W/\"c928f2eebfba40e2d1d2b22fde9f5942\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::vrnmv-1713753942699-78148ecde9cb"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/cassandra-synonym-caching/",
"property": "og:url"
},
{
"content": "cassandra-synonym-caching | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template provides a simple chain template showcasing the usage of LLM Caching backed by Apache Cassandra® or Astra DB through CQL.",
"property": "og:description"
}
],
"title": "cassandra-synonym-caching | 🦜️🔗 LangChain"
} | cassandra-synonym-caching
This template provides a simple chain template showcasing the usage of LLM Caching backed by Apache Cassandra® or Astra DB through CQL.
Environment Setup
To set up your environment, you will need the following:
an Astra Vector Database (free tier is fine!). You need a Database Administrator token, in particular the string starting with AstraCS:...;
likewise, get your Database ID ready, you will have to enter it below;
an OpenAI API Key. (More info here, note that out-of-the-box this demo supports OpenAI unless you tinker with the code.)
Note: you can alternatively use a regular Cassandra cluster: to do so, make sure you provide the USE_CASSANDRA_CLUSTER entry as shown in .env.template and the subsequent environment variables to specify how to connect to it.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package cassandra-synonym-caching
If you want to add this to an existing project, you can just run:
langchain app add cassandra-synonym-caching
And add the following code to your server.py file:
from cassandra_synonym_caching import chain as cassandra_synonym_caching_chain
add_routes(app, cassandra_synonym_caching_chain, path="/cassandra-synonym-caching")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/cassandra-synonym-caching/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/cassandra-synonym-caching")
Reference
Stand-alone LangServe template repo: here. |
https://python.langchain.com/docs/templates/cassandra-entomology-rag/ | ## cassandra-entomology-rag
This template will perform RAG using Apache Cassandra® or Astra DB through CQL (`Cassandra` vector store class)
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
For the setup, you will require:
* an [Astra](https://astra.datastax.com/) Vector Database. You must have a [Database Administrator token](https://awesome-astra.github.io/docs/pages/astra/create-token/#c-procedure), specifically the string starting with `AstraCS:...`.
* [Database ID](https://awesome-astra.github.io/docs/pages/astra/faq/#where-should-i-find-a-database-identifier).
* an **OpenAI API Key**. (More info [here](https://cassio.org/start_here/#llm-access))
You may also use a regular Cassandra cluster. In this case, provide the `USE_CASSANDRA_CLUSTER` entry as shown in `.env.template` and the subsequent environment variables to specify how to connect to it.
The connection parameters and secrets must be provided through environment variables. Refer to `.env.template` for the required variables.
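As a rough local-development sketch (not part of the template), the secrets can be loaded and wired into the `Cassandra` vector store roughly like this; the variable names are assumptions modeled on `.env.template`, and the table name is hypothetical:

```
import os

import cassio
from dotenv import load_dotenv  # python-dotenv, convenient for local .env files
from langchain_community.vectorstores import Cassandra
from langchain_openai import OpenAIEmbeddings

load_dotenv()  # copy the entries of .env into the process environment

# Initialize the Cassandra / Astra DB session once per process.
cassio.init(
    token=os.environ["ASTRA_DB_APPLICATION_TOKEN"],  # the AstraCS:... token (assumed name)
    database_id=os.environ["ASTRA_DB_ID"],           # assumed name
)

vector_store = Cassandra(
    embedding=OpenAIEmbeddings(),
    table_name="entomology_docs",  # hypothetical table name
    session=None,                  # resolved from cassio.init()
    keyspace=None,
)
```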
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package cassandra-entomology-rag
```
If you want to add this to an existing project, you can just run:
```
langchain app add cassandra-entomology-rag
```
And add the following code to your `server.py` file:
```
from cassandra_entomology_rag import chain as cassandra_entomology_rag_chain
add_routes(app, cassandra_entomology_rag_chain, path="/cassandra-entomology-rag")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/cassandra-entomology-rag/playground](http://127.0.0.1:8000/cassandra-entomology-rag/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/cassandra-entomology-rag")
```
## Reference[](#reference "Direct link to Reference")
Stand-alone repo with LangServe chain: [here](https://github.com/hemidactylus/langserve_cassandra_entomology_rag). | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:42.893Z",
"loadedUrl": "https://python.langchain.com/docs/templates/cassandra-entomology-rag/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/cassandra-entomology-rag/",
"description": "This template will perform RAG using Apache Cassandra® or Astra DB through CQL (Cassandra vector store class)",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4935",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"cassandra-entomology-rag\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:42 GMT",
"etag": "W/\"d52b214708ab1df10dfdf7629496f73f\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::kfqs7-1713753942596-4b08fdcd2a72"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/cassandra-entomology-rag/",
"property": "og:url"
},
{
"content": "cassandra-entomology-rag | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template will perform RAG using Apache Cassandra® or Astra DB through CQL (Cassandra vector store class)",
"property": "og:description"
}
],
"title": "cassandra-entomology-rag | 🦜️🔗 LangChain"
} | cassandra-entomology-rag
This template will perform RAG using Apache Cassandra® or Astra DB through CQL (Cassandra vector store class)
Environment Setup
For the setup, you will require:
an Astra Vector Database. You must have a Database Administrator token, specifically the string starting with AstraCS:....
Database ID.
an OpenAI API Key. (More info here)
You may also use a regular Cassandra cluster. In this case, provide the USE_CASSANDRA_CLUSTER entry as shown in .env.template and the subsequent environment variables to specify how to connect to it.
The connection parameters and secrets must be provided through environment variables. Refer to .env.template for the required variables.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package cassandra-entomology-rag
If you want to add this to an existing project, you can just run:
langchain app add cassandra-entomology-rag
And add the following code to your server.py file:
from cassandra_entomology_rag import chain as cassandra_entomology_rag_chain
add_routes(app, cassandra_entomology_rag_chain, path="/cassandra-entomology-rag")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/cassandra-entomology-rag/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/cassandra-entomology-rag")
Reference
Stand-alone repo with LangServe chain: here. |
https://python.langchain.com/docs/templates/rag-momento-vector-index/ | ## rag-momento-vector-index
This template performs RAG using Momento Vector Index (MVI) and OpenAI.
> MVI: the most productive, easiest to use, serverless vector index for your data. To get started with MVI, simply sign up for an account. There's no need to handle infrastructure, manage servers, or be concerned about scaling. MVI is a service that scales automatically to meet your needs. Combine with other Momento services such as Momento Cache to cache prompts and as a session store or Momento Topics as a pub/sub system to broadcast events to your application.
To sign up and access MVI, visit the [Momento Console](https://console.gomomento.com/).
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
This template uses Momento Vector Index as a vectorstore and requires that `MOMENTO_API_KEY` and `MOMENTO_INDEX_NAME` are set.
Go to the [console](https://console.gomomento.com/) to get an API key.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
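As a quick, purely illustrative sanity check (not part of the template), you can confirm the variables are visible to the process before serving:

```
import os

required = ("MOMENTO_API_KEY", "MOMENTO_INDEX_NAME", "OPENAI_API_KEY")
missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise RuntimeError(f"Set these environment variables first: {', '.join(missing)}")
```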
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-momento-vector-index
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-momento-vector-index
```
And add the following code to your `server.py` file:
```
from rag_momento_vector_index import chain as rag_momento_vector_index_chain
add_routes(app, rag_momento_vector_index_chain, path="/rag-momento-vector-index")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/rag-momento-vector-index/playground](http://127.0.0.1:8000/rag-momento-vector-index/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-momento-vector-index")
```
## Indexing Data[](#indexing-data "Direct link to Indexing Data")
We have included a sample module to index data. That is available at `rag_momento_vector_index/ingest.py`. You will see a commented out line in `chain.py` that invokes this. Uncomment to use. | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:43.095Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-momento-vector-index/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-momento-vector-index/",
"description": "This template performs RAG using Momento Vector Index (MVI) and OpenAI.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3743",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-momento-vector-index\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:42 GMT",
"etag": "W/\"6ea80d5f2cf714663fe637c757bdf6fe\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::qvg7r-1713753942900-7a3bda5886b3"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-momento-vector-index/",
"property": "og:url"
},
{
"content": "rag-momento-vector-index | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG using Momento Vector Index (MVI) and OpenAI.",
"property": "og:description"
}
],
"title": "rag-momento-vector-index | 🦜️🔗 LangChain"
} | rag-momento-vector-index
This template performs RAG using Momento Vector Index (MVI) and OpenAI.
MVI: the most productive, easiest to use, serverless vector index for your data. To get started with MVI, simply sign up for an account. There's no need to handle infrastructure, manage servers, or be concerned about scaling. MVI is a service that scales automatically to meet your needs. Combine with other Momento services such as Momento Cache to cache prompts and as a session store or Momento Topics as a pub/sub system to broadcast events to your application.
To sign up and access MVI, visit the Momento Console.
Environment Setup
This template uses Momento Vector Index as a vectorstore and requires that MOMENTO_API_KEY, and MOMENTO_INDEX_NAME are set.
Go to the console to get an API key.
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-momento-vector-index
If you want to add this to an existing project, you can just run:
langchain app add rag-momento-vector-index
And add the following code to your server.py file:
from rag_momento_vector_index import chain as rag_momento_vector_index_chain
add_routes(app, rag_momento_vector_index_chain, path="/rag-momento-vector-index")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/rag-momento-vector-index/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-momento-vector-index")
Indexing Data
We have included a sample module to index data. That is available at rag_momento_vector_index/ingest.py. You will see a commented out line in chain.py that invokes this. Uncomment to use. |
https://python.langchain.com/docs/templates/rag-matching-engine/ | ## rag-matching-engine
This template performs RAG using Google Cloud Platform's Vertex AI with the matching engine.
It will utilize a previously created index to retrieve relevant documents or contexts based on user-provided questions.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
An index should be created before running the code.
The process to create this index can be found [here](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/language/use-cases/document-qa/question_answering_documents_langchain_matching_engine.ipynb).
Environment variables for Vertex should be set:
```
PROJECT_ID
ME_REGION
GCS_BUCKET
ME_INDEX_ID
ME_ENDPOINT_ID
```
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-matching-engine
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-matching-engine
```
And add the following code to your `server.py` file:
```
from rag_matching_engine import chain as rag_matching_engine_chain
add_routes(app, rag_matching_engine_chain, path="/rag-matching-engine")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/rag-matching-engine/playground](http://127.0.0.1:8000/rag-matching-engine/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-matching-engine")
```
For more details on how to connect to the template, refer to the Jupyter notebook `rag_matching_engine`. | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:43.169Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-matching-engine/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-matching-engine/",
"description": "This template performs RAG using Google Cloud Platform's Vertex AI with the matching engine.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3743",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-matching-engine\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:42 GMT",
"etag": "W/\"19932de04ed1fec7db20531b1421dd23\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::dkdrz-1713753942899-fb9d96284b6c"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-matching-engine/",
"property": "og:url"
},
{
"content": "rag-matching-engine | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG using Google Cloud Platform's Vertex AI with the matching engine.",
"property": "og:description"
}
],
"title": "rag-matching-engine | 🦜️🔗 LangChain"
} | rag-matching-engine
This template performs RAG using Google Cloud Platform's Vertex AI with the matching engine.
It will utilize a previously created index to retrieve relevant documents or contexts based on user-provided questions.
Environment Setup
An index should be created before running the code.
The process to create this index can be found here.
Environment variables for Vertex should be set:
PROJECT_ID
ME_REGION
GCS_BUCKET
ME_INDEX_ID
ME_ENDPOINT_ID
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-matching-engine
If you want to add this to an existing project, you can just run:
langchain app add rag-matching-engine
And add the following code to your server.py file:
from rag_matching_engine import chain as rag_matching_engine_chain
add_routes(app, rag_matching_engine_chain, path="/rag-matching-engine")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/rag-matching-engine/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-matching-engine")
For more details on how to connect to the template, refer to the Jupyter notebook rag_matching_engine. |
https://python.langchain.com/docs/templates/chain-of-note-wiki/ | ## Chain-of-Note (Wikipedia)
Implements Chain-of-Note as described in [https://arxiv.org/pdf/2311.09210.pdf](https://arxiv.org/pdf/2311.09210.pdf) by Yu, et al. Uses Wikipedia for retrieval.
Check out the prompt being used here [https://smith.langchain.com/hub/bagatur/chain-of-note-wiki](https://smith.langchain.com/hub/bagatur/chain-of-note-wiki).
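For orientation, here is a minimal sketch of how those pieces could be wired together; the input key (`question`) and the way passages are passed in are assumptions, since the hub prompt defines the real variable names:

```
from langchain import hub
from langchain_anthropic import ChatAnthropic
from langchain_community.retrievers import WikipediaRetriever
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

prompt = hub.pull("bagatur/chain-of-note-wiki")        # the prompt linked above
llm = ChatAnthropic(model="claude-3-sonnet-20240229")  # reads ANTHROPIC_API_KEY
retriever = WikipediaRetriever()                       # Wikipedia as the retrieval source

# Retrieve passages for the question, take notes over them, then answer.
chain = (
    {"passages": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

# chain.invoke("What is the tallest mountain in New Zealand?")
```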
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Uses Anthropic claude-3-sonnet-20240229 chat model. Set Anthropic API key:
```
export ANTHROPIC_API_KEY="..."
```
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package chain-of-note-wiki
```
If you want to add this to an existing project, you can just run:
```
langchain app add chain-of-note-wiki
```
And add the following code to your `server.py` file:
```
from chain_of_note_wiki import chain as chain_of_note_wiki_chain
add_routes(app, chain_of_note_wiki_chain, path="/chain-of-note-wiki")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/chain-of-note-wiki/playground](http://127.0.0.1:8000/chain-of-note-wiki/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/chain-of-note-wiki")
```
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:43.268Z",
"loadedUrl": "https://python.langchain.com/docs/templates/chain-of-note-wiki/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/chain-of-note-wiki/",
"description": "Implements Chain-of-Note as described in https://arxiv.org/pdf/2311.09210.pdf by Yu, et al. Uses Wikipedia for retrieval.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"chain-of-note-wiki\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:43 GMT",
"etag": "W/\"93e32b9381f8036b100af2045415b2b7\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::8vjpf-1713753943018-b85433dcd6b3"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/chain-of-note-wiki/",
"property": "og:url"
},
{
"content": "Chain-of-Note (Wikipedia) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Implements Chain-of-Note as described in https://arxiv.org/pdf/2311.09210.pdf by Yu, et al. Uses Wikipedia for retrieval.",
"property": "og:description"
}
],
"title": "Chain-of-Note (Wikipedia) | 🦜️🔗 LangChain"
} | Chain-of-Note (Wikipedia)
Implements Chain-of-Note as described in https://arxiv.org/pdf/2311.09210.pdf by Yu, et al. Uses Wikipedia for retrieval.
Check out the prompt being used here https://smith.langchain.com/hub/bagatur/chain-of-note-wiki.
Environment Setup
Uses Anthropic claude-3-sonnet-20240229 chat model. Set Anthropic API key:
export ANTHROPIC_API_KEY="..."
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U "langchain-cli[serve]"
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package chain-of-note-wiki
If you want to add this to an existing project, you can just run:
langchain app add chain-of-note-wiki
And add the following code to your server.py file:
from chain_of_note_wiki import chain as chain_of_note_wiki_chain
add_routes(app, chain_of_note_wiki_chain, path="/chain-of-note-wiki")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/chain-of-note-wiki/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/chain-of-note-wiki") |
https://python.langchain.com/docs/templates/rag-lantern/ | ## rag\_lantern
This template performs RAG with Lantern.
[Lantern](https://lantern.dev/) is an open-source vector database built on top of [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL). It enables vector search and embedding generation inside your database.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
To get your `OPENAI_API_KEY`, navigate to [API keys](https://platform.openai.com/account/api-keys) on your OpenAI account and create a new secret key.
To find your `LANTERN_URL` and `LANTERN_SERVICE_KEY`, head to your Lantern project's [API settings](https://lantern.dev/dashboard/project/_/settings/api).
* `LANTERN_URL` corresponds to the Project URL
* `LANTERN_SERVICE_KEY` corresponds to the `service_role` API key
```
export LANTERN_URL=
export LANTERN_SERVICE_KEY=
export OPENAI_API_KEY=
```
## Setup Lantern Database[](#setup-lantern-database "Direct link to Setup Lantern Database")
Use these steps to set up your Lantern database if you haven't already.
1. Head to [https://lantern.dev](https://lantern.dev/) to create your Lantern database.
2. In your favorite SQL client, jump to the SQL editor and run the following script to set up your database as a vector store:
```
-- Create a table to store your documents
create table
  documents (
    id uuid primary key,
    content text, -- corresponds to Document.pageContent
    metadata jsonb, -- corresponds to Document.metadata
    embedding REAL[1536] -- 1536 works for OpenAI embeddings, change as needed
  );

-- Create a function to search for documents
create function match_documents (
  query_embedding REAL[1536],
  filter jsonb default '{}'
) returns table (
  id uuid,
  content text,
  metadata jsonb,
  similarity float
) language plpgsql as $$
#variable_conflict use_column
begin
  return query
  select
    id,
    content,
    metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where metadata @> filter
  order by documents.embedding <=> query_embedding;
end;
$$;
```
## Setup Environment Variables[](#setup-environment-variables "Direct link to Setup Environment Variables")
Since we are using [`Lantern`](https://python.langchain.com/docs/integrations/vectorstores/lantern) and [`OpenAIEmbeddings`](https://python.langchain.com/docs/integrations/text_embedding/openai), we need to load their API keys.
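For instance (illustrative only), once the keys above are exported, the embeddings side can be constructed directly; `OpenAIEmbeddings` reads `OPENAI_API_KEY` on its own and by default produces the 1536-dimensional vectors the table schema above expects:

```
import os

from langchain_openai import OpenAIEmbeddings

# The template reads the Lantern connection details from the environment as well.
assert os.environ.get("LANTERN_URL") and os.environ.get("LANTERN_SERVICE_KEY")

embeddings = OpenAIEmbeddings()  # defaults to a 1536-dimensional embedding model
```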
## Usage[](#usage "Direct link to Usage")
First, install the LangChain CLI:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-lantern
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-lantern
```
And add the following code to your `server.py` file:
```
from rag_lantern.chain import chain as rag_lantern_chain
add_routes(app, rag_lantern_chain, path="/rag-lantern")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/rag-lantern/playground](http://127.0.0.1:8000/rag-lantern/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-lantern")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:43.627Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-lantern/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-lantern/",
"description": "This template performs RAG with Lantern.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4921",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-lantern\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:43 GMT",
"etag": "W/\"82777a0c9c33e42928debd4095990d0e\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::5lwwz-1713753943439-11eae86e754a"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-lantern/",
"property": "og:url"
},
{
"content": "rag_lantern | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG with Lantern.",
"property": "og:description"
}
],
"title": "rag_lantern | 🦜️🔗 LangChain"
} | rag_lantern
This template performs RAG with Lantern.
Lantern is an open-source vector database built on top of PostgreSQL. It enables vector search and embedding generation inside your database.
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
To get your OPENAI_API_KEY, navigate to API keys on your OpenAI account and create a new secret key.
To find your LANTERN_URL and LANTERN_SERVICE_KEY, head to your Lantern project's API settings.
LANTERN_URL corresponds to the Project URL
LANTERN_SERVICE_KEY corresponds to the service_role API key
export LANTERN_URL=
export LANTERN_SERVICE_KEY=
export OPENAI_API_KEY=
Setup Lantern Database
Use these steps to setup your Lantern database if you haven't already.
Head to https://lantern.dev to create your Lantern database.
In your favorite SQL client, jump to the SQL editor and run the following script to setup your database as a vector store:
-- Create a table to store your documents
create table
documents (
id uuid primary key,
content text, -- corresponds to Document.pageContent
metadata jsonb, -- corresponds to Document.metadata
embedding REAL[1536] -- 1536 works for OpenAI embeddings, change as needed
);
-- Create a function to search for documents
create function match_documents (
query_embedding REAL[1536],
filter jsonb default '{}'
) returns table (
id uuid,
content text,
metadata jsonb,
similarity float
) language plpgsql as $$
#variable_conflict use_column
begin
return query
select
id,
content,
metadata,
1 - (documents.embedding <=> query_embedding) as similarity
from documents
where metadata @> filter
order by documents.embedding <=> query_embedding;
end;
$$;
Setup Environment Variables
Since we are using Lantern and OpenAIEmbeddings, we need to load their API keys.
Usage
First, install the LangChain CLI:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-lantern
If you want to add this to an existing project, you can just run:
langchain app add rag-lantern
And add the following code to your server.py file:
from rag_lantern.chain import chain as rag_lantern_chain
add_routes(app, rag_lantern_chain, path="/rag-lantern")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/rag-lantern/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-lantern") |
https://python.langchain.com/docs/templates/chat-bot-feedback/ | ## Chat Bot Feedback Template
This template shows how to evaluate your chat bot without explicit user feedback. It defines a simple chat bot in [chain.py](https://github.com/langchain-ai/langchain/blob/master/templates/chat-bot-feedback/chat_bot_feedback/chain.py) and a custom evaluator that scores bot response effectiveness based on the subsequent user response. You can apply this run evaluator to your own chat bot by calling `with_config` on the chat bot before serving. You can also directly deploy your chat app using this template.
[Chat bots](https://python.langchain.com/docs/use_cases/chatbots) are one of the most common interfaces for deploying LLMs. The quality of chat bots varies, making continuous development important. But users are loath to leave explicit feedback through mechanisms like thumbs-up or thumbs-down buttons. Furthermore, traditional analytics such as "session length" or "conversation length" often lack clarity. However, multi-turn conversations with a chat bot can provide a wealth of information, which we can transform into metrics for fine-tuning, evaluation, and product analytics.
Taking [Chat Langchain](https://chat.langchain.com/) as a case study, only about 0.04% of all queries receive explicit feedback. Yet, approximately 70% of the queries are follow-ups to previous questions. A significant portion of these follow-up queries contain useful information we can use to infer the quality of the previous AI response.
This template helps solve this "feedback scarcity" problem. Below is an example invocation of this chat bot:
When the user responds to this ([link](https://smith.langchain.com/public/a7e2df54-4194-455d-9978-cecd8be0df1e/r)), the response evaluator is invoked, resulting in the following evaluation run:
As shown, the evaluator sees that the user is increasingly frustrated, indicating that the prior response was not effective.
## LangSmith Feedback[](#langsmith-feedback "Direct link to LangSmith Feedback")
[LangSmith](https://smith.langchain.com/) is a platform for building production-grade LLM applications. Beyond its debugging and offline evaluation features, LangSmith helps you capture both user and model-assisted feedback to refine your LLM application. This template uses an LLM to generate feedback for your application, which you can use to continuously improve your service. For more examples on collecting feedback using LangSmith, consult the [documentation](https://docs.smith.langchain.com/cookbook/feedback-examples).
## Evaluator Implementation[](#evaluator-implementation "Direct link to Evaluator Implementation")
The user feedback is inferred by a custom `RunEvaluator`. This evaluator is called using the `EvaluatorCallbackHandler`, which runs it in a separate thread to avoid interfering with the chat bot's runtime. You can use this custom evaluator on any compatible chat bot by calling the following function on your LangChain object:
```
my_chain.with_config(
    callbacks=[
        EvaluatorCallbackHandler(
            evaluators=[
                ResponseEffectivenessEvaluator(evaluate_response_effectiveness)
            ]
        )
    ],
)
```
The evaluator instructs an LLM, specifically `gpt-3.5-turbo`, to evaluate the AI's most recent chat message based on the user's followup response. It generates a score and accompanying reasoning that is converted to feedback in LangSmith, applied to the value provided as the `last_run_id`.
The prompt used within the LLM [is available on the hub](https://smith.langchain.com/hub/wfh/response-effectiveness). Feel free to customize it with things like additional app context (such as the goal of the app or the types of questions it should respond to) or "symptoms" you'd like the LLM to focus on. This evaluator also utilizes OpenAI's function-calling API to ensure a more consistent, structured output for the grade.
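Conceptually, the feedback the evaluator logs is roughly equivalent to a manual `create_feedback` call against the graded run; the key name, score, and run ID below are placeholders, and the template performs this step for you:

```
from langsmith import Client

client = Client()  # reads your LangSmith API key from the environment
client.create_feedback(
    run_id="<last_run_id of the response being graded>",  # placeholder
    key="response_effectiveness",                         # hypothetical feedback key
    score=0.2,
    comment="User repeated the question and sounds increasingly frustrated.",
)
```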
## Environment Variables[](#environment-variables "Direct link to Environment Variables")
Ensure that `OPENAI_API_KEY` is set to use OpenAI models. Also, configure LangSmith by setting your `LANGSMITH_API_KEY`.
```
export OPENAI_API_KEY=sk-...
export LANGSMITH_API_KEY=...
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_PROJECT=my-project # Set to the project you want to save to
```
## Usage[](#usage "Direct link to Usage")
If deploying via `LangServe`, we recommend configuring the server to return callback events as well. This will ensure the backend traces are included in whatever traces you generate using the `RemoteRunnable`.
```
from chat_bot_feedback.chain import chain
add_routes(app, chain, path="/chat-bot-feedback", include_callback_events=True)
```
With the server running, you can use the following code snippet to stream the chat bot responses for a 2 turn conversation.
```
from functools import partial
from typing import Dict, Optional, Callable, List

from langserve import RemoteRunnable
from langchain.callbacks.manager import tracing_v2_enabled
from langchain_core.messages import BaseMessage, AIMessage, HumanMessage

# Update with the URL provided by your LangServe server
chain = RemoteRunnable("http://127.0.0.1:8031/chat-bot-feedback")


def stream_content(
    text: str,
    chat_history: Optional[List[BaseMessage]] = None,
    last_run_id: Optional[str] = None,
    on_chunk: Callable = None,
):
    results = []
    with tracing_v2_enabled() as cb:
        for chunk in chain.stream(
            {"text": text, "chat_history": chat_history, "last_run_id": last_run_id},
        ):
            on_chunk(chunk)
            results.append(chunk)
        last_run_id = cb.latest_run.id if cb.latest_run else None
    return last_run_id, "".join(results)


chat_history = []
text = "Where are my keys?"
last_run_id, response_message = stream_content(text, on_chunk=partial(print, end=""))
print()
chat_history.extend([HumanMessage(content=text), AIMessage(content=response_message)])
text = "I CAN'T FIND THEM ANYWHERE"  # The previous response will likely receive a low score,
# as the user's frustration appears to be escalating.
last_run_id, response_message = stream_content(
    text,
    chat_history=chat_history,
    last_run_id=str(last_run_id),
    on_chunk=partial(print, end=""),
)
print()
chat_history.extend([HumanMessage(content=text), AIMessage(content=response_message)])
```
This uses the `tracing_v2_enabled` callback manager to get the run ID of the call, which we provide in subsequent calls in the same chat thread, so the evaluator can assign feedback to the appropriate trace.
## Conclusion[](#conclusion "Direct link to Conclusion")
This template provides a simple chat bot definition you can directly deploy using LangServe. It defines a custom evaluator to log evaluation feedback for the bot without any explicit user ratings. This is an effective way to augment your analytics and to better select data points for fine-tuning and evaluation. | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:44.318Z",
"loadedUrl": "https://python.langchain.com/docs/templates/chat-bot-feedback/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/chat-bot-feedback/",
"description": "This template shows how to evaluate your chat bot without explicit user feedback. It defines a simple chat bot in chain.py and custom evaluator that scores bot response effectiveness based on the subsequent user response. You can apply this run evaluator to your own chat bot by calling with_config on the chat bot before serving. You can also directly deploy your chat app using this template.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4936",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"chat-bot-feedback\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:44 GMT",
"etag": "W/\"11dd3bf1f68fd1ff380e6a00deb6992d\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::9xzlr-1713753944199-4ac641b7dd7a"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/chat-bot-feedback/",
"property": "og:url"
},
{
"content": "Chat Bot Feedback Template | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template shows how to evaluate your chat bot without explicit user feedback. It defines a simple chat bot in chain.py and custom evaluator that scores bot response effectiveness based on the subsequent user response. You can apply this run evaluator to your own chat bot by calling with_config on the chat bot before serving. You can also directly deploy your chat app using this template.",
"property": "og:description"
}
],
"title": "Chat Bot Feedback Template | 🦜️🔗 LangChain"
} | Chat Bot Feedback Template
This template shows how to evaluate your chat bot without explicit user feedback. It defines a simple chat bot in chain.py and a custom evaluator that scores bot response effectiveness based on the subsequent user response. You can apply this run evaluator to your own chat bot by calling with_config on the chat bot before serving. You can also directly deploy your chat app using this template.
Chat bots are one of the most common interfaces for deploying LLMs. The quality of chat bots varies, making continuous development important. But users are loath to leave explicit feedback through mechanisms like thumbs-up or thumbs-down buttons. Furthermore, traditional analytics such as "session length" or "conversation length" often lack clarity. However, multi-turn conversations with a chat bot can provide a wealth of information, which we can transform into metrics for fine-tuning, evaluation, and product analytics.
Taking Chat Langchain as a case study, only about 0.04% of all queries receive explicit feedback. Yet, approximately 70% of the queries are follow-ups to previous questions. A significant portion of these follow-up queries contain useful information we can use to infer the quality of the previous AI response.
This template helps solve this "feedback scarcity" problem. Below is an example invocation of this chat bot:
When the user responds to this (link), the response evaluator is invoked, resulting in the following evaluation run:
As shown, the evaluator sees that the user is increasingly frustrated, indicating that the prior response was not effective.
LangSmith Feedback
LangSmith is a platform for building production-grade LLM applications. Beyond its debugging and offline evaluation features, LangSmith helps you capture both user and model-assisted feedback to refine your LLM application. This template uses an LLM to generate feedback for your application, which you can use to continuously improve your service. For more examples on collecting feedback using LangSmith, consult the documentation.
Evaluator Implementation
The user feedback is inferred by a custom RunEvaluator. This evaluator is called using the EvaluatorCallbackHandler, which runs it in a separate thread to avoid interfering with the chat bot's runtime. You can use this custom evaluator on any compatible chat bot by calling the following function on your LangChain object:
my_chain.with_config(
callbacks=[
EvaluatorCallbackHandler(
evaluators=[
ResponseEffectivenessEvaluator(evaluate_response_effectiveness)
]
)
],
)
The evaluator instructs an LLM, specifically gpt-3.5-turbo, to evaluate the AI's most recent chat message based on the user's followup response. It generates a score and accompanying reasoning that is converted to feedback in LangSmith, applied to the value provided as the last_run_id.
The prompt used within the LLM is available on the hub. Feel free to customize it with things like additional app context (such as the goal of the app or the types of questions it should respond to) or "symptoms" you'd like the LLM to focus on. This evaluator also utilizes OpenAI's function-calling API to ensure a more consistent, structured output for the grade.
Environment Variables
Ensure that OPENAI_API_KEY is set to use OpenAI models. Also, configure LangSmith by setting your LANGSMITH_API_KEY.
export OPENAI_API_KEY=sk-...
export LANGSMITH_API_KEY=...
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_PROJECT=my-project # Set to the project you want to save to
Usage
If deploying via LangServe, we recommend configuring the server to return callback events as well. This will ensure the backend traces are included in whatever traces you generate using the RemoteRunnable.
from chat_bot_feedback.chain import chain
add_routes(app, chain, path="/chat-bot-feedback", include_callback_events=True)
With the server running, you can use the following code snippet to stream the chat bot responses for a 2 turn conversation.
from functools import partial
from typing import Dict, Optional, Callable, List
from langserve import RemoteRunnable
from langchain.callbacks.manager import tracing_v2_enabled
from langchain_core.messages import BaseMessage, AIMessage, HumanMessage
# Update with the URL provided by your LangServe server
chain = RemoteRunnable("http://127.0.0.1:8031/chat-bot-feedback")
def stream_content(
text: str,
chat_history: Optional[List[BaseMessage]] = None,
last_run_id: Optional[str] = None,
on_chunk: Callable = None,
):
results = []
with tracing_v2_enabled() as cb:
for chunk in chain.stream(
{"text": text, "chat_history": chat_history, "last_run_id": last_run_id},
):
on_chunk(chunk)
results.append(chunk)
last_run_id = cb.latest_run.id if cb.latest_run else None
return last_run_id, "".join(results)
chat_history = []
text = "Where are my keys?"
last_run_id, response_message = stream_content(text, on_chunk=partial(print, end=""))
print()
chat_history.extend([HumanMessage(content=text), AIMessage(content=response_message)])
text = "I CAN'T FIND THEM ANYWHERE" # The previous response will likely receive a low score,
# as the user's frustration appears to be escalating.
last_run_id, response_message = stream_content(
text,
chat_history=chat_history,
last_run_id=str(last_run_id),
on_chunk=partial(print, end=""),
)
print()
chat_history.extend([HumanMessage(content=text), AIMessage(content=response_message)])
This uses the tracing_v2_enabled callback manager to get the run ID of the call, which we provide in subsequent calls in the same chat thread, so the evaluator can assign feedback to the appropriate trace.
Conclusion
This template provides a simple chat bot definition you can directly deploy using LangServe. It defines a custom evaluator to log evaluation feedback for the bot without any explicit user ratings. This is an effective way to augment your analytics and to better select data points for fine-tuning and evaluation. |
https://python.langchain.com/docs/templates/rag-mongo/ | ## rag-mongo
This template performs RAG using MongoDB and OpenAI.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
You should export two environment variables, one being your MongoDB URI, the other being your OpenAI API key. If you do not have a MongoDB URI, see the `MongoDB Setup` section at the bottom for instructions on how to do so.
```
export MONGO_URI=...
export OPENAI_API_KEY=...
```
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-mongo
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-mongo
```
And add the following code to your `server.py` file:
```
from rag_mongo import chain as rag_mongo_chain
add_routes(app, rag_mongo_chain, path="/rag-mongo")
```
If you want to set up an ingestion pipeline, you can add the following code to your `server.py` file:
```
from rag_mongo import ingest as rag_mongo_ingest
add_routes(app, rag_mongo_ingest, path="/rag-mongo-ingest")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you DO NOT already have a Mongo Search Index you want to connect to, see `MongoDB Setup` section below before proceeding.
If you DO have a MongoDB Search index you want to connect to, edit the connection details in `rag_mongo/chain.py`
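As a rough sketch of what those connection details amount to (the namespace and index name below are placeholders, not the template's actual values):

```
import os

from langchain_community.vectorstores import MongoDBAtlasVectorSearch
from langchain_openai import OpenAIEmbeddings

# Connect to an existing Atlas collection and its vector search index.
vectorstore = MongoDBAtlasVectorSearch.from_connection_string(
    os.environ["MONGO_URI"],
    "langchain.test",      # "<database>.<collection>" namespace (placeholder)
    OpenAIEmbeddings(),    # 1536-dim vectors, matching the index mapping shown below
    index_name="default",  # placeholder index name
)
docs = vectorstore.similarity_search("What is covered in the ingested documents?")
```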
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/rag-mongo/playground](http://127.0.0.1:8000/rag-mongo/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-mongo")
```
For additional context, please refer to [this notebook](https://colab.research.google.com/drive/1cr2HBAHyBmwKUerJq2if0JaNhy-hIq7I#scrollTo=TZp7_CBfxTOB).
## MongoDB Setup[](#mongodb-setup "Direct link to MongoDB Setup")
Use this step if you need to set up your MongoDB account and ingest data. We will first follow the standard MongoDB Atlas setup instructions [here](https://www.mongodb.com/docs/atlas/getting-started/).
1. Create an account (if not already done)
2. Create a new project (if not already done)
3. Locate your MongoDB URI.
This can be done by going to the deployment overview page and connecting to your database
We then look at the drivers available
Among which we will see our URI listed
Let's then set that as an environment variable locally:
4. Let's also set an environment variable for OpenAI (which we will use as an LLM)
```
export OPENAI_API_KEY=...
```
5. Let's now ingest some data! We can do that by moving into this directory and running the code in `ingest.py`, e.g.:
Note that you can (and should!) change this to ingest data of your choice
6. We now need to set up a vector index on our data.
We can first connect to the cluster where our database lives
We can then navigate to where all our collections are listed
We can then find the collection we want and look at the search indexes for that collection
That should likely be empty, and we want to create a new one:
We will use the JSON editor to create it
And we will paste the following JSON in:
```
{ "mappings": { "dynamic": true, "fields": { "embedding": { "dimensions": 1536, "similarity": "cosine", "type": "knnVector" } } } }
```
From there, hit "Next" and then "Create Search Index". It will take a little bit but you should then have an index over your data! | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:44.866Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-mongo/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-mongo/",
"description": "This template performs RAG using MongoDB and OpenAI.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3745",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-mongo\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:44 GMT",
"etag": "W/\"a7eaf87975566a3f8a1b3f31aecec084\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::6lnrd-1713753944794-7fe927f98e91"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-mongo/",
"property": "og:url"
},
{
"content": "rag-mongo | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG using MongoDB and OpenAI.",
"property": "og:description"
}
],
"title": "rag-mongo | 🦜️🔗 LangChain"
} | rag-mongo
This template performs RAG using MongoDB and OpenAI.
Environment Setup
You should export two environment variables, one being your MongoDB URI, the other being your OpenAI API KEY. If you do not have a MongoDB URI, see the Setup Mongo section at the bottom for instructions on how to do so.
export MONGO_URI=...
export OPENAI_API_KEY=...
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-mongo
If you want to add this to an existing project, you can just run:
langchain app add rag-mongo
And add the following code to your server.py file:
from rag_mongo import chain as rag_mongo_chain
add_routes(app, rag_mongo_chain, path="/rag-mongo")
If you want to set up an ingestion pipeline, you can add the following code to your server.py file:
from rag_mongo import ingest as rag_mongo_ingest
add_routes(app, rag_mongo_ingest, path="/rag-mongo-ingest")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you DO NOT already have a Mongo Search Index you want to connect to, see MongoDB Setup section below before proceeding.
If you DO have a MongoDB Search index you want to connect to, edit the connection details in rag_mongo/chain.py
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/rag-mongo/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-mongo")
For additional context, please refer to this notebook.
MongoDB Setup
Use this step if you need to setup your MongoDB account and ingest data. We will first follow the standard MongoDB Atlas setup instructions here.
Create an account (if not already done)
Create a new project (if not already done)
Locate your MongoDB URI.
This can be done by going to the deployment overview page and connecting to your database
We then look at the drivers available, among which we will see our URI listed
Let's then set that as an environment variable locally:
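export MONGO_URI=...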
Let's also set an environment variable for OpenAI (which we will use as an LLM)
export OPENAI_API_KEY=...
Let's now ingest some data! We can do that by moving into this directory and running the code in ingest.py, eg:
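python ingest.py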
Note that you can (and should!) change this to ingest data of your choice
We now need to set up a vector index on our data.
We can first connect to the cluster where our database lives
We can then navigate to where all our collections are listed
We can then find the collection we want and look at the search indexes for that collection
That should likely be empty, and we want to create a new one:
We will use the JSON editor to create it
And we will paste the following JSON in:
{
"mappings": {
"dynamic": true,
"fields": {
"embedding": {
"dimensions": 1536,
"similarity": "cosine",
"type": "knnVector"
}
}
}
}
From there, hit "Next" and then "Create Search Index". It will take a little bit but you should then have an index over your data! |
https://python.langchain.com/docs/templates/cohere-librarian/ | ## cohere-librarian
This template turns Cohere into a librarian.
It demonstrates the use of a router to switch between chains that can handle different things: a vector database with Cohere embeddings; a chat bot that has a prompt with some information about the library; and finally a RAG chatbot that has access to the internet.
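As a rough sketch of that routing idea (stand-in chains and field names, not the template's actual code in `cohere_librarian/`), such a router can be expressed with `RunnableBranch`:

```
from langchain_core.runnables import RunnableBranch, RunnableLambda

# Stand-ins for the template's real sub-chains (Cohere-embedding retriever,
# library-info chat bot, and internet RAG chatbot).
book_chain = RunnableLambda(lambda x: "book recommendation answer")
library_chain = RunnableLambda(lambda x: "library info answer")
internet_rag_chain = RunnableLambda(lambda x: "internet RAG answer")

librarian = RunnableBranch(
    (lambda x: x["topic"] == "books", book_chain),
    (lambda x: x["topic"] == "library", library_chain),
    internet_rag_chain,  # default branch
)

librarian.invoke({"topic": "books", "question": "Recommend a mystery novel"})
```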
For a fuller demo of the book recommendation, consider replacing books\_with\_blurbs.csv with a larger sample from the following dataset: [https://www.kaggle.com/datasets/jdobrow/57000-books-with-metadata-and-blurbs/](https://www.kaggle.com/datasets/jdobrow/57000-books-with-metadata-and-blurbs/).
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `COHERE_API_KEY` environment variable to access the Cohere models.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package cohere-librarian
```
If you want to add this to an existing project, you can just run:
```
langchain app add cohere-librarian
```
And add the following code to your `server.py` file:
```
from cohere_librarian.chain import chain as cohere_librarian_chain
add_routes(app, cohere_librarian_chain, path="/cohere-librarian")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly with `langchain serve`.
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://localhost:8000/docs](http://localhost:8000/docs). We can access the playground at [http://localhost:8000/cohere-librarian/playground](http://localhost:8000/cohere-librarian/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/cohere-librarian")
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:45.541Z",
"loadedUrl": "https://python.langchain.com/docs/templates/cohere-librarian/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/cohere-librarian/",
"description": "This template turns Cohere into a librarian.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4937",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"cohere-librarian\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:45 GMT",
"etag": "W/\"9af980ccf591f63ee6b5e306fa534b47\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::vrnmv-1713753945427-d54b093cf52d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/cohere-librarian/",
"property": "og:url"
},
{
"content": "cohere-librarian | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template turns Cohere into a librarian.",
"property": "og:description"
}
],
"title": "cohere-librarian | 🦜️🔗 LangChain"
} | cohere-librarian
This template turns Cohere into a librarian.
It demonstrates the use of a router to switch between chains that can handle different things: a vector database with Cohere embeddings; a chat bot that has a prompt with some information about the library; and finally a RAG chatbot that has access to the internet.
For a fuller demo of the book recommendation, consider replacing books_with_blurbs.csv with a larger sample from the following dataset: https://www.kaggle.com/datasets/jdobrow/57000-books-with-metadata-and-blurbs/.
Environment Setup
Set the COHERE_API_KEY environment variable to access the Cohere models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package cohere-librarian
If you want to add this to an existing project, you can just run:
langchain app add cohere-librarian
And add the following code to your server.py file:
from cohere_librarian.chain import chain as cohere_librarian_chain
add_routes(app, cohere_librarian_chain, path="/cohere-librarian")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly with langchain serve.
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://localhost:8000/docs. We can access the playground at http://localhost:8000/cohere-librarian/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/cohere-librarian")
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/templates/rag-multi-index-fusion/ | A QA application that queries multiple domain-specific retrievers and selects the most relevant documents from across all retrieved results.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
This application queries PubMed, ArXiv, Wikipedia, and [Kay AI](https://www.kay.ai/) (for SEC filings).
You will need to create a free Kay AI account and [get your API key here](https://www.kay.ai/). Then set environment variable:
```
export KAY_API_KEY="<YOUR_API_KEY>"
```
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-multi-index-fusion
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-multi-index-fusion
```
And add the following code to your `server.py` file:
```
from rag_multi_index_fusion import chain as rag_multi_index_fusion_chain
add_routes(app, rag_multi_index_fusion_chain, path="/rag-multi-index-fusion")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly with `langchain serve`.
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/rag-multi-index-fusion/playground](http://127.0.0.1:8000/rag-multi-index-fusion/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-multi-index-fusion")
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:46.276Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-multi-index-fusion/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-multi-index-fusion/",
"description": "A QA application that queries multiple domain-specific retrievers and selects the most relevant documents from across all retrieved results.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4922",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-multi-index-fusion\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:46 GMT",
"etag": "W/\"1b39be516924e7f507db8f0a15700dab\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::9xzlr-1713753946161-07a844b636ed"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-multi-index-fusion/",
"property": "og:url"
},
{
"content": "RAG with Multiple Indexes (Fusion) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "A QA application that queries multiple domain-specific retrievers and selects the most relevant documents from across all retrieved results.",
"property": "og:description"
}
],
"title": "RAG with Multiple Indexes (Fusion) | 🦜️🔗 LangChain"
} | A QA application that queries multiple domain-specific retrievers and selects the most relevant documents from across all retrieved results.
Environment Setup
This application queries PubMed, ArXiv, Wikipedia, and Kay AI (for SEC filings).
You will need to create a free Kay AI account and get your API key here. Then set environment variable:
export KAY_API_KEY="<YOUR_API_KEY>"
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-multi-index-fusion
If you want to add this to an existing project, you can just run:
langchain app add rag-multi-index-fusion
And add the following code to your server.py file:
from rag_multi_index_fusion import chain as rag_multi_index_fusion_chain
add_routes(app, rag_multi_index_fusion_chain, path="/rag-multi-index-fusion")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly with langchain serve.
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/rag-multi-index-fusion/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-multi-index-fusion")
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/templates/csv-agent/ | ## csv-agent
This template uses a [csv agent](https://python.langchain.com/docs/integrations/toolkits/csv) with tools (Python REPL) and memory (vectorstore) for interaction (question-answering) with text data.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
To set up the environment, the `ingest.py` script should be run to handle the ingestion into a vectorstore.
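For example, from within the package directory, that typically looks like:

```
python ingest.py
```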
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package csv-agent
```
If you want to add this to an existing project, you can just run:
```
langchain app add csv-agent
```
And add the following code to your `server.py` file:
```
from csv_agent.agent import agent_executor as csv_agent_chain
add_routes(app, csv_agent_chain, path="/csv-agent")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly with `langchain serve`.
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/csv-agent/playground](http://127.0.0.1:8000/csv-agent/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/csv-agent")
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:46.946Z",
"loadedUrl": "https://python.langchain.com/docs/templates/csv-agent/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/csv-agent/",
"description": "This template uses a csv agent with tools (Python REPL) and memory (vectorstore) for interaction (question-answering) with text data.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4196",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"csv-agent\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:46 GMT",
"etag": "W/\"1f0f45bcfc7ba72e98a7f7010be3c45a\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::zmgp6-1713753946891-f1b8e5cd8dfd"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/csv-agent/",
"property": "og:url"
},
{
"content": "csv-agent | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template uses a csv agent with tools (Python REPL) and memory (vectorstore) for interaction (question-answering) with text data.",
"property": "og:description"
}
],
"title": "csv-agent | 🦜️🔗 LangChain"
} | csv-agent
This template uses a csv agent with tools (Python REPL) and memory (vectorstore) for interaction (question-answering) with text data.
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
To set up the environment, the ingest.py script should be run to handle the ingestion into a vectorstore.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package csv-agent
If you want to add this to an existing project, you can just run:
langchain app add csv-agent
And add the following code to your server.py file:
from csv_agent.agent import agent_executor as csv_agent_chain
add_routes(app, csv_agent_chain, path="/csv-agent")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly with langchain serve.
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/csv-agent/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/csv-agent")
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/templates/openai-functions-agent-gmail/ | Ever struggled to reach inbox zero?
Using this template, you can create and customize your very own AI assistant to manage your Gmail account. Using the default Gmail tools, it can read, search through, and draft emails to respond on your behalf. It also has access to a Tavily search engine so it can search for relevant information about any topics or people in the email thread before writing, ensuring the drafts include all the relevant information needed to sound well-informed.
## The details[](#the-details "Direct link to The details")
This assistant uses OpenAI's [function calling](https://python.langchain.com/docs/modules/chains/how_to/openai_functions) support to reliably select and invoke the tools you've provided
This template also imports directly from [langchain-core](https://pypi.org/project/langchain-core/) and [`langchain-community`](https://pypi.org/project/langchain-community/) where appropriate. We have restructured LangChain to let you select the specific integrations needed for your use case. While you can still import from `langchain` (we are making this transition backwards-compatible), we have separated the homes of most of the classes to reflect ownership and to make your dependency lists lighter. Most of the integrations you need can be found in the `langchain-community` package, and if you are just using the core expression language API's, you can even build solely based on `langchain-core`.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
The following environment variables need to be set:
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Set the `TAVILY_API_KEY` environment variable to access Tavily search.
Create a [`credentials.json`](https://developers.google.com/gmail/api/quickstart/python#authorize_credentials_for_a_desktop_application) file containing your OAuth client ID from Gmail. To customize authentication, see the [Customize Auth](#customize-auth) section below.
_**Note:** The first time you run this app, it will force you to go through a user authentication flow._
(Optional): Set `GMAIL_AGENT_ENABLE_SEND` to `true` (or modify the `agent.py` file in this template) to give it access to the "Send" tool. This will give your assistant permissions to send emails on your behalf without your explicit review, which is not recommended.
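For example (placeholder values; the send flag is optional and, as noted above, not recommended without review):

```
export OPENAI_API_KEY=<your-openai-api-key>
export TAVILY_API_KEY=<your-tavily-api-key>
export GMAIL_AGENT_ENABLE_SEND=true
```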
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package openai-functions-agent-gmail
```
If you want to add this to an existing project, you can just run:
```
langchain app add openai-functions-agent-gmail
```
And add the following code to your `server.py` file:
```
from openai_functions_agent import agent_executor as openai_functions_agent_chain
add_routes(app, openai_functions_agent_chain, path="/openai-functions-agent-gmail")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly with `langchain serve`.
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/openai-functions-agent-gmail/playground](http://127.0.0.1:8000/openai-functions-agent-gmail/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/openai-functions-agent-gmail")
```
## Customize Auth[](#customize-auth "Direct link to Customize Auth")
```
from langchain_community.agent_toolkits import GmailToolkit  # toolkit import (path assumed; adjust to your version)
from langchain_community.tools.gmail.utils import build_resource_service, get_gmail_credentials

# Can review scopes here https://developers.google.com/gmail/api/auth/scopes
# For instance, readonly scope is 'https://www.googleapis.com/auth/gmail.readonly'
credentials = get_gmail_credentials(
    token_file="token.json",
    scopes=["https://mail.google.com/"],
    client_secrets_file="credentials.json",
)
api_resource = build_resource_service(credentials=credentials)
toolkit = GmailToolkit(api_resource=api_resource)
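
# A possible next step (sketch): hand the scoped tools to the agent.
# GmailToolkit.get_tools() is the standard toolkit accessor; the template wires this up for you.
tools = toolkit.get_tools()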
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:47.012Z",
"loadedUrl": "https://python.langchain.com/docs/templates/openai-functions-agent-gmail/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/openai-functions-agent-gmail/",
"description": "Ever struggled to reach inbox zero?",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"openai-functions-agent-gmail\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:46 GMT",
"etag": "W/\"699f2079604533deabad8bf79f362cf3\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::wf55v-1713753946894-4be12c028077"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/openai-functions-agent-gmail/",
"property": "og:url"
},
{
"content": "OpenAI Functions Agent - Gmail | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Ever struggled to reach inbox zero?",
"property": "og:description"
}
],
"title": "OpenAI Functions Agent - Gmail | 🦜️🔗 LangChain"
} | Ever struggled to reach inbox zero?
Using this template, you can create and customize your very own AI assistant to manage your Gmail account. Using the default Gmail tools, it can read, search through, and draft emails to respond on your behalf. It also has access to a Tavily search engine so it can search for relevant information about any topics or people in the email thread before writing, ensuring the drafts include all the relevant information needed to sound well-informed.
The details
This assistant uses OpenAI's function calling support to reliably select and invoke the tools you've provided
This template also imports directly from langchain-core and langchain-community where appropriate. We have restructured LangChain to let you select the specific integrations needed for your use case. While you can still import from langchain (we are making this transition backwards-compatible), we have separated the homes of most of the classes to reflect ownership and to make your dependency lists lighter. Most of the integrations you need can be found in the langchain-community package, and if you are just using the core expression language API's, you can even build solely based on langchain-core.
Environment Setup
The following environment variables need to be set:
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Set the TAVILY_API_KEY environment variable to access Tavily search.
Create a credentials.json file containing your OAuth client ID from Gmail. To customize authentication, see the Customize Auth section below.
Note: The first time you run this app, it will force you to go through a user authentication flow.
(Optional): Set GMAIL_AGENT_ENABLE_SEND to true (or modify the agent.py file in this template) to give it access to the "Send" tool. This will give your assistant permissions to send emails on your behalf without your explicit review, which is not recommended.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package openai-functions-agent-gmail
If you want to add this to an existing project, you can just run:
langchain app add openai-functions-agent-gmail
And add the following code to your server.py file:
from openai_functions_agent import agent_executor as openai_functions_agent_chain
add_routes(app, openai_functions_agent_chain, path="/openai-functions-agent-gmail")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly with langchain serve.
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/openai-functions-agent-gmail/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/openai-functions-agent-gmail")
Customize Auth
from langchain_community.agent_toolkits import GmailToolkit  # toolkit import (path assumed; adjust to your version)
from langchain_community.tools.gmail.utils import build_resource_service, get_gmail_credentials
# Can review scopes here https://developers.google.com/gmail/api/auth/scopes
# For instance, readonly scope is 'https://www.googleapis.com/auth/gmail.readonly'
credentials = get_gmail_credentials(
token_file="token.json",
scopes=["https://mail.google.com/"],
client_secrets_file="credentials.json",
)
api_resource = build_resource_service(credentials=credentials)
toolkit = GmailToolkit(api_resource=api_resource) |
https://python.langchain.com/docs/templates/rag-multi-index-router/ | A QA application that routes between different domain-specific retrievers given a user question.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
This application queries PubMed, ArXiv, Wikipedia, and [Kay AI](https://www.kay.ai/) (for SEC filings).
You will need to create a free Kay AI account and [get your API key here](https://www.kay.ai/). Then set environment variable:
```
export KAY_API_KEY="<YOUR_API_KEY>"
```
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-multi-index-router
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-multi-index-router
```
And add the following code to your `server.py` file:
```
from rag_multi_index_router import chain as rag_multi_index_router_chain
add_routes(app, rag_multi_index_router_chain, path="/rag-multi-index-router")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly with `langchain serve`.
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/rag-multi-index-router/playground](http://127.0.0.1:8000/rag-multi-index-router/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-multi-index-router")
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:47.368Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-multi-index-router/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-multi-index-router/",
"description": "A QA application that routes between different domain-specific retrievers given a user question.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3747",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-multi-index-router\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:47 GMT",
"etag": "W/\"17ebae529cd1cf4f9f85cc810e4b9ff6\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::zvcms-1713753947030-36ca3f8dc8a9"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-multi-index-router/",
"property": "og:url"
},
{
"content": "RAG with Multiple Indexes (Routing) | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "A QA application that routes between different domain-specific retrievers given a user question.",
"property": "og:description"
}
],
"title": "RAG with Multiple Indexes (Routing) | 🦜️🔗 LangChain"
} | A QA application that routes between different domain-specific retrievers given a user question.
Environment Setup
This application queries PubMed, ArXiv, Wikipedia, and Kay AI (for SEC filings).
You will need to create a free Kay AI account and get your API key here. Then set environment variable:
export KAY_API_KEY="<YOUR_API_KEY>"
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-multi-index-router
If you want to add this to an existing project, you can just run:
langchain app add rag-multi-index-router
And add the following code to your server.py file:
from rag_multi_index_router import chain as rag_multi_index_router_chain
add_routes(app, rag_multi_index_router_chain, path="/rag-multi-index-router")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly with langchain serve.
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/rag-multi-index-router/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-multi-index-router")
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/templates/rag-multi-modal-local/ | ## rag-multi-modal-local
Visual search is a familiar application to many with iPhones or Android devices. It allows users to search photos using natural language.
With the release of open-source multi-modal LLMs, it's possible to build this kind of application for your own private photo collection.
This template demonstrates how to perform private visual search and question-answering over a collection of your photos.
It uses OpenCLIP embeddings to embed all of the photos and stores them in Chroma.
Given a question, relevant photos are retrieved and passed to an open source multi-modal LLM of your choice for answer synthesis.
## Input[](#input "Direct link to Input")
Supply a set of photos in the `/docs` directory.
By default, this template has a toy collection of 3 food pictures.
Example questions to ask can be:
```
What kind of soft serve did I have?
```
In practice, a larger corpus of images can be tested.
To create an index of the images, run:
```
poetry install
python ingest.py
```
## Storage[](#storage "Direct link to Storage")
This template will use [OpenCLIP](https://github.com/mlfoundations/open_clip) multi-modal embeddings to embed the images.
You can select different embedding model options (see results [here](https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_results.csv)).
The first time you run the app, it will automatically download the multimodal embedding model.
By default, LangChain will use an embedding model with moderate performance but lower memory requirements, `ViT-H-14`.
You can choose alternative `OpenCLIPEmbeddings` models in `rag_chroma_multi_modal/ingest.py`:
```
vectorstore_mmembd = Chroma(
    collection_name="multi-modal-rag",
    persist_directory=str(re_vectorstore_path),
    embedding_function=OpenCLIPEmbeddings(
        model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k"
    ),
)
```
## LLM[](#llm "Direct link to LLM")
This template will use [Ollama](https://python.langchain.com/docs/integrations/chat/ollama#multi-modal).
Download the latest version of Ollama: [https://ollama.ai/](https://ollama.ai/)
Pull an open source multi-modal LLM, e.g. [https://ollama.ai/library/bakllava](https://ollama.ai/library/bakllava)
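For example, assuming Ollama is installed and on your PATH:

```
ollama pull bakllava
```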
The app is by default configured for `bakllava`. But you can change this in `chain.py` and `ingest.py` for different downloaded models.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-chroma-multi-modal
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-chroma-multi-modal
```
And add the following code to your `server.py` file:
```
from rag_chroma_multi_modal import chain as rag_chroma_multi_modal_chain
add_routes(app, rag_chroma_multi_modal_chain, path="/rag-chroma-multi-modal")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly with `langchain serve`.
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/rag-chroma-multi-modal/playground](http://127.0.0.1:8000/rag-chroma-multi-modal/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma-multi-modal")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:47.205Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-multi-modal-local/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-multi-modal-local/",
"description": "Visual search is a famililar application to many with iPhones or Android devices. It allows user to search photos using natural language.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3747",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-multi-modal-local\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:46 GMT",
"etag": "W/\"a4dff20c542a2ba2905e2ba7bebb3e15\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::wf55v-1713753946970-5f81b04f305d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-multi-modal-local/",
"property": "og:url"
},
{
"content": "rag-multi-modal-local | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Visual search is a famililar application to many with iPhones or Android devices. It allows user to search photos using natural language.",
"property": "og:description"
}
],
"title": "rag-multi-modal-local | 🦜️🔗 LangChain"
} | rag-multi-modal-local
Visual search is a familiar application to many with iPhones or Android devices. It allows users to search photos using natural language.
With the release of open-source multi-modal LLMs, it's possible to build this kind of application for your own private photo collection.
This template demonstrates how to perform private visual search and question-answering over a collection of your photos.
It uses OpenCLIP embeddings to embed all of the photos and stores them in Chroma.
Given a question, relevant photos are retrieved and passed to an open source multi-modal LLM of your choice for answer synthesis.
Input
Supply a set of photos in the /docs directory.
By default, this template has a toy collection of 3 food pictures.
Example questions to ask can be:
What kind of soft serve did I have?
In practice, a larger corpus of images can be tested.
To create an index of the images, run:
poetry install
python ingest.py
Storage
This template will use OpenCLIP multi-modal embeddings to embed the images.
You can select different embedding model options (see results here).
The first time you run the app, it will automatically download the multimodal embedding model.
By default, LangChain will use an embedding model with moderate performance but lower memory requirements, ViT-H-14.
You can choose alternative OpenCLIPEmbeddings models in rag_chroma_multi_modal/ingest.py:
vectorstore_mmembd = Chroma(
collection_name="multi-modal-rag",
persist_directory=str(re_vectorstore_path),
embedding_function=OpenCLIPEmbeddings(
model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k"
),
)
LLM
This template will use Ollama.
Download the latest version of Ollama: https://ollama.ai/
Pull an open source multi-modal LLM, e.g. https://ollama.ai/library/bakllava
The app is by default configured for bakllava. But you can change this in chain.py and ingest.py for different downloaded models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-chroma-multi-modal
If you want to add this to an existing project, you can just run:
langchain app add rag-chroma-multi-modal
And add the following code to your server.py file:
from rag_chroma_multi_modal import chain as rag_chroma_multi_modal_chain
add_routes(app, rag_chroma_multi_modal_chain, path="/rag-chroma-multi-modal")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly with langchain serve.
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/rag-chroma-multi-modal/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma-multi-modal") |
https://python.langchain.com/docs/templates/elastic-query-generator/ | ## elastic-query-generator
This template allows interacting with Elasticsearch analytics databases in natural language using LLMs.
It builds search queries via the Elasticsearch DSL API (filters and aggregations).
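For illustration only (hypothetical index and field names, not the template's actual output), a generated request might pair a filter with an aggregation like this:

```
{
  "query": {
    "bool": {
      "filter": [{ "term": { "state": "NY" } }]
    }
  },
  "aggs": {
    "average_age": { "avg": { "field": "age" } }
  }
}
```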
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
### Installing Elasticsearch[](#installing-elasticsearch "Direct link to Installing Elasticsearch")
There are a number of ways to run Elasticsearch. However, one recommended way is through Elastic Cloud.
Create a free trial account on [Elastic Cloud](https://cloud.elastic.co/registration?utm_source=langchain&utm_content=langserve).
With a deployment, update the connection string.
Password and connection (elasticsearch url) can be found on the deployment console.
Note that the Elasticsearch client must have permissions for index listing, mapping description, and search queries.
### Populating with data[](#populating-with-data "Direct link to Populating with data")
If you want to populate the DB with some example info, you can run `python ingest.py`.
This will create a `customers` index. In this package, we specify indexes to generate queries against, and we specify `["customers"]`. This is specific to setting up your Elastic index.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package elastic-query-generator
```
If you want to add this to an existing project, you can just run:
```
langchain app add elastic-query-generator
```
And add the following code to your `server.py` file:
```
from elastic_query_generator.chain import chain as elastic_query_generator_chain
add_routes(app, elastic_query_generator_chain, path="/elastic-query-generator")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly with `langchain serve`.
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/elastic-query-generator/playground](http://127.0.0.1:8000/elastic-query-generator/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/elastic-query-generator")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:47.288Z",
"loadedUrl": "https://python.langchain.com/docs/templates/elastic-query-generator/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/elastic-query-generator/",
"description": "This template allows interacting with Elasticsearch analytics databases in natural language using LLMs.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"elastic-query-generator\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:47 GMT",
"etag": "W/\"8f9518445f3504851ad87a047dbfdc51\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::fn5d5-1713753946966-c13b82161fcd"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/elastic-query-generator/",
"property": "og:url"
},
{
"content": "elastic-query-generator | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template allows interacting with Elasticsearch analytics databases in natural language using LLMs.",
"property": "og:description"
}
],
"title": "elastic-query-generator | 🦜️🔗 LangChain"
} | elastic-query-generator
This template allows interacting with Elasticsearch analytics databases in natural language using LLMs.
It builds search queries via the Elasticsearch DSL API (filters and aggregations).
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Installing Elasticsearch
There are a number of ways to run Elasticsearch. However, one recommended way is through Elastic Cloud.
Create a free trial account on Elastic Cloud.
With a deployment, update the connection string.
Password and connection (elasticsearch url) can be found on the deployment console.
Note that the Elasticsearch client must have permissions for index listing, mapping description, and search queries.
Populating with data
If you want to populate the DB with some example info, you can run python ingest.py.
This will create a customers index. In this package, we specify indexes to generate queries against, and we specify ["customers"]. This is specific to setting up your Elastic index.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package elastic-query-generator
If you want to add this to an existing project, you can just run:
langchain app add elastic-query-generator
And add the following code to your server.py file:
from elastic_query_generator.chain import chain as elastic_query_generator_chain
add_routes(app, elastic_query_generator_chain, path="/elastic-query-generator")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly with langchain serve.
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/elastic-query-generator/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/elastic-query-generator") |
https://python.langchain.com/docs/templates/extraction-anthropic-functions/ | ## extraction-anthropic-functions
This template enables [Anthropic function calling](https://python.langchain.com/docs/integrations/chat/anthropic_functions).
This can be used for various tasks, such as extraction or tagging.
The function output schema can be set in `chain.py`.
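For example, a schema for pulling paper metadata might look roughly like this (a hedged sketch using the generic extraction-schema style; the template defines its own schema in `chain.py`):

```
schema = {
    "properties": {
        "title": {"type": "string"},
        "author": {"type": "string"},
    },
    "required": ["title", "author"],
}
```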
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `ANTHROPIC_API_KEY` environment variable to access the Anthropic models.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package extraction-anthropic-functions
```
If you want to add this to an existing project, you can just run:
```
langchain app add extraction-anthropic-functions
```
And add the following code to your `server.py` file:
```
from extraction_anthropic_functions import chain as extraction_anthropic_functions_chain
add_routes(app, extraction_anthropic_functions_chain, path="/extraction-anthropic-functions")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly with `langchain serve`.
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/extraction-anthropic-functions/playground](http://127.0.0.1:8000/extraction-anthropic-functions/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/extraction-anthropic-functions")
```
By default, the package will extract the title and author of papers from the information you specify in `chain.py`. This template will use `Claude2` by default.
* * * | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:47.528Z",
"loadedUrl": "https://python.langchain.com/docs/templates/extraction-anthropic-functions/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/extraction-anthropic-functions/",
"description": "This template enables Anthropic function calling.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4938",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"extraction-anthropic-functions\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:47 GMT",
"etag": "W/\"29d6270cee73c38345e9e286ab5be999\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::qb88p-1713753947132-df5be263fbef"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/extraction-anthropic-functions/",
"property": "og:url"
},
{
"content": "extraction-anthropic-functions | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template enables Anthropic function calling.",
"property": "og:description"
}
],
"title": "extraction-anthropic-functions | 🦜️🔗 LangChain"
} | extraction-anthropic-functions
This template enables Anthropic function calling.
This can be used for various tasks, such as extraction or tagging.
The function output schema can be set in chain.py.
Environment Setup
Set the ANTHROPIC_API_KEY environment variable to access the Anthropic models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package extraction-anthropic-functions
If you want to add this to an existing project, you can just run:
langchain app add extraction-anthropic-functions
And add the following code to your server.py file:
from extraction_anthropic_functions import chain as extraction_anthropic_functions_chain
add_routes(app, extraction_anthropic_functions_chain, path="/extraction-anthropic-functions")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly with langchain serve.
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/extraction-anthropic-functions/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/extraction-anthropic-functions")
By default, the package will extract the title and author of papers from the information you specify in chain.py. This template will use Claude2 by default. |
https://python.langchain.com/docs/templates/rag-multi-modal-mv-local/ | ## rag-multi-modal-mv-local
Visual search is a familiar application to many with iPhones or Android devices. It allows users to search photos using natural language.
With the release of open-source multi-modal LLMs, it's possible to build this kind of application for your own private photo collection.
This template demonstrates how to perform private visual search and question-answering over a collection of your photos.
It uses an open source multi-modal LLM of your choice to create an image summary for each photo, embeds the summaries, and stores them in Chroma.
Given a question, relevant photos are retrieved and passed to the multi-modal LLM for answer synthesis.
## Input[](#input "Direct link to Input")
Supply a set of photos in the `/docs` directory.
By default, this template has a toy collection of 3 food pictures.
The app will look up and summarize photos based upon provided keywords or questions:
```
What kind of ice cream did I have?
```
In practice, a larger corpus of images can be tested.
To create an index of the images, run:
```
poetry install
python ingest.py
```
## Storage[](#storage "Direct link to Storage")
Here is the process the template will use to create an index of the slides (see [blog](https://blog.langchain.dev/multi-modal-rag-template/)):
* Given a set of images
* It uses a local multi-modal LLM ([bakllava](https://ollama.ai/library/bakllava)) to summarize each image
* Embeds the image summaries with a link to the original images
* Given a user question, it will retrieve relevant image(s) based on similarity between the image summary and user input (using Ollama embeddings)
* It will pass those images to bakllava for answer synthesis
By default, this will use [LocalFileStore](https://python.langchain.com/docs/integrations/stores/file_system) to store images and Chroma to store summaries.
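As a rough sketch of this indexing-and-retrieval idea (simplified from the template's actual multi-vector setup; the summaries, file paths, and embedding model below are illustrative assumptions):
```
import base64

from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document

# Summaries produced by the multi-modal LLM (e.g., bakllava) for each photo.
image_summaries = ["A bowl of pistachio ice cream.", "A plate of fresh fruit."]
image_paths = ["docs/ice_cream.jpg", "docs/fruit.jpg"]

# Embed each summary and keep a pointer back to the original image.
summary_docs = [
    Document(page_content=summary, metadata={"image_path": path})
    for summary, path in zip(image_summaries, image_paths)
]
vectorstore = Chroma.from_documents(summary_docs, embedding=OllamaEmbeddings())

# At question time: retrieve the best-matching summary, then load its image
# (base64-encoded) so the multi-modal LLM can answer over it.
question = "What kind of ice cream did I have?"
best_match = vectorstore.similarity_search(question, k=1)[0]
with open(best_match.metadata["image_path"], "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")
```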
## LLM and Embedding Models[](#llm-and-embedding-models "Direct link to LLM and Embedding Models")
We will use [Ollama](https://python.langchain.com/docs/integrations/chat/ollama#multi-modal) for generating image summaries, embeddings, and the final image QA.
Download the latest version of Ollama: [https://ollama.ai/](https://ollama.ai/)
Pull an open source multi-modal LLM: e.g., [https://ollama.ai/library/bakllava](https://ollama.ai/library/bakllava)
Pull an open source embedding model: e.g., [https://ollama.ai/library/llama2:7b](https://ollama.ai/library/llama2:7b)
```
ollama pull bakllava
ollama pull llama2:7b
```
The app is by default configured for `bakllava`. But you can change this in `chain.py` and `ingest.py` for different downloaded models.
The app will retrieve images based on similarity between the text input and the image summary, and pass the images to `bakllava`.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-multi-modal-mv-local
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-multi-modal-mv-local
```
And add the following code to your `server.py` file:
```
from rag_multi_modal_mv_local import chain as rag_multi_modal_mv_local_chain
add_routes(app, rag_multi_modal_mv_local_chain, path="/rag-multi-modal-mv-local")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-multi-modal-mv-local/playground](http://127.0.0.1:8000/rag-multi-modal-mv-local/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-multi-modal-mv-local")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:47.431Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-multi-modal-mv-local/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-multi-modal-mv-local/",
"description": "Visual search is a famililar application to many with iPhones or Android devices. It allows user to search photos using natural language.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4923",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-multi-modal-mv-local\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:47 GMT",
"etag": "W/\"17e18738e14d547606ecc3ad34b8ed59\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::4vch7-1713753947126-18d3b0d92e42"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-multi-modal-mv-local/",
"property": "og:url"
},
{
"content": "rag-multi-modal-mv-local | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Visual search is a famililar application to many with iPhones or Android devices. It allows user to search photos using natural language.",
"property": "og:description"
}
],
"title": "rag-multi-modal-mv-local | 🦜️🔗 LangChain"
} | rag-multi-modal-mv-local
Visual search is a familiar application to many with iPhones or Android devices. It allows users to search photos using natural language.
With the release of open source, multi-modal LLMs, it's possible to build this kind of application yourself for your own private photo collection.
This template demonstrates how to perform private visual search and question-answering over a collection of your photos.
It uses an open source multi-modal LLM of your choice to create image summaries for each photo, embeds the summaries, and stores them in Chroma.
Given a question, relevant photos are retrieved and passed to the multi-modal LLM for answer synthesis.
Input
Supply a set of photos in the /docs directory.
By default, this template has a toy collection of 3 food pictures.
The app will look up and summarize photos based upon provided keywords or questions:
What kind of ice cream did I have?
In practice, a larger corpus of images can be tested.
To create an index of the images, run:
poetry install
python ingest.py
Storage
Here is the process the template will use to create an index of the slides (see blog):
Given a set of images
It uses a local multi-modal LLM (bakllava) to summarize each image
Embeds the image summaries with a link to the original images
Given a user question, it will retrieve relevant image(s) based on similarity between the image summary and user input (using Ollama embeddings)
It will pass those images to bakllava for answer synthesis
By default, this will use LocalFileStore to store images and Chroma to store summaries.
LLM and Embedding Models
We will use Ollama for generating image summaries, embeddings, and the final image QA.
Download the latest version of Ollama: https://ollama.ai/
Pull an open source multi-modal LLM: e.g., https://ollama.ai/library/bakllava
Pull an open source embedding model: e.g., https://ollama.ai/library/llama2:7b
ollama pull bakllava
ollama pull llama2:7b
The app is by default configured for bakllava. But you can change this in chain.py and ingest.py for different downloaded models.
The app will retrieve images based on similarity between the text input and the image summary, and pass the images to bakllava.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-multi-modal-mv-local
If you want to add this to an existing project, you can just run:
langchain app add rag-multi-modal-mv-local
And add the following code to your server.py file:
from rag_multi_modal_mv_local import chain as rag_multi_modal_mv_local_chain
add_routes(app, rag_multi_modal_mv_local_chain, path="/rag-multi-modal-mv-local")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-multi-modal-mv-local/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-multi-modal-mv-local") |
https://python.langchain.com/docs/templates/openai-functions-agent/ | ## openai-functions-agent
This template creates an agent that uses OpenAI function calling to communicate its decisions on what actions to take.
This example creates an agent that can optionally look up information on the internet using Tavily's search engine.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
The following environment variables need to be set:
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Set the `TAVILY_API_KEY` environment variable to access Tavily.
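For orientation, a minimal agent of this kind can be sketched as follows (not the template's exact chain; the hub prompt handle and model settings are assumptions):
```
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import ChatOpenAI

# A single search tool the agent may optionally call.
tools = [TavilySearchResults(max_results=3)]

# A standard OpenAI-functions agent prompt pulled from the LangChain hub.
prompt = hub.pull("hwchase17/openai-functions-agent")

agent = create_openai_functions_agent(ChatOpenAI(temperature=0), tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "Who won the 2023 Formula 1 world championship?"})
```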
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package openai-functions-agent
```
If you want to add this to an existing project, you can just run:
```
langchain app add openai-functions-agent
```
And add the following code to your `server.py` file:
```
from openai_functions_agent import agent_executor as openai_functions_agent_chain
add_routes(app, openai_functions_agent_chain, path="/openai-functions-agent")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/openai-functions-agent/playground](http://127.0.0.1:8000/openai-functions-agent/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/openai-functions-agent")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:48.394Z",
"loadedUrl": "https://python.langchain.com/docs/templates/openai-functions-agent/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/openai-functions-agent/",
"description": "This template creates an agent that uses OpenAI function calling to communicate its decisions on what actions to take.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4932",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"openai-functions-agent\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:48 GMT",
"etag": "W/\"8fa9eb7a206273fbf632e659116f36bc\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::jsw6h-1713753948276-8cae3651c09f"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/openai-functions-agent/",
"property": "og:url"
},
{
"content": "openai-functions-agent | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template creates an agent that uses OpenAI function calling to communicate its decisions on what actions to take.",
"property": "og:description"
}
],
"title": "openai-functions-agent | 🦜️🔗 LangChain"
} | openai-functions-agent
This template creates an agent that uses OpenAI function calling to communicate its decisions on what actions to take.
This example creates an agent that can optionally look up information on the internet using Tavily's search engine.
Environment Setup
The following environment variables need to be set:
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Set the TAVILY_API_KEY environment variable to access Tavily.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package openai-functions-agent
If you want to add this to an existing project, you can just run:
langchain app add openai-functions-agent
And add the following code to your server.py file:
from openai_functions_agent import agent_executor as openai_functions_agent_chain
add_routes(app, openai_functions_agent_chain, path="/openai-functions-agent")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/openai-functions-agent/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/openai-functions-agent") |
https://python.langchain.com/docs/templates/rag-ollama-multi-query/ | ## rag-ollama-multi-query
This template performs RAG using Ollama and OpenAI with a multi-query retriever.
The multi-query retriever is an example of query transformation, generating multiple queries from different perspectives based on the user's input query.
For each query, it retrieves a set of relevant documents and takes the unique union across all queries for answer synthesis.
We use a private, local LLM for the narrow task of query generation to avoid excessive calls to a larger LLM API.
See an example trace for Ollama LLM performing the query expansion [here](https://smith.langchain.com/public/8017d04d-2045-4089-b47f-f2d66393a999/r).
But we use OpenAI for the more challenging task of answer synthesis (full trace example [here](https://smith.langchain.com/public/ec75793b-645b-498d-b855-e8d85e1f6738/r)).
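A minimal sketch of the multi-query pattern (not the template's exact chain; the toy documents and model names are placeholders):
```
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document

# Any existing vector store works here; two toy documents keep the sketch self-contained.
vectorstore = Chroma.from_documents(
    [
        Document(page_content="Task decomposition can be done by LLM prompting."),
        Document(page_content="Tree of Thoughts explores multiple reasoning paths."),
    ],
    embedding=OllamaEmbeddings(),
)

# The local Ollama model only generates query variants; answer synthesis can use OpenAI.
retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(),
    llm=ChatOllama(model="zephyr"),
)
docs = retriever.get_relevant_documents("What are the approaches to task decomposition?")
```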
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
To set up the environment, you need to download Ollama.
Follow the instructions [here](https://python.langchain.com/docs/integrations/chat/ollama).
You can choose the desired LLM with Ollama.
This template uses `zephyr`, which can be accessed using `ollama pull zephyr`.
There are many other options available [here](https://ollama.ai/library).
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first install the LangChain CLI:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this package, do:
```
langchain app new my-app --package rag-ollama-multi-query
```
To add this package to an existing project, run:
```
langchain app add rag-ollama-multi-query
```
And add the following code to your `server.py` file:
```
from rag_ollama_multi_query import chain as rag_ollama_multi_query_chain
add_routes(app, rag_ollama_multi_query_chain, path="/rag-ollama-multi-query")
```
(Optional) Now, let's configure LangSmith. LangSmith will help us trace, monitor, and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) You can access the playground at [http://127.0.0.1:8000/rag-ollama-multi-query/playground](http://127.0.0.1:8000/rag-ollama-multi-query/playground)
To access the template from code, use:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-ollama-multi-query")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:48.512Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-ollama-multi-query/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-ollama-multi-query/",
"description": "This template performs RAG using Ollama and OpenAI with a multi-query retriever.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3748",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-ollama-multi-query\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:48 GMT",
"etag": "W/\"f71f2fbaa325455285e2ed972dd89294\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::64q9c-1713753948361-257051650b43"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-ollama-multi-query/",
"property": "og:url"
},
{
"content": "rag-ollama-multi-query | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG using Ollama and OpenAI with a multi-query retriever.",
"property": "og:description"
}
],
"title": "rag-ollama-multi-query | 🦜️🔗 LangChain"
} | rag-ollama-multi-query
This template performs RAG using Ollama and OpenAI with a multi-query retriever.
The multi-query retriever is an example of query transformation, generating multiple queries from different perspectives based on the user's input query.
For each query, it retrieves a set of relevant documents and takes the unique union across all queries for answer synthesis.
We use a private, local LLM for the narrow task of query generation to avoid excessive calls to a larger LLM API.
See an example trace for Ollama LLM performing the query expansion here.
But we use OpenAI for the more challenging task of answer synthesis (full trace example here).
Environment Setup
To set up the environment, you need to download Ollama.
Follow the instructions here.
You can choose the desired LLM with Ollama.
This template uses zephyr, which can be accessed using ollama pull zephyr.
There are many other options available here.
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Usage
To use this package, you should first install the LangChain CLI:
pip install -U langchain-cli
To create a new LangChain project and install this package, do:
langchain app new my-app --package rag-ollama-multi-query
To add this package to an existing project, run:
langchain app add rag-ollama-multi-query
And add the following code to your server.py file:
from rag_ollama_multi_query import chain as rag_ollama_multi_query_chain
add_routes(app, rag_ollama_multi_query_chain, path="/rag-ollama-multi-query")
(Optional) Now, let's configure LangSmith. LangSmith will help us trace, monitor, and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
You can see all templates at http://127.0.0.1:8000/docs You can access the playground at http://127.0.0.1:8000/rag-ollama-multi-query/playground
To access the template from code, use:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-ollama-multi-query") |
https://python.langchain.com/docs/templates/rag-opensearch/ | ## rag-opensearch
This template performs RAG using [OpenSearch](https://python.langchain.com/docs/integrations/vectorstores/opensearch).
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the following environment variables.
* `OPENAI_API_KEY` - To access OpenAI Embeddings and Models.
And optionally set the OpenSearch ones if not using defaults:
* `OPENSEARCH_URL` - URL of the hosted OpenSearch Instance
* `OPENSEARCH_USERNAME` - User name for the OpenSearch instance
* `OPENSEARCH_PASSWORD` - Password for the OpenSearch instance
* `OPENSEARCH_INDEX_NAME` - Name of the index
To run the default OpenSearch instance in Docker, you can use the command
```
docker run -p 9200:9200 -p 9600:9600 -e "discovery.type=single-node" --name opensearch-node -d opensearchproject/opensearch:latest
```
Note: To load a dummy index named `langchain-test` with dummy documents, run `python dummy_index_setup.py` in the package.
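Once an index exists, the retrieval side can be sketched roughly as follows (the URL, credentials, and index name mirror the defaults above and should be adjusted for your instance):
```
from langchain_community.vectorstores import OpenSearchVectorSearch
from langchain_openai import OpenAIEmbeddings

vectorstore = OpenSearchVectorSearch(
    opensearch_url="https://localhost:9200",
    index_name="langchain-test",
    embedding_function=OpenAIEmbeddings(),
    http_auth=("admin", "admin"),  # default credentials of the demo container
    use_ssl=True,
    verify_certs=False,
)
docs = vectorstore.similarity_search("What does the dummy data say about passwords?", k=4)
```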
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-opensearch
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-opensearch
```
And add the following code to your `server.py` file:
```
from rag_opensearch import chain as rag_opensearch_chain
add_routes(app, rag_opensearch_chain, path="/rag-opensearch")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-opensearch/playground](http://127.0.0.1:8000/rag-opensearch/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-opensearch")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:48.730Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-opensearch/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-opensearch/",
"description": "This Template performs RAG using OpenSearch.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-opensearch\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:48 GMT",
"etag": "W/\"726ba62848688a772ffaa7ef4d094015\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::grqfv-1713753948371-f881e60e12e9"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-opensearch/",
"property": "og:url"
},
{
"content": "rag-opensearch | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This Template performs RAG using OpenSearch.",
"property": "og:description"
}
],
"title": "rag-opensearch | 🦜️🔗 LangChain"
} | rag-opensearch
This template performs RAG using OpenSearch.
Environment Setup
Set the following environment variables.
OPENAI_API_KEY - To access OpenAI Embeddings and Models.
And optionally set the OpenSearch ones if not using defaults:
OPENSEARCH_URL - URL of the hosted OpenSearch Instance
OPENSEARCH_USERNAME - User name for the OpenSearch instance
OPENSEARCH_PASSWORD - Password for the OpenSearch instance
OPENSEARCH_INDEX_NAME - Name of the index
To run the default OpenSearch instance in Docker, you can use the command
docker run -p 9200:9200 -p 9600:9600 -e "discovery.type=single-node" --name opensearch-node -d opensearchproject/opensearch:latest
Note: To load a dummy index named langchain-test with dummy documents, run python dummy_index_setup.py in the package.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-opensearch
If you want to add this to an existing project, you can just run:
langchain app add rag-opensearch
And add the following code to your server.py file:
from rag_opensearch import chain as rag_opensearch_chain
add_routes(app, rag_opensearch_chain, path="/rag-opensearch")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-opensearch/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-opensearch") |
https://python.langchain.com/docs/templates/gemini-functions-agent/ | ## gemini-functions-agent
This template creates an agent that uses Google Gemini function calling to communicate its decisions on what actions to take.
This example creates an agent that can optionally look up information on the internet using Tavily's search engine.
[See an example LangSmith trace here](https://smith.langchain.com/public/0ebf1bd6-b048-4019-b4de-25efe8d3d18c/r)
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
The following environment variables need to be set:
Set the `TAVILY_API_KEY` environment variable to access Tavily
Set the `GOOGLE_API_KEY` environment variable to access the Google Gemini APIs.
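One way to build a similar Gemini-backed agent with recent LangChain releases looks roughly like this (a sketch, not necessarily what this template's chain does; the hub prompt handle and model name are assumptions):
```
from langchain import hub
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0)
tools = [TavilySearchResults(max_results=3)]

# A generic tool-calling agent prompt from the LangChain hub.
prompt = hub.pull("hwchase17/openai-tools-agent")

agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "What is the weather in San Francisco today?"})
```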
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package gemini-functions-agent
```
If you want to add this to an existing project, you can just run:
```
langchain app add gemini-functions-agent
```
And add the following code to your `server.py` file:
```
from gemini_functions_agent import agent_executor as gemini_functions_agent_chain
add_routes(app, gemini_functions_agent_chain, path="/gemini-functions-agent")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/gemini-functions-agent/playground](http://127.0.0.1:8000/gemini-functions-agent/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/gemini-functions-agent")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:48.834Z",
"loadedUrl": "https://python.langchain.com/docs/templates/gemini-functions-agent/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/gemini-functions-agent/",
"description": "This template creates an agent that uses Google Gemini function calling to communicate its decisions on what actions to take.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "8947",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"gemini-functions-agent\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:48 GMT",
"etag": "W/\"8e34348a4098501c1bc91a4e51c4de2c\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::l88wt-1713753948392-d04c945ddc5e"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/gemini-functions-agent/",
"property": "og:url"
},
{
"content": "gemini-functions-agent | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template creates an agent that uses Google Gemini function calling to communicate its decisions on what actions to take.",
"property": "og:description"
}
],
"title": "gemini-functions-agent | 🦜️🔗 LangChain"
} | gemini-functions-agent
This template creates an agent that uses Google Gemini function calling to communicate its decisions on what actions to take.
This example creates an agent that can optionally look up information on the internet using Tavily's search engine.
See an example LangSmith trace here
Environment Setup
The following environment variables need to be set:
Set the TAVILY_API_KEY environment variable to access Tavily
Set the GOOGLE_API_KEY environment variable to access the Google Gemini APIs.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package gemini-functions-agent
If you want to add this to an existing project, you can just run:
langchain app add gemini-functions-agent
And add the following code to your server.py file:
from gemini_functions_agent import agent_executor as gemini_functions_agent_chain
add_routes(app, gemini_functions_agent_chain, path="/gemini-functions-agent")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/gemini-functions-agent/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/gemini-functions-agent") |
https://python.langchain.com/docs/templates/openai-functions-tool-retrieval-agent/ | ## openai-functions-tool-retrieval-agent
The novel idea introduced in this template is using retrieval to select the set of tools to use to answer an agent query. This is useful when you have many tools to select from. You cannot put the descriptions of all the tools in the prompt (because of context length issues), so instead you dynamically select the N tools you do want to consider using at run time.
In this template we will create a somewhat contrived example. We will have one legitimate tool (search) and then 99 fake tools which are just nonsense. We will then add a step in the prompt template that takes the user input and retrieves the tools relevant to the query.
This template is based on [this Agent How-To](https://python.langchain.com/docs/modules/agents/how_to/custom_agent_with_tool_retrieval).
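The tool-retrieval step itself can be sketched as follows (simplified from the linked how-to; the fake tools and `k` value are illustrative):
```
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_core.tools import Tool
from langchain_openai import OpenAIEmbeddings

# One legitimate search tool plus 99 nonsense tools, as described above.
search_tool = Tool(
    name="search",
    func=lambda q: "search results for: " + q,
    description="useful for when you need to answer questions about current events",
)
fake_tools = [
    Tool(
        name=f"foo-{i}",
        func=lambda q: "foo",
        description=f"a silly function that you can use to get more information about the number {i}",
    )
    for i in range(99)
]
all_tools = [search_tool] + fake_tools

# Embed the tool descriptions so only the most relevant handful is surfaced per query.
docs = [
    Document(page_content=t.description, metadata={"index": i})
    for i, t in enumerate(all_tools)
]
vector_store = FAISS.from_documents(docs, OpenAIEmbeddings())
retriever = vector_store.as_retriever(search_kwargs={"k": 5})

def get_tools(query: str) -> list:
    """Return only the tools whose descriptions best match the query."""
    return [all_tools[d.metadata["index"]] for d in retriever.get_relevant_documents(query)]
```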
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
The following environment variables need to be set:
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Set the `TAVILY_API_KEY` environment variable to access Tavily.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package openai-functions-tool-retrieval-agent
```
If you want to add this to an existing project, you can just run:
```
langchain app add openai-functions-tool-retrieval-agent
```
And add the following code to your `server.py` file:
```
from openai_functions_tool_retrieval_agent import agent_executor as openai_functions_tool_retrieval_agent_chain
add_routes(app, openai_functions_tool_retrieval_agent_chain, path="/openai-functions-tool-retrieval-agent")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/openai-functions-tool-retrieval-agent/playground](http://127.0.0.1:8000/openai-functions-tool-retrieval-agent/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/openai-functions-tool-retrieval-agent")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:48.939Z",
"loadedUrl": "https://python.langchain.com/docs/templates/openai-functions-tool-retrieval-agent/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/openai-functions-tool-retrieval-agent/",
"description": "The novel idea introduced in this template is the idea of using retrieval to select the set of tools to use to answer an agent query. This is useful when you have many many tools to select from. You cannot put the description of all the tools in the prompt (because of context length issues) so instead you dynamically select the N tools you do want to consider using at run time.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3023",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"openai-functions-tool-retrieval-agent\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:48 GMT",
"etag": "W/\"e01a419e40125f1ada4ce6646716bc8c\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::m8br6-1713753948401-3c96440a9f4b"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/openai-functions-tool-retrieval-agent/",
"property": "og:url"
},
{
"content": "openai-functions-tool-retrieval-agent | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "The novel idea introduced in this template is the idea of using retrieval to select the set of tools to use to answer an agent query. This is useful when you have many many tools to select from. You cannot put the description of all the tools in the prompt (because of context length issues) so instead you dynamically select the N tools you do want to consider using at run time.",
"property": "og:description"
}
],
"title": "openai-functions-tool-retrieval-agent | 🦜️🔗 LangChain"
} | openai-functions-tool-retrieval-agent
The novel idea introduced in this template is using retrieval to select the set of tools to use to answer an agent query. This is useful when you have many tools to select from. You cannot put the descriptions of all the tools in the prompt (because of context length issues), so instead you dynamically select the N tools you do want to consider using at run time.
In this template we will create a somewhat contrived example. We will have one legitimate tool (search) and then 99 fake tools which are just nonsense. We will then add a step in the prompt template that takes the user input and retrieves the tools relevant to the query.
This template is based on this Agent How-To.
Environment Setup
The following environment variables need to be set:
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Set the TAVILY_API_KEY environment variable to access Tavily.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package openai-functions-tool-retrieval-agent
If you want to add this to an existing project, you can just run:
langchain app add openai-functions-tool-retrieval-agent
And add the following code to your server.py file:
from openai_functions_tool_retrieval_agent import agent_executor as openai_functions_tool_retrieval_agent_chain
add_routes(app, openai_functions_tool_retrieval_agent_chain, path="/openai-functions-tool-retrieval-agent")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server is running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/openai-functions-tool-retrieval-agent/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/openai-functions-tool-retrieval-agent") |
https://python.langchain.com/docs/templates/extraction-openai-functions/ | ## extraction-openai-functions
This template uses [OpenAI function calling](https://python.langchain.com/docs/modules/chains/how_to/openai_functions) for extraction of structured output from unstructured input text.
The extraction output schema can be set in `chain.py`.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
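The core extraction pattern can be sketched like this (a simplified stand-in for `chain.py`, using the paper title/author schema mentioned below as an example):
```
from langchain.chains import create_extraction_chain
from langchain_openai import ChatOpenAI

# Example schema: pull paper titles and authors out of free text.
schema = {
    "properties": {
        "title": {"type": "string"},
        "author": {"type": "string"},
    },
    "required": ["title", "author"],
}

llm = ChatOpenAI(temperature=0)
chain = create_extraction_chain(schema, llm)

chain.invoke(
    "Attention Is All You Need was written by Vaswani et al., "
    "and BERT was introduced by Devlin et al."
)
```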
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package extraction-openai-functions
```
If you want to add this to an existing project, you can just run:
```
langchain app add extraction-openai-functions
```
And add the following code to your `server.py` file:
```
from extraction_openai_functions import chain as extraction_openai_functions_chain
add_routes(app, extraction_openai_functions_chain, path="/extraction-openai-functions")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/extraction-openai-functions/playground](http://127.0.0.1:8000/extraction-openai-functions/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/extraction-openai-functions")
```
By default, this package is set to extract the title and author of papers, as specified in the `chain.py` file.
The OpenAI LLM is used for function calling by default. | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:49.110Z",
"loadedUrl": "https://python.langchain.com/docs/templates/extraction-openai-functions/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/extraction-openai-functions/",
"description": "This template uses OpenAI function calling for extraction of structured output from unstructured input text.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4939",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"extraction-openai-functions\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:48 GMT",
"etag": "W/\"a7822a299a1ec020a3457f27895eb4ba\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::c9jwb-1713753948537-eb21fb0e1e3a"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/extraction-openai-functions/",
"property": "og:url"
},
{
"content": "extraction-openai-functions | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template uses OpenAI function calling for extraction of structured output from unstructured input text.",
"property": "og:description"
}
],
"title": "extraction-openai-functions | 🦜️🔗 LangChain"
} | extraction-openai-functions
This template uses OpenAI function calling for extraction of structured output from unstructured input text.
The extraction output schema can be set in chain.py.
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package extraction-openai-functions
If you want to add this to an existing project, you can just run:
langchain app add extraction-openai-functions
And add the following code to your server.py file:
from extraction_openai_functions import chain as extraction_openai_functions_chain
add_routes(app, extraction_openai_functions_chain, path="/extraction-openai-functions")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/extraction-openai-functions/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/extraction-openai-functions")
By default, this package is set to extract the title and author of papers, as specified in the chain.py file.
The OpenAI LLM is used for function calling by default. |
https://python.langchain.com/docs/templates/guardrails-output-parser/ | ## guardrails-output-parser
This template uses [guardrails-ai](https://github.com/guardrails-ai/guardrails) to validate LLM output.
The `GuardrailsOutputParser` is set in `chain.py`.
The default example protects against profanity.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package guardrails-output-parser
```
If you want to add this to an existing project, you can just run:
```
langchain app add guardrails-output-parser
```
And add the following code to your `server.py` file:
```
from guardrails_output_parser.chain import chain as guardrails_output_parser_chain
add_routes(app, guardrails_output_parser_chain, path="/guardrails-output-parser")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/guardrails-output-parser/playground](http://127.0.0.1:8000/guardrails-output-parser/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/guardrails-output-parser")
```
If Guardrails does not find any profanity, then the translated output is returned as is. If Guardrails does find profanity, then an empty string is returned. | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:49.342Z",
"loadedUrl": "https://python.langchain.com/docs/templates/guardrails-output-parser/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/guardrails-output-parser/",
"description": "This template uses guardrails-ai to validate LLM output.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4937",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"guardrails-output-parser\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:48 GMT",
"etag": "W/\"d842aded88f5d72deb2026f8d7c90562\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::wv8xj-1713753948684-e8bfb3d0d980"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/guardrails-output-parser/",
"property": "og:url"
},
{
"content": "guardrails-output-parser | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template uses guardrails-ai to validate LLM output.",
"property": "og:description"
}
],
"title": "guardrails-output-parser | 🦜️🔗 LangChain"
} | guardrails-output-parser
This template uses guardrails-ai to validate LLM output.
The GuardrailsOutputParser is set in chain.py.
The default example protects against profanity.
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package guardrails-output-parser
If you want to add this to an existing project, you can just run:
langchain app add guardrails-output-parser
And add the following code to your server.py file:
from guardrails_output_parser.chain import chain as guardrails_output_parser_chain
add_routes(app, guardrails_output_parser_chain, path="/guardrails-output-parser")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/guardrails-output-parser/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/guardrails-output-parser")
If Guardrails does not find any profanity, then the translated output is returned as is. If Guardrails does find profanity, then an empty string is returned. |
https://python.langchain.com/docs/templates/pii-protected-chatbot/ | ## pii-protected-chatbot
This template creates a chatbot that flags any incoming PII and doesn't pass it to the LLM.
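For illustration only (this is not the template's actual implementation), the flag-and-refuse behaviour can be sketched with a naive regex check in front of the LLM:
```
import re

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableBranch, RunnableLambda
from langchain_openai import ChatOpenAI

# Naive illustrative patterns; a real deployment would use a proper PII detector.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

def contains_pii(inputs: dict) -> bool:
    return any(p.search(inputs["text"]) for p in PII_PATTERNS)

llm_chain = ChatPromptTemplate.from_messages([("human", "{text}")]) | ChatOpenAI()

# If PII is detected, short-circuit with a refusal instead of calling the LLM.
chain = RunnableBranch(
    (contains_pii, RunnableLambda(lambda _: "Sorry, I can't process messages containing PII.")),
    llm_chain,
)

chain.invoke({"text": "My email is [email protected] - can you summarize my account?"})
```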
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
The following environment variables need to be set:
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package pii-protected-chatbot
```
If you want to add this to an existing project, you can just run:
```
langchain app add pii-protected-chatbot
```
And add the following code to your `server.py` file:
```
from pii_protected_chatbot.chain import chain as pii_protected_chatbot
add_routes(app, pii_protected_chatbot, path="/pii_protected_chatbot")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/pii\_protected\_chatbot/playground](http://127.0.0.1:8000/pii_protected_chatbot/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/pii_protected_chatbot")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:49.699Z",
"loadedUrl": "https://python.langchain.com/docs/templates/pii-protected-chatbot/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/pii-protected-chatbot/",
"description": "This template creates a chatbot that flags any incoming PII and doesn't pass it to the LLM.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3752",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"pii-protected-chatbot\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:49 GMT",
"etag": "W/\"d4d37c6b90213bc8034aee38b9f9e011\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::hf2cn-1713753949598-0e09e7ac7b64"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/pii-protected-chatbot/",
"property": "og:url"
},
{
"content": "pii-protected-chatbot | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template creates a chatbot that flags any incoming PII and doesn't pass it to the LLM.",
"property": "og:description"
}
],
"title": "pii-protected-chatbot | 🦜️🔗 LangChain"
} | pii-protected-chatbot
This template creates a chatbot that flags any incoming PII and doesn't pass it to the LLM.
Environment Setup
The following environment variables need to be set:
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U "langchain-cli[serve]"
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package pii-protected-chatbot
If you want to add this to an existing project, you can just run:
langchain app add pii-protected-chatbot
And add the following code to your server.py file:
from pii_protected_chatbot.chain import chain as pii_protected_chatbot
add_routes(app, pii_protected_chatbot, path="/pii_protected_chatbot")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs
We can access the playground at http://127.0.0.1:8000/pii_protected_chatbot/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/pii_protected_chatbot")
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/templates/pirate-speak-configurable/ | ## pirate-speak-configurable
This template converts user input into pirate speak. It shows how you can allow `configurable_alternatives` in the Runnable, allowing you to select from OpenAI, Anthropic, or Cohere as your LLM Provider in the playground (or via API).
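The underlying pattern is roughly the sketch below (a simplified illustration, not the template's exact code): `configurable_alternatives` registers alternative chat models under a single configurable field, which can then be switched per request.
```
from langchain_anthropic import ChatAnthropic
from langchain_cohere import ChatCohere
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

# Register Anthropic and Cohere as alternatives to the default OpenAI model.
llm = ChatOpenAI(temperature=0).configurable_alternatives(
    ConfigurableField(id="llm_provider"),
    default_key="openai",
    anthropic=ChatAnthropic(model="claude-3-haiku-20240307"),
    cohere=ChatCohere(),
)

# Pick the provider at invocation time via the config.
llm.with_config(configurable={"llm_provider": "anthropic"}).invoke("Ahoy, matey!")
```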
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the following environment variables to access all 3 configurable alternative model providers:
* `OPENAI_API_KEY`
* `ANTHROPIC_API_KEY`
* `COHERE_API_KEY`
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package pirate-speak-configurable
```
If you want to add this to an existing project, you can just run:
```
langchain app add pirate-speak-configurable
```
And add the following code to your `server.py` file:
```
from pirate_speak_configurable import chain as pirate_speak_configurable_chain
add_routes(app, pirate_speak_configurable_chain, path="/pirate-speak-configurable")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/pirate-speak-configurable/playground](http://127.0.0.1:8000/pirate-speak-configurable/playground).
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/pirate-speak-configurable")
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:49.868Z",
"loadedUrl": "https://python.langchain.com/docs/templates/pirate-speak-configurable/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/pirate-speak-configurable/",
"description": "This template converts user input into pirate speak. It shows how you can allow",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3752",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"pirate-speak-configurable\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:49 GMT",
"etag": "W/\"e96194acfb668cdca344427fb72333f8\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::dvqkj-1713753949813-979b50556fda"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/pirate-speak-configurable/",
"property": "og:url"
},
{
"content": "pirate-speak-configurable | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template converts user input into pirate speak. It shows how you can allow",
"property": "og:description"
}
],
"title": "pirate-speak-configurable | 🦜️🔗 LangChain"
} | pirate-speak-configurable
This template converts user input into pirate speak. It shows how you can allow configurable_alternatives in the Runnable, allowing you to select from OpenAI, Anthropic, or Cohere as your LLM Provider in the playground (or via API).
Environment Setup
Set the following environment variables to access all 3 configurable alternative model providers:
OPENAI_API_KEY
ANTHROPIC_API_KEY
COHERE_API_KEY
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package pirate-speak-configurable
If you want to add this to an existing project, you can just run:
langchain app add pirate-speak-configurable
And add the following code to your server.py file:
from pirate_speak_configurable import chain as pirate_speak_configurable_chain
add_routes(app, pirate_speak_configurable_chain, path="/pirate-speak-configurable")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs
We can access the playground at http://127.0.0.1:8000/pirate-speak-configurable/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/pirate-speak-configurable")
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/templates/rag-pinecone-multi-query/ | ## rag-pinecone-multi-query
This template performs RAG using Pinecone and OpenAI with a multi-query retriever.
It uses an LLM to generate multiple queries from different perspectives based on the user's input query.
For each query, it retrieves a set of relevant documents and takes the unique union across all queries for answer synthesis.
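A minimal sketch of the multi-query pattern with LangChain's `MultiQueryRetriever` (illustrative only; `vectorstore` is assumed to be an existing Pinecone vector store, and this is not necessarily the template's exact code):
```
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_openai import ChatOpenAI

# `vectorstore` is an assumed, pre-built Pinecone vector store.
retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(),
    llm=ChatOpenAI(temperature=0),
)

# The LLM rewrites the question into several variants, retrieves documents for
# each variant, and returns the unique union of the results.
docs = retriever.get_relevant_documents("What are the main findings of the report?")
```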
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first install the LangChain CLI:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this package, do:
```
langchain app new my-app --package rag-pinecone-multi-query
```
To add this package to an existing project, run:
```
langchain app add rag-pinecone-multi-query
```
And add the following code to your `server.py` file:
```
from rag_pinecone_multi_query import chain as rag_pinecone_multi_query_chain
add_routes(app, rag_pinecone_multi_query_chain, path="/rag-pinecone-multi-query")
```
(Optional) Now, let's configure LangSmith. LangSmith will help us trace, monitor, and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). You can access the playground at [http://127.0.0.1:8000/rag-pinecone-multi-query/playground](http://127.0.0.1:8000/rag-pinecone-multi-query/playground).
To access the template from code, use:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-pinecone-multi-query")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:50.130Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-pinecone-multi-query/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-pinecone-multi-query/",
"description": "This template performs RAG using Pinecone and OpenAI with a multi-query retriever.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-pinecone-multi-query\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:50 GMT",
"etag": "W/\"df52b30927c06f2d5f764bace03645b1\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::cgwfs-1713753949986-14910715f6de"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-pinecone-multi-query/",
"property": "og:url"
},
{
"content": "rag-pinecone-multi-query | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG using Pinecone and OpenAI with a multi-query retriever.",
"property": "og:description"
}
],
"title": "rag-pinecone-multi-query | 🦜️🔗 LangChain"
} | rag-pinecone-multi-query
This template performs RAG using Pinecone and OpenAI with a multi-query retriever.
It uses an LLM to generate multiple queries from different perspectives based on the user's input query.
For each query, it retrieves a set of relevant documents and takes the unique union across all queries for answer synthesis.
Environment Setup
This template uses Pinecone as a vectorstore and requires that PINECONE_API_KEY, PINECONE_ENVIRONMENT, and PINECONE_INDEX are set.
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Usage
To use this package, you should first install the LangChain CLI:
pip install -U langchain-cli
To create a new LangChain project and install this package, do:
langchain app new my-app --package rag-pinecone-multi-query
To add this package to an existing project, run:
langchain app add rag-pinecone-multi-query
And add the following code to your server.py file:
from rag_pinecone_multi_query import chain as rag_pinecone_multi_query_chain
add_routes(app, rag_pinecone_multi_query_chain, path="/rag-pinecone-multi-query")
(Optional) Now, let's configure LangSmith. LangSmith will help us trace, monitor, and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
You can see all templates at http://127.0.0.1:8000/docs
You can access the playground at http://127.0.0.1:8000/rag-pinecone-multi-query/playground
To access the template from code, use:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-pinecone-multi-query") |
https://python.langchain.com/docs/templates/pirate-speak/ | ## pirate-speak
This template converts user input into pirate speak.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package pirate-speak
```
If you want to add this to an existing project, you can just run:
```
langchain app add pirate-speak
```
And add the following code to your `server.py` file:
```
from pirate_speak.chain import chain as pirate_speak_chain
add_routes(app, pirate_speak_chain, path="/pirate-speak")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/pirate-speak/playground](http://127.0.0.1:8000/pirate-speak/playground).
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/pirate-speak")
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:50.326Z",
"loadedUrl": "https://python.langchain.com/docs/templates/pirate-speak/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/pirate-speak/",
"description": "This template converts user input into pirate speak.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"pirate-speak\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:50 GMT",
"etag": "W/\"8dae268bcc192c0706954de9fe457b8b\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::f6976-1713753950099-a61a83117d23"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/pirate-speak/",
"property": "og:url"
},
{
"content": "pirate-speak | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template converts user input into pirate speak.",
"property": "og:description"
}
],
"title": "pirate-speak | 🦜️🔗 LangChain"
} | pirate-speak
This template converts user input into pirate speak.
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package pirate-speak
If you want to add this to an existing project, you can just run:
langchain app add pirate-speak
And add the following code to your server.py file:
from pirate_speak.chain import chain as pirate_speak_chain
add_routes(app, pirate_speak_chain, path="/pirate-speak")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs
We can access the playground at http://127.0.0.1:8000/pirate-speak/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/pirate-speak")
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/templates/intel-rag-xeon/ | This template performs RAG using Chroma and Text Generation Inference on Intel® Xeon® Scalable Processors. Intel® Xeon® Scalable processors feature built-in accelerators for more performance-per-core and unmatched AI performance, with advanced security technologies for the most in-demand workload requirements—all while offering the greatest cloud choice and application portability, please check [Intel® Xeon® Scalable Processors](https://www.intel.com/content/www/us/en/products/details/processors/xeon/scalable.html).
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
To use [🤗 text-generation-inference](https://github.com/huggingface/text-generation-inference) on Intel® Xeon® Scalable Processors, please follow these steps:
### Launch a local server instance on Intel Xeon Server:[](#launch-a-local-server-instance-on-intel-xeon-server "Direct link to Launch a local server instance on Intel Xeon Server:")
```
model=Intel/neural-chat-7b-v3-3
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
docker run --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.4 --model-id $model
```
For gated models such as `LLAMA-2`, you will have to pass `-e HUGGING_FACE_HUB_TOKEN=<token>` to the `docker run` command above with a valid Hugging Face Hub read token.
Please follow this [huggingface token](https://huggingface.co/docs/hub/security-tokens) link to get an access token, and export the `HUGGINGFACEHUB_API_TOKEN` environment variable with the token.
```
export HUGGINGFACEHUB_API_TOKEN=<token>
```
Send a request to check if the endpoint is working:
```
curl localhost:8080/generate -X POST -d '{"inputs":"Which NFL team won the Super Bowl in the 2010 season?","parameters":{"max_new_tokens":128, "do_sample": true}}' -H 'Content-Type: application/json'
```
For more details, please refer to [text-generation-inference](https://github.com/huggingface/text-generation-inference).
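The same check can also be done from Python with `requests` (a minimal sketch; adjust the host and port if you changed them):
```
import requests

# Mirrors the curl example above against the local TGI server.
payload = {
    "inputs": "Which NFL team won the Super Bowl in the 2010 season?",
    "parameters": {"max_new_tokens": 128, "do_sample": True},
}
response = requests.post("http://localhost:8080/generate", json=payload, timeout=60)
print(response.json())
```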
## Populating with data[](#populating-with-data "Direct link to Populating with data")
If you want to populate the DB with some example data, you can run the below commands:
```
poetry install
poetry run python ingest.py
```
The script processes and stores sections from the Edgar 10-K filing data for Nike, `nke-10k-2023.pdf`, into a Chroma database.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package intel-rag-xeon
```
If you want to add this to an existing project, you can just run:
```
langchain app add intel-rag-xeon
```
And add the following code to your `server.py` file:
```
from intel_rag_xeon import chain as xeon_rag_chain
add_routes(app, xeon_rag_chain, path="/intel-rag-xeon")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/intel-rag-xeon/playground](http://127.0.0.1:8000/intel-rag-xeon/playground).
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/intel-rag-xeon")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:50.435Z",
"loadedUrl": "https://python.langchain.com/docs/templates/intel-rag-xeon/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/intel-rag-xeon/",
"description": "This template performs RAG using Chroma and Text Generation Inference on Intel® Xeon® Scalable Processors.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3754",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"intel-rag-xeon\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:50 GMT",
"etag": "W/\"d099a458fce3c586fecd1367348cbbd5\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::nqbp6-1713753950240-592096d9279c"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/intel-rag-xeon/",
"property": "og:url"
},
{
"content": "RAG example on Intel Xeon | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG using Chroma and Text Generation Inference on Intel® Xeon® Scalable Processors.",
"property": "og:description"
}
],
"title": "RAG example on Intel Xeon | 🦜️🔗 LangChain"
This template performs RAG using Chroma and Text Generation Inference on Intel® Xeon® Scalable Processors. Intel® Xeon® Scalable processors feature built-in accelerators for more performance-per-core and unmatched AI performance, with advanced security technologies for the most in-demand workload requirements—all while offering the greatest cloud choice and application portability. For more information, please check Intel® Xeon® Scalable Processors.
Environment Setup
To use 🤗 text-generation-inference on Intel® Xeon® Scalable Processors, please follow these steps:
Launch a local server instance on Intel Xeon Server:
model=Intel/neural-chat-7b-v3-3
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
docker run --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.4 --model-id $model
For gated models such as LLAMA-2, you will have to pass -e HUGGING_FACE_HUB_TOKEN=<token> to the docker run command above with a valid Hugging Face Hub read token.
Please follow this huggingface token link to get an access token, and export the HUGGINGFACEHUB_API_TOKEN environment variable with the token.
export HUGGINGFACEHUB_API_TOKEN=<token>
Send a request to check if the endpoint is working:
curl localhost:8080/generate -X POST -d '{"inputs":"Which NFL team won the Super Bowl in the 2010 season?","parameters":{"max_new_tokens":128, "do_sample": true}}' -H 'Content-Type: application/json'
For more details, please refer to text-generation-inference.
Populating with data
If you want to populate the DB with some example data, you can run the below commands:
poetry install
poetry run python ingest.py
The script processes and stores sections from the Edgar 10-K filing data for Nike, nke-10k-2023.pdf, into a Chroma database.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package intel-rag-xeon
If you want to add this to an existing project, you can just run:
langchain app add intel-rag-xeon
And add the following code to your server.py file:
from intel_rag_xeon import chain as xeon_rag_chain
add_routes(app, xeon_rag_chain, path="/intel-rag-xeon")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs
We can access the playground at http://127.0.0.1:8000/intel-rag-xeon/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/intel-rag-xeon") |
https://python.langchain.com/docs/templates/rag-pinecone-rerank/ | ## rag-pinecone-rerank
This template performs RAG using Pinecone and OpenAI along with [Cohere to perform re-ranking](https://txt.cohere.com/rerank/) on returned documents.
Re-ranking provides a way to rank retrieved documents using specified filters or criteria.
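A common way to express this pattern in LangChain is a `ContextualCompressionRetriever` that wraps a base retriever with `CohereRerank` (a sketch of the general approach, assuming an existing Pinecone vector store; not necessarily the template's exact wiring):
```
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank

# `vectorstore` is an assumed, pre-built Pinecone vector store.
base_retriever = vectorstore.as_retriever(search_kwargs={"k": 10})

# Cohere re-scores the 10 candidates and keeps only the top 3.
compressor = CohereRerank(top_n=3)
retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=base_retriever,
)

docs = retriever.get_relevant_documents("What were the key findings?")
```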
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Set the `COHERE_API_KEY` environment variable to access the Cohere ReRank.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-pinecone-rerank
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-pinecone-rerank
```
And add the following code to your `server.py` file:
```
from rag_pinecone_rerank import chain as rag_pinecone_rerank_chain
add_routes(app, rag_pinecone_rerank_chain, path="/rag-pinecone-rerank")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/rag-pinecone-rerank/playground](http://127.0.0.1:8000/rag-pinecone-rerank/playground).
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-pinecone-rerank")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:50.604Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-pinecone-rerank/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-pinecone-rerank/",
"description": "This template performs RAG using Pinecone and OpenAI along with Cohere to perform re-ranking on returned documents.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4925",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-pinecone-rerank\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:50 GMT",
"etag": "W/\"162dbe41235b90577e8b8ff9c052a52e\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::ncfnt-1713753950246-5027d8cb5bd9"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-pinecone-rerank/",
"property": "og:url"
},
{
"content": "rag-pinecone-rerank | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG using Pinecone and OpenAI along with Cohere to perform re-ranking on returned documents.",
"property": "og:description"
}
],
"title": "rag-pinecone-rerank | 🦜️🔗 LangChain"
} | rag-pinecone-rerank
This template performs RAG using Pinecone and OpenAI along with Cohere to perform re-ranking on returned documents.
Re-ranking provides a way to rank retrieved documents using specified filters or criteria.
Environment Setup
This template uses Pinecone as a vectorstore and requires that PINECONE_API_KEY, PINECONE_ENVIRONMENT, and PINECONE_INDEX are set.
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Set the COHERE_API_KEY environment variable to access the Cohere ReRank.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-pinecone-rerank
If you want to add this to an existing project, you can just run:
langchain app add rag-pinecone-rerank
And add the following code to your server.py file:
from rag_pinecone_rerank import chain as rag_pinecone_rerank_chain
add_routes(app, rag_pinecone_rerank_chain, path="/rag-pinecone-rerank")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs
We can access the playground at http://127.0.0.1:8000/rag-pinecone-rerank/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-pinecone-rerank") |
https://python.langchain.com/docs/templates/hyde/ | ## hyde
This template uses HyDE with RAG.
HyDE stands for Hypothetical Document Embeddings. It is a retrieval method that enhances retrieval by generating a hypothetical document for an incoming query.
The document is then embedded, and that embedding is utilized to look up real documents that are similar to the hypothetical document.
The underlying concept is that the hypothetical document may be closer in the embedding space than the query.
For a more detailed description, see the paper [here](https://arxiv.org/abs/2212.10496).
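A minimal sketch of the HyDE idea (illustrative only; `embeddings` and `vectorstore` are assumed to already exist):
```
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# 1. Ask the LLM to write a hypothetical passage that answers the question.
prompt = ChatPromptTemplate.from_template(
    "Write a short passage that answers the question.\nQuestion: {question}\nPassage:"
)
hypothetical_doc = (prompt | ChatOpenAI(temperature=0) | StrOutputParser()).invoke(
    {"question": "What are the side effects of the proposed method?"}
)

# 2. Embed the hypothetical passage and retrieve real documents close to it.
vector = embeddings.embed_query(hypothetical_doc)
docs = vectorstore.similarity_search_by_vector(vector, k=4)
```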
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package hyde
```
If you want to add this to an existing project, you can just run:
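```
langchain app add hyde
```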
And add the following code to your `server.py` file:
```
from hyde.chain import chain as hyde_chain
add_routes(app, hyde_chain, path="/hyde")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/hyde/playground](http://127.0.0.1:8000/hyde/playground).
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/hyde")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:50.765Z",
"loadedUrl": "https://python.langchain.com/docs/templates/hyde/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/hyde/",
"description": "This template uses HyDE with RAG.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4938",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"hyde\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:50 GMT",
"etag": "W/\"48e51925e9223445a2eb1898d25323c5\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::stqkb-1713753950344-a8a6df401595"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/hyde/",
"property": "og:url"
},
{
"content": "hyde | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template uses HyDE with RAG.",
"property": "og:description"
}
],
"title": "hyde | 🦜️🔗 LangChain"
} | hyde
This template uses HyDE with RAG.
HyDE stands for Hypothetical Document Embeddings. It is a retrieval method that enhances retrieval by generating a hypothetical document for an incoming query.
The document is then embedded, and that embedding is utilized to look up real documents that are similar to the hypothetical document.
The underlying concept is that the hypothetical document may be closer in the embedding space than the query.
For a more detailed description, see the paper here.
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package hyde
If you want to add this to an existing project, you can just run:
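langchain app add hyde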
And add the following code to your server.py file:
from hyde.chain import chain as hyde_chain
add_routes(app, hyde_chain, path="/hyde")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs
We can access the playground at http://127.0.0.1:8000/hyde/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/hyde") |
https://python.langchain.com/docs/templates/hybrid-search-weaviate/ | ## Hybrid Search in Weaviate
This template shows you how to use the hybrid search feature in Weaviate. Hybrid search combines multiple search algorithms to improve the accuracy and relevance of search results.
Weaviate uses both sparse and dense vectors to represent the meaning and context of search queries and documents. The results use a combination of `bm25` and vector search ranking to return the top results.
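As a toy illustration of the idea (not Weaviate's actual implementation), a hybrid score can be thought of as a weighted blend of a normalized keyword (bm25) score and a normalized vector-similarity score, controlled by a parameter commonly called `alpha`:
```
def hybrid_score(bm25_score: float, vector_score: float, alpha: float = 0.5) -> float:
    """Blend a normalized bm25 score with a normalized vector score.

    alpha=0 relies only on keywords; alpha=1 relies only on vectors.
    """
    return (1 - alpha) * bm25_score + alpha * vector_score

# A document that matches keywords strongly but is semantically weaker.
print(hybrid_score(bm25_score=0.9, vector_score=0.4, alpha=0.5))  # 0.65
```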
## Configurations[](#configurations "Direct link to Configurations")
Connect to your hosted Weaviate Vectorstore by setting a few env variables in `chain.py`:
* `WEAVIATE_ENVIRONMENT`
* `WEAVIATE_API_KEY`
You will also need to set your `OPENAI_API_KEY` to use the OpenAI models.
## Get Started[](#get-started "Direct link to Get Started")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package hybrid-search-weaviate
```
If you want to add this to an existing project, you can just run:
```
langchain app add hybrid-search-weaviate
```
And add the following code to your `server.py` file:
```
from hybrid_search_weaviate import chain as hybrid_search_weaviate_chain
add_routes(app, hybrid_search_weaviate_chain, path="/hybrid-search-weaviate")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/hybrid-search-weaviate/playground](http://127.0.0.1:8000/hybrid-search-weaviate/playground).
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/hybrid-search-weaviate")
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:50.878Z",
"loadedUrl": "https://python.langchain.com/docs/templates/hybrid-search-weaviate/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/hybrid-search-weaviate/",
"description": "This template shows you how to use the hybrid search feature in Weaviate. Hybrid search combines multiple search algorithms to improve the accuracy and relevance of search results.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4938",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"hybrid-search-weaviate\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:50 GMT",
"etag": "W/\"541767e6437207d9c7479e4b28c1f40e\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::ffxhk-1713753950343-f12e7cd3b37f"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/hybrid-search-weaviate/",
"property": "og:url"
},
{
"content": "Hybrid Search in Weaviate | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template shows you how to use the hybrid search feature in Weaviate. Hybrid search combines multiple search algorithms to improve the accuracy and relevance of search results.",
"property": "og:description"
}
],
"title": "Hybrid Search in Weaviate | 🦜️🔗 LangChain"
} | Hybrid Search in Weaviate
This template shows you how to use the hybrid search feature in Weaviate. Hybrid search combines multiple search algorithms to improve the accuracy and relevance of search results.
Weaviate uses both sparse and dense vectors to represent the meaning and context of search queries and documents. The results use a combination of bm25 and vector search ranking to return the top results.
Configurations
Connect to your hosted Weaviate Vectorstore by setting a few env variables in chain.py:
WEAVIATE_ENVIRONMENT
WEAVIATE_API_KEY
You will also need to set your OPENAI_API_KEY to use the OpenAI models.
Get Started
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package hybrid-search-weaviate
If you want to add this to an existing project, you can just run:
langchain app add hybrid-search-weaviate
And add the following code to your server.py file:
from hybrid_search_weaviate import chain as hybrid_search_weaviate_chain
add_routes(app, hybrid_search_weaviate_chain, path="/hybrid-search-weaviate")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs
We can access the playground at http://127.0.0.1:8000/hybrid-search-weaviate/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/hybrid-search-weaviate")
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/templates/rag-pinecone/ | ## rag-pinecone
This template performs RAG using Pinecone and OpenAI.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-pinecone
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-pinecone
```
And add the following code to your `server.py` file:
```
from rag_pinecone import chain as rag_pinecone_chain
add_routes(app, rag_pinecone_chain, path="/rag-pinecone")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/rag-pinecone/playground](http://127.0.0.1:8000/rag-pinecone/playground).
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-pinecone")
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:51.003Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-pinecone/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-pinecone/",
"description": "This template performs RAG using Pinecone and OpenAI.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3750",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-pinecone\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:50 GMT",
"etag": "W/\"a77489134c62c40ef52e70472113be15\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::82lsb-1713753950712-2b6104df7ec1"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-pinecone/",
"property": "og:url"
},
{
"content": "rag-pinecone | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG using Pinecone and OpenAI.",
"property": "og:description"
}
],
"title": "rag-pinecone | 🦜️🔗 LangChain"
} | rag-pinecone
This template performs RAG using Pinecone and OpenAI.
Environment Setup
This template uses Pinecone as a vectorstore and requires that PINECONE_API_KEY, PINECONE_ENVIRONMENT, and PINECONE_INDEX are set.
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-pinecone
If you want to add this to an existing project, you can just run:
langchain app add rag-pinecone
And add the following code to your server.py file:
from rag_pinecone import chain as rag_pinecone_chain
add_routes(app, rag_pinecone_chain, path="/rag-pinecone")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs
We can access the playground at http://127.0.0.1:8000/rag-pinecone/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-pinecone")
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/templates/propositional-retrieval/ | ## propositional-retrieval
This template demonstrates the multi-vector indexing strategy proposed in Chen et al.'s [Dense X Retrieval: What Retrieval Granularity Should We Use?](https://arxiv.org/abs/2312.06648). The prompt, which you can [try out on the hub](https://smith.langchain.com/hub/wfh/proposal-indexing), directs an LLM to generate de-contextualized "propositions" which can be vectorized to increase the retrieval accuracy. You can see the full definition in `proposal_chain.py`.
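Conceptually, each generated proposition is embedded and indexed on its own while pointing back to its parent document, which is what LangChain's `MultiVectorRetriever` supports. A rough sketch of that indexing step (illustrative only; `vectorstore`, `parent_documents`, and `generate_propositions` are assumed stand-ins, not the template's actual code):
```
import uuid

from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import InMemoryStore
from langchain_core.documents import Document

docstore = InMemoryStore()
retriever = MultiVectorRetriever(vectorstore=vectorstore, docstore=docstore, id_key="doc_id")

for parent_doc in parent_documents:  # assumed iterable of parent Documents
    doc_id = str(uuid.uuid4())
    # `generate_propositions` stands in for the LLM-driven proposal chain.
    propositions = generate_propositions(parent_doc.page_content)
    retriever.vectorstore.add_documents(
        [Document(page_content=p, metadata={"doc_id": doc_id}) for p in propositions]
    )
    docstore.mset([(doc_id, parent_doc)])
```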
## Storage[](#storage "Direct link to Storage")
For this demo, we index a simple academic paper using the RecursiveUrlLoader, and store all retriever information locally (using chroma and a bytestore stored on the local filesystem). You can modify the storage layer in `storage.py`.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access `gpt-3.5` and the OpenAI Embeddings classes.
## Indexing[](#indexing "Direct link to Indexing")
Create the index by running the following:
```
poetry install
poetry run python propositional_retrieval/ingest.py
```
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package propositional-retrieval
```
If you want to add this to an existing project, you can just run:
```
langchain app add propositional-retrieval
```
And add the following code to your `server.py` file:
```
from propositional_retrieval import chain
add_routes(app, chain, path="/propositional-retrieval")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/propositional-retrieval/playground](http://127.0.0.1:8000/propositional-retrieval/playground).
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/propositional-retrieval")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:51.172Z",
"loadedUrl": "https://python.langchain.com/docs/templates/propositional-retrieval/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/propositional-retrieval/",
"description": "This template demonstrates the multi-vector indexing strategy proposed by Chen, et. al.'s Dense X Retrieval: What Retrieval Granularity Should We Use?. The prompt, which you can try out on the hub, directs an LLM to generate de-contextualized \"propositions\" which can be vectorized to increase the retrieval accuracy. You can see the full definition in proposal_chain.py.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3753",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"propositional-retrieval\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:51 GMT",
"etag": "W/\"77c7899fc210be82a048f6bb51b2c34f\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::c8dx6-1713753951015-5e9478b9229d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/propositional-retrieval/",
"property": "og:url"
},
{
"content": "propositional-retrieval | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template demonstrates the multi-vector indexing strategy proposed by Chen, et. al.'s Dense X Retrieval: What Retrieval Granularity Should We Use?. The prompt, which you can try out on the hub, directs an LLM to generate de-contextualized \"propositions\" which can be vectorized to increase the retrieval accuracy. You can see the full definition in proposal_chain.py.",
"property": "og:description"
}
],
"title": "propositional-retrieval | 🦜️🔗 LangChain"
} | propositional-retrieval
This template demonstrates the multi-vector indexing strategy proposed in Chen et al.'s Dense X Retrieval: What Retrieval Granularity Should We Use? The prompt, which you can try out on the hub, directs an LLM to generate de-contextualized "propositions" which can be vectorized to increase the retrieval accuracy. You can see the full definition in proposal_chain.py.
Storage
For this demo, we index a simple academic paper using the RecursiveUrlLoader, and store all retriever information locally (using chroma and a bytestore stored on the local filesystem). You can modify the storage layer in storage.py.
Environment Setup
Set the OPENAI_API_KEY environment variable to access gpt-3.5 and the OpenAI Embeddings classes.
Indexing
Create the index by running the following:
poetry install
poetry run python propositional_retrieval/ingest.py
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package propositional-retrieval
If you want to add this to an existing project, you can just run:
langchain app add propositional-retrieval
And add the following code to your server.py file:
from propositional_retrieval import chain
add_routes(app, chain, path="/propositional-retrieval")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs
We can access the playground at http://127.0.0.1:8000/propositional-retrieval/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/propositional-retrieval") |
https://python.langchain.com/docs/templates/plate-chain/ | ## plate-chain
This template enables parsing of data from laboratory plates.
In the context of biochemistry or molecular biology, laboratory plates are commonly used tools to hold samples in a grid-like format.
This can parse the resulting data into standardized (e.g., JSON) format for further processing.
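As a purely hypothetical illustration of what a standardized result could look like for part of a 96-well plate (the template defines its own output schema; this is not it), the parsed data might map well coordinates to measured values:
```
# Hypothetical example only; the actual schema is defined by the template.
parsed_plate = {
    "plate_id": "plate-001",
    "wells": {
        "A1": 0.12,
        "A2": 0.95,
        "B1": 0.07,
    },
}
```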
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage[](#usage "Direct link to Usage")
To utilize plate-chain, you must have the LangChain CLI installed:
```
pip install -U langchain-cli
```
Creating a new LangChain project and installing plate-chain as the only package can be done with:
```
langchain app new my-app --package plate-chain
```
If you wish to add this to an existing project, simply run:
```
langchain app add plate-chain
```
Then add the following code to your `server.py` file:
```
from plate_chain import chain as plate_chain
add_routes(app, plate_chain, path="/plate-chain")
```
(Optional) For configuring LangSmith, which helps trace, monitor and debug LangChain applications, use the following code:
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you're in this directory, you can start a LangServe instance directly by:
This starts the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
All templates can be viewed at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). Access the playground at [http://127.0.0.1:8000/plate-chain/playground](http://127.0.0.1:8000/plate-chain/playground).
You can access the template from code with:
```
from langserve.client import RemoteRunnablerunnable = RemoteRunnable("http://localhost:8000/plate-chain")
```
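From there, the remote chain behaves like any other runnable; a hypothetical call (the input below is illustrative, so check the playground for the chain's actual input schema) could be:

```
# Hypothetical invocation of the remote plate-chain runnable.
result = runnable.invoke("raw plate-reader output ...")
print(result)
```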
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:51.400Z",
"loadedUrl": "https://python.langchain.com/docs/templates/plate-chain/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/plate-chain/",
"description": "This template enables parsing of data from laboratory plates.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3753",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"plate-chain\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:51 GMT",
"etag": "W/\"0ff6e6e375f7127f2f259f30332d3171\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::v782c-1713753951119-157761d6f28d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/plate-chain/",
"property": "og:url"
},
{
"content": "plate-chain | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template enables parsing of data from laboratory plates.",
"property": "og:description"
}
],
"title": "plate-chain | 🦜️🔗 LangChain"
} | plate-chain
This template enables parsing of data from laboratory plates.
In the context of biochemistry or molecular biology, laboratory plates are commonly used tools to hold samples in a grid-like format.
It can parse the resulting data into a standardized format (e.g., JSON) for further processing.
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Usage
To utilize plate-chain, you must have the LangChain CLI installed:
pip install -U langchain-cli
Creating a new LangChain project and installing plate-chain as the only package can be done with:
langchain app new my-app --package plate-chain
If you wish to add this to an existing project, simply run:
langchain app add plate-chain
Then add the following code to your server.py file:
from plate_chain import chain as plate_chain
add_routes(app, plate_chain, path="/plate-chain")
(Optional) For configuring LangSmith, which helps trace, monitor and debug LangChain applications, use the following code:
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you're in this directory, you can start a LangServe instance directly by:
This starts the FastAPI app with a server running locally at http://localhost:8000
All templates can be viewed at http://127.0.0.1:8000/docs Access the playground at http://127.0.0.1:8000/plate-chain/playground
You can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/plate-chain")
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/templates/python-lint/ | ## python-lint
This agent specializes in generating high-quality Python code with a focus on proper formatting and linting. It uses `black`, `ruff`, and `mypy` to ensure the code meets standard quality checks.
This streamlines the coding process by integrating and responding to these checks, resulting in reliable and consistent code output.
It cannot actually execute the code it writes, as code execution may introduce additional dependencies and potential security vulnerabilities. This makes the agent both a secure and efficient solution for code generation tasks.
You can use it to generate Python code directly, or network it with planning and execution agents.
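To give a sense of the checks involved, here is an illustrative sketch (not the template's actual implementation) of validating a generated code string with the same three tools:

```
import subprocess
import tempfile

def lint_check(code: str) -> dict:
    """Run black, ruff and mypy over a code string and collect their verdicts."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    results = {}
    for cmd in (["black", "--check", path], ["ruff", "check", path], ["mypy", path]):
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results[cmd[0]] = {"ok": proc.returncode == 0, "output": proc.stdout + proc.stderr}
    return results
```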
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
* Install `black`, `ruff`, and `mypy`: `pip install -U black ruff mypy`
* Set `OPENAI_API_KEY` environment variable.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package python-lint
```
If you want to add this to an existing project, you can just run:
```
langchain app add python-lint
```
And add the following code to your `server.py` file:
```
from python_lint import agent_executor as python_lint_agent
add_routes(app, python_lint_agent, path="/python-lint")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/python-lint/playground](http://127.0.0.1:8000/python-lint/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/python-lint")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:51.579Z",
"loadedUrl": "https://python.langchain.com/docs/templates/python-lint/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/python-lint/",
"description": "This agent specializes in generating high-quality Python code with a focus on proper formatting and linting. It uses black, ruff, and mypy to ensure the code meets standard quality checks.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"python-lint\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:51 GMT",
"etag": "W/\"fe2c8433ee70819e8355755d873312f8\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::h6m2t-1713753951476-20d593e3455f"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/python-lint/",
"property": "og:url"
},
{
"content": "python-lint | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This agent specializes in generating high-quality Python code with a focus on proper formatting and linting. It uses black, ruff, and mypy to ensure the code meets standard quality checks.",
"property": "og:description"
}
],
"title": "python-lint | 🦜️🔗 LangChain"
} | python-lint
This agent specializes in generating high-quality Python code with a focus on proper formatting and linting. It uses black, ruff, and mypy to ensure the code meets standard quality checks.
This streamlines the coding process by integrating and responding to these checks, resulting in reliable and consistent code output.
It cannot actually execute the code it writes, as code execution may introduce additional dependencies and potential security vulnerabilities. This makes the agent both a secure and efficient solution for code generation tasks.
You can use it to generate Python code directly, or network it with planning and execution agents.
Environment Setup
Install black, ruff, and mypy: pip install -U black ruff mypy
Set OPENAI_API_KEY environment variable.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package python-lint
If you want to add this to an existing project, you can just run:
langchain app add python-lint
And add the following code to your server.py file:
from python_lint import agent_executor as python_lint_agent
add_routes(app, python_lint_agent, path="/python-lint")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server is running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/python-lint/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/python-lint") |
https://python.langchain.com/docs/templates/rag-redis-multi-modal-multi-vector/ | ## rag-redis-multi-modal-multi-vector
Multi-modal LLMs enable visual assistants that can perform question-answering about images.
This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.
It uses GPT-4V to create image summaries for each slide, embeds the summaries, and stores them in Redis.
Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.
## Input[](#input "Direct link to Input")
Supply a slide deck as PDF in the `/docs` directory.
By default, this template has a slide deck about recent earnings from NVIDIA.
Example questions to ask can be:
```
1/ how much can H100 TensorRT improve LLama2 inference performance?
2/ what is the % change in GPU accelerated applications from 2020 to 2023?
```
To create an index of the slide deck, run:
```
poetry install
poetry shell
python ingest.py
```
## Storage[](#storage "Direct link to Storage")
Here is the process the template will use to create an index of the slides (see [blog](https://blog.langchain.dev/multi-modal-rag-template/)):
* Extract the slides as a collection of images
* Use GPT-4V to summarize each image
* Embed the image summaries using text embeddings with a link to the original images
* Retrieve relevant image based on similarity between the image summary and the user input question
* Pass those images to GPT-4V for answer synthesis
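A rough sketch of that multi-vector wiring (the index name, key name and URL below are illustrative, and a running Redis instance is assumed at that URL):

```
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain_community.storage import RedisStore
from langchain_community.vectorstores import Redis
from langchain_openai import OpenAIEmbeddings

vectorstore = Redis(
    redis_url="redis://localhost:6379",
    index_name="image_summaries",          # illustrative index name
    embedding=OpenAIEmbeddings(),
)
retriever = MultiVectorRetriever(
    vectorstore=vectorstore,                                     # image-summary embeddings
    byte_store=RedisStore(redis_url="redis://localhost:6379"),   # raw image bytes
    id_key="doc_id",
)
```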
### Redis[](#redis "Direct link to Redis")
This template uses [Redis](https://redis.com/) to power the [MultiVectorRetriever](https://python.langchain.com/docs/modules/data_connection/retrievers/multi_vector) including:
* Redis as the [VectorStore](https://python.langchain.com/docs/integrations/vectorstores/redis) (to store + index image summary embeddings)
* Redis as the [ByteStore](https://python.langchain.com/docs/integrations/stores/redis) (to store images)
Make sure to deploy a Redis instance either in the [cloud](https://redis.com/try-free) (free) or locally with [docker](https://redis.io/docs/install/install-stack/docker/).
This will give you an accessible Redis endpoint that you can use as a URL. If deploying locally, simply use `redis://localhost:6379`.
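For example, a local instance can be started with Docker (image name per the Redis Stack install docs):

```
docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
```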
## LLM[](#llm "Direct link to LLM")
The app will retrieve images based on similarity between the text input and the image summary (text), and pass the images to GPT-4V for answer synthesis.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI GPT-4V.
Set `REDIS_URL` environment variable to access your Redis database.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-redis-multi-modal-multi-vector
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-redis-multi-modal-multi-vector
```
And add the following code to your `server.py` file:
```
from rag_redis_multi_modal_multi_vector import chain as rag_redis_multi_modal_chain_mv
add_routes(app, rag_redis_multi_modal_chain_mv, path="/rag-redis-multi-modal-multi-vector")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-redis-multi-modal-multi-vector/playground](http://127.0.0.1:8000/rag-redis-multi-modal-multi-vector/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-redis-multi-modal-multi-vector")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:51.638Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-redis-multi-modal-multi-vector/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-redis-multi-modal-multi-vector/",
"description": "Multi-modal LLMs enable visual assistants that can perform question-answering about images.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4926",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-redis-multi-modal-multi-vector\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:51 GMT",
"etag": "W/\"9d8bf6616e392931c81f64a32a563a1a\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::4vch7-1713753951501-1f9f2adbf38f"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-redis-multi-modal-multi-vector/",
"property": "og:url"
},
{
"content": "rag-redis-multi-modal-multi-vector | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Multi-modal LLMs enable visual assistants that can perform question-answering about images.",
"property": "og:description"
}
],
"title": "rag-redis-multi-modal-multi-vector | 🦜️🔗 LangChain"
} | rag-redis-multi-modal-multi-vector
Multi-modal LLMs enable visual assistants that can perform question-answering about images.
This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.
It uses GPT-4V to create image summaries for each slide, embeds the summaries, and stores them in Redis.
Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.
Input
Supply a slide deck as PDF in the /docs directory.
By default, this template has a slide deck about recent earnings from NVIDIA.
Example questions to ask can be:
1/ how much can H100 TensorRT improve LLama2 inference performance?
2/ what is the % change in GPU accelerated applications from 2020 to 2023?
To create an index of the slide deck, run:
poetry install
poetry shell
python ingest.py
Storage
Here is the process the template will use to create an index of the slides (see blog):
Extract the slides as a collection of images
Use GPT-4V to summarize each image
Embed the image summaries using text embeddings with a link to the original images
Retrieve relevant image based on similarity between the image summary and the user input question
Pass those images to GPT-4V for answer synthesis
Redis
This template uses Redis to power the MultiVectorRetriever including:
Redis as the VectorStore (to store + index image summary embeddings)
Redis as the ByteStore (to store images)
Make sure to deploy a Redis instance either in the cloud (free) or locally with docker.
This will give you an accessible Redis endpoint that you can use as a URL. If deploying locally, simply use redis://localhost:6379.
LLM
The app will retrieve images based on similarity between the text input and the image summary (text), and pass the images to GPT-4V for answer synthesis.
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI GPT-4V.
Set REDIS_URL environment variable to access your Redis database.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-redis-multi-modal-multi-vector
If you want to add this to an existing project, you can just run:
langchain app add rag-redis-multi-modal-multi-vector
And add the following code to your server.py file:
from rag_redis_multi_modal_multi_vector import chain as rag_redis_multi_modal_chain_mv
add_routes(app, rag_redis_multi_modal_chain_mv, path="/rag-redis-multi-modal-multi-vector")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-redis-multi-modal-multi-vector/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-redis-multi-modal-multi-vector") |
https://python.langchain.com/docs/templates/llama2-functions/ | ## llama2-functions
This template performs extraction of structured data from unstructured data using a [LLaMA2 model that supports a specified JSON output schema](https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md).
The extraction schema can be set in `chain.py`.
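As a purely illustrative example (not the schema shipped in `chain.py`), an extraction schema of the kind the model can be constrained to might look like:

```
{
  "type": "object",
  "properties": {
    "paper_title": {"type": "string"},
    "authors": {"type": "array", "items": {"type": "string"}},
    "year": {"type": "integer"}
  },
  "required": ["paper_title"]
}
```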
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
This will use a [LLaMA2-13b model hosted by Replicate](https://replicate.com/andreasjansson/llama-2-13b-chat-gguf/versions).
Ensure that `REPLICATE_API_TOKEN` is set in your environment.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package llama2-functions
```
If you want to add this to an existing project, you can just run:
```
langchain app add llama2-functions
```
And add the following code to your `server.py` file:
```
from llama2_functions import chain as llama2_functions_chain
add_routes(app, llama2_functions_chain, path="/llama2-functions")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/llama2-functions/playground](http://127.0.0.1:8000/llama2-functions/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/llama2-functions")
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:51.713Z",
"loadedUrl": "https://python.langchain.com/docs/templates/llama2-functions/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/llama2-functions/",
"description": "This template performs extraction of structured data from unstructured data using a LLaMA2 model that supports a specified JSON output schema.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3755",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"llama2-functions\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:51 GMT",
"etag": "W/\"da3764c5cf0f07cc943b3a6e179decae\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::rsd2t-1713753951570-d69c7e6b6638"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/llama2-functions/",
"property": "og:url"
},
{
"content": "llama2-functions | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs extraction of structured data from unstructured data using a LLaMA2 model that supports a specified JSON output schema.",
"property": "og:description"
}
],
"title": "llama2-functions | 🦜️🔗 LangChain"
} | llama2-functions
This template performs extraction of structured data from unstructured data using a LLaMA2 model that supports a specified JSON output schema.
The extraction schema can be set in chain.py.
Environment Setup
This will use a LLaMA2-13b model hosted by Replicate.
Ensure that REPLICATE_API_TOKEN is set in your environment.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package llama2-functions
If you want to add this to an existing project, you can just run:
langchain app add llama2-functions
And add the following code to your server.py file:
from llama2_functions import chain as llama2_functions_chain
add_routes(app, llama2_functions_chain, path="/llama2-functions")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/llama2-functions/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/llama2-functions")
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/templates/rag-self-query/ | ## rag-self-query
This template performs RAG using the self-query retrieval technique. The main idea is to let an LLM convert unstructured queries into structured queries. See the [docs for more on how this works](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query).
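As a rough sketch of the idea (the metadata fields below are invented for illustration; the template defines its own schema and vector store wiring):

```
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_community.vectorstores import ElasticsearchStore
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

metadata_field_info = [
    AttributeInfo(name="author", description="Author of the document", type="string"),
    AttributeInfo(name="year", description="Year the document was written", type="integer"),
]

vectorstore = ElasticsearchStore(
    index_name="workplace-docs",      # illustrative index name
    embedding=OpenAIEmbeddings(),
    es_url="http://localhost:9200",
)

retriever = SelfQueryRetriever.from_llm(
    llm=ChatOpenAI(temperature=0),
    vectorstore=vectorstore,
    document_contents="Internal company documents",
    metadata_field_info=metadata_field_info,
)

docs = retriever.get_relevant_documents("documents written by Alice after 2020")
```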
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
In this template we'll use OpenAI models and an Elasticsearch vector store, but the approach generalizes to all LLMs/ChatModels and [a number of vector stores](https://python.langchain.com/docs/integrations/retrievers/self_query/).
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
To connect to your Elasticsearch instance, use the following environment variables:
```
export ELASTIC_CLOUD_ID=<CLOUD_ID>
export ELASTIC_USERNAME=<CLOUD_USERNAME>
export ELASTIC_PASSWORD=<CLOUD_PASSWORD>
```
For local development with Docker, use:
```
export ES_URL="http://localhost:9200"
docker run -p 9200:9200 -e "discovery.type=single-node" -e "xpack.security.enabled=false" -e "xpack.security.http.ssl.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:8.9.0
```
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-self-query
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-self-query
```
And add the following code to your `server.py` file:
```
from rag_self_query import chain
add_routes(app, chain, path="/rag-elasticsearch")
```
To populate the vector store with the sample data, from the root of the directory run:
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-elasticsearch/playground](http://127.0.0.1:8000/rag-elasticsearch/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-self-query")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:51.857Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-self-query/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-self-query/",
"description": "This template performs RAG using the self-query retrieval technique. The main idea is to let an LLM convert unstructured queries into structured queries. See the docs for more on how this works.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3750",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-self-query\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:51 GMT",
"etag": "W/\"c333ff0822eabb2ce7d3a86d0f806198\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::bqkmk-1713753951659-fff57f5cae82"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-self-query/",
"property": "og:url"
},
{
"content": "rag-self-query | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG using the self-query retrieval technique. The main idea is to let an LLM convert unstructured queries into structured queries. See the docs for more on how this works.",
"property": "og:description"
}
],
"title": "rag-self-query | 🦜️🔗 LangChain"
} | rag-self-query
This template performs RAG using the self-query retrieval technique. The main idea is to let an LLM convert unstructured queries into structured queries. See the docs for more on how this works.
Environment Setup
In this template we'll use OpenAI models and an Elasticsearch vector store, but the approach generalizes to all LLMs/ChatModels and a number of vector stores.
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
To connect to your Elasticsearch instance, use the following environment variables:
export ELASTIC_CLOUD_ID=<CLOUD_ID>
export ELASTIC_USERNAME=<CLOUD_USERNAME>
export ELASTIC_PASSWORD=<CLOUD_PASSWORD>
For local development with Docker, use:
export ES_URL = "http://localhost:9200"
docker run -p 9200:9200 -e "discovery.type=single-node" -e "xpack.security.enabled=false" -e "xpack.security.http.ssl.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:8.9.0
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U "langchain-cli[serve]"
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-self-query
If you want to add this to an existing project, you can just run:
langchain app add rag-self-query
And add the following code to your server.py file:
from rag_self_query import chain
add_routes(app, chain, path="/rag-elasticsearch")
To populate the vector store with the sample data, from the root of the directory run:
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-elasticsearch/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-self-query") |
https://python.langchain.com/docs/templates/rag-astradb/ | ## rag-astradb
This template will perform RAG using Astra DB (`AstraDB` vector store class)
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
An [Astra DB](https://astra.datastax.com/) database is required; free tier is fine.
* You need the database **API endpoint** (such as `https://0123...-us-east1.apps.astra.datastax.com`) ...
* ... and a **token** (`AstraCS:...`).
Also, an **OpenAI API Key** is required. _Note that out-of-the-box this demo supports OpenAI only, unless you tinker with the code._
Provide the connection parameters and secrets through environment variables. Please refer to `.env.template` for the variable names.
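For orientation, initializing the `AstraDB` vector store class typically looks like this (a sketch; the environment variable names below are common conventions and may differ from those in `.env.template`):

```
import os

from langchain_community.vectorstores import AstraDB
from langchain_openai import OpenAIEmbeddings

vstore = AstraDB(
    embedding=OpenAIEmbeddings(),
    collection_name="entomology_rag_demo",              # illustrative name
    api_endpoint=os.environ["ASTRA_DB_API_ENDPOINT"],   # assumed variable name
    token=os.environ["ASTRA_DB_APPLICATION_TOKEN"],     # assumed variable name
)
```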
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-astradb
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-astradb
```
And add the following code to your `server.py` file:
```
from astradb_entomology_rag import chain as astradb_entomology_rag_chain
add_routes(app, astradb_entomology_rag_chain, path="/rag-astradb")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-astradb/playground](http://127.0.0.1:8000/rag-astradb/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-astradb")
```
## Reference[](#reference "Direct link to Reference")
Stand-alone repo with LangServe chain: [here](https://github.com/hemidactylus/langserve_astradb_entomology_rag). | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:52.027Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-astradb/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-astradb/",
"description": "This template will perform RAG using Astra DB (AstraDB vector store class)",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-astradb\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:51 GMT",
"etag": "W/\"46367caa39534362fbc1b9074ab9566c\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::drx4p-1713753951729-904b2d75632f"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-astradb/",
"property": "og:url"
},
{
"content": "rag-astradb | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template will perform RAG using Astra DB (AstraDB vector store class)",
"property": "og:description"
}
],
"title": "rag-astradb | 🦜️🔗 LangChain"
} | rag-astradb
This template will perform RAG using Astra DB (AstraDB vector store class)
Environment Setup
An Astra DB database is required; free tier is fine.
You need the database API endpoint (such as https://0123...-us-east1.apps.astra.datastax.com) ...
... and a token (AstraCS:...).
Also, an OpenAI API Key is required. Note that out-of-the-box this demo supports OpenAI only, unless you tinker with the code.
Provide the connection parameters and secrets through environment variables. Please refer to .env.template for the variable names.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U "langchain-cli[serve]"
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-astradb
If you want to add this to an existing project, you can just run:
langchain app add rag-astradb
And add the following code to your server.py file:
from astradb_entomology_rag import chain as astradb_entomology_rag_chain
add_routes(app, astradb_entomology_rag_chain, path="/rag-astradb")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-astradb/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-astradb")
Reference
Stand-alone repo with LangServe chain: here. |
https://python.langchain.com/docs/templates/rag-redis/ | ## rag-redis
This template performs RAG using Redis (vector database) and OpenAI (LLM) on Nike's financial 10-K filings.
It relies on the sentence transformer `all-MiniLM-L6-v2` for embedding chunks of the PDF and user questions.
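For reference, that embedding model can be loaded through the HuggingFace wrapper roughly as follows (a sketch assuming the `sentence-transformers` package is installed; the template wires this up internally):

```
from langchain_community.embeddings import HuggingFaceEmbeddings

# all-MiniLM-L6-v2 produces 384-dimensional sentence embeddings.
embedder = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vector = embedder.embed_query("What were Nike's revenues in fiscal 2023?")
print(len(vector))  # 384
```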
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the [OpenAI](https://platform.openai.com/) models:
```
export OPENAI_API_KEY= <YOUR OPENAI API KEY>
```
Set the following [Redis](https://redis.com/try-free) environment variables:
```
export REDIS_HOST=<YOUR REDIS HOST>
export REDIS_PORT=<YOUR REDIS PORT>
export REDIS_USER=<YOUR REDIS USER NAME>
export REDIS_PASSWORD=<YOUR REDIS PASSWORD>
```
## Supported Settings[](#supported-settings "Direct link to Supported Settings")
We use a variety of environment variables to configure this application
| Environment Variable | Description | Default Value |
| --- | --- | --- |
| `DEBUG` | Enable or disable Langchain debugging logs | True |
| `REDIS_HOST` | Hostname for the Redis server | "localhost" |
| `REDIS_PORT` | Port for the Redis server | 6379 |
| `REDIS_USER` | User for the Redis server | "" |
| `REDIS_PASSWORD` | Password for the Redis server | "" |
| `REDIS_URL` | Full URL for connecting to Redis | `None`, Constructed from user, password, host, and port if not provided |
| `INDEX_NAME` | Name of the vector index | "rag-redis" |
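When `REDIS_URL` is not set explicitly, it can be assembled from the other settings; a minimal sketch of that logic (a hypothetical helper, not the template's exact code):

```
import os

def redis_url() -> str:
    """Return REDIS_URL, or build it from the individual settings above."""
    url = os.getenv("REDIS_URL")
    if url:
        return url
    user = os.getenv("REDIS_USER", "")
    password = os.getenv("REDIS_PASSWORD", "")
    host = os.getenv("REDIS_HOST", "localhost")
    port = os.getenv("REDIS_PORT", "6379")
    credentials = f"{user}:{password}@" if password else ""
    return f"redis://{credentials}{host}:{port}"
```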
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI and Pydantic installed in a Python virtual environment:
```
pip install -U langchain-cli pydantic==1.10.13
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-redis
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-redis
```
And add the following code snippet to your `app/server.py` file:
```
from rag_redis.chain import chain as rag_redis_chain
add_routes(app, rag_redis_chain, path="/rag-redis")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-redis/playground](http://127.0.0.1:8000/rag-redis/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-redis")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:52.091Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-redis/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-redis/",
"description": "This template performs RAG using Redis (vector database) and OpenAI (LLM) on financial 10k filings docs for Nike.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-redis\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:51 GMT",
"etag": "W/\"ce1dd92fbc11e03b8b36abe6535d2639\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::5nsvl-1713753951645-a2bbda4fc4cc"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-redis/",
"property": "og:url"
},
{
"content": "rag-redis | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG using Redis (vector database) and OpenAI (LLM) on financial 10k filings docs for Nike.",
"property": "og:description"
}
],
"title": "rag-redis | 🦜️🔗 LangChain"
} | rag-redis
This template performs RAG using Redis (vector database) and OpenAI (LLM) on Nike's financial 10-K filings.
It relies on the sentence transformer all-MiniLM-L6-v2 for embedding chunks of the pdf and user questions.
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI models:
export OPENAI_API_KEY= <YOUR OPENAI API KEY>
Set the following Redis environment variables:
export REDIS_HOST = <YOUR REDIS HOST>
export REDIS_PORT = <YOUR REDIS PORT>
export REDIS_USER = <YOUR REDIS USER NAME>
export REDIS_PASSWORD = <YOUR REDIS PASSWORD>
Supported Settings
We use a variety of environment variables to configure this application
Environment VariableDescriptionDefault Value
DEBUG Enable or disable Langchain debugging logs True
REDIS_HOST Hostname for the Redis server "localhost"
REDIS_PORT Port for the Redis server 6379
REDIS_USER User for the Redis server ""
REDIS_PASSWORD Password for the Redis server ""
REDIS_URL Full URL for connecting to Redis None, Constructed from user, password, host, and port if not provided
INDEX_NAME Name of the vector index "rag-redis"
Usage
To use this package, you should first have the LangChain CLI and Pydantic installed in a Python virtual environment:
pip install -U langchain-cli pydantic==1.10.13
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-redis
If you want to add this to an existing project, you can just run:
langchain app add rag-redis
And add the following code snippet to your app/server.py file:
from rag_redis.chain import chain as rag_redis_chain
add_routes(app, rag_redis_chain, path="/rag-redis")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-redis/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-redis") |
https://python.langchain.com/docs/templates/mongo-parent-document-retrieval/ | ## mongo-parent-document-retrieval
This template performs RAG using MongoDB and OpenAI. It does a more advanced form of RAG called Parent-Document Retrieval.
In this form of retrieval, a large document is first split into medium-sized chunks, and those medium-sized chunks are then split into small chunks. Embeddings are created for the small chunks. When a query comes in, an embedding is created for that query and compared to the small chunks. But rather than passing the small chunks directly to the LLM for generation, the medium-sized chunks from which the small chunks came are passed instead. This enables finer-grained search while still passing larger context to the LLM, which can be useful during generation.
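The same pattern is available generically in LangChain; a rough sketch using an in-memory docstore and Chroma (for illustration only, this is not the template's MongoDB-specific implementation):

```
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)  # medium chunks
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)    # small chunks

retriever = ParentDocumentRetriever(
    vectorstore=Chroma(collection_name="children", embedding_function=OpenAIEmbeddings()),
    docstore=InMemoryStore(),        # holds the medium chunks that get returned
    child_splitter=child_splitter,   # small chunks are what get embedded and searched
    parent_splitter=parent_splitter,
)
# retriever.add_documents(docs)                   # index documents
# retriever.get_relevant_documents("some query")  # returns the medium-sized chunks
```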
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
You should export two environment variables: your MongoDB URI and your OpenAI API key. If you do not have a MongoDB URI, see the `MongoDB Setup` section at the bottom for instructions on how to create one.
```
export MONGO_URI=...
export OPENAI_API_KEY=...
```
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package mongo-parent-document-retrieval
```
If you want to add this to an existing project, you can just run:
```
langchain app add mongo-parent-document-retrieval
```
And add the following code to your `server.py` file:
```
from mongo_parent_document_retrieval import chain as mongo_parent_document_retrieval_chain
add_routes(app, mongo_parent_document_retrieval_chain, path="/mongo-parent-document-retrieval")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you DO NOT already have a Mongo Search Index you want to connect to, see `MongoDB Setup` section below before proceeding. Note that because Parent Document Retrieval uses a different indexing strategy, it's likely you will want to run this new setup.
If you DO have a MongoDB Search index you want to connect to, edit the connection details in `mongo_parent_document_retrieval/chain.py`
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/mongo-parent-document-retrieval/playground](http://127.0.0.1:8000/mongo-parent-document-retrieval/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/mongo-parent-document-retrieval")
```
For additional context, please refer to [this notebook](https://colab.research.google.com/drive/1cr2HBAHyBmwKUerJq2if0JaNhy-hIq7I#scrollTo=TZp7_CBfxTOB).
## MongoDB Setup[](#mongodb-setup "Direct link to MongoDB Setup")
Use this step if you need to set up your MongoDB account and ingest data. We will first follow the standard MongoDB Atlas setup instructions [here](https://www.mongodb.com/docs/atlas/getting-started/).
1. Create an account (if not already done)
2. Create a new project (if not already done)
3. Locate your MongoDB URI.
This can be done by going to the deployment overview page and connecting to your database
We then look at the drivers available
Among which we will see our URI listed
Let's then set that as an environment variable locally:
4. Let's also set an environment variable for OpenAI (which we will use as an LLM)
```
export OPENAI_API_KEY=...
```
5. Let's now ingest some data! We can do that by moving into this directory and running the code in `ingest.py`, e.g.:
Note that you can (and should!) change this to ingest data of your choice
6. We now need to set up a vector index on our data.
We can first connect to the cluster where our database lives
We can then navigate to where all our collections are listed
We can then find the collection we want and look at the search indexes for that collection
That should likely be empty, and we want to create a new one:
We will use the JSON editor to create it
And we will paste the following JSON in:
```
{ "mappings": { "dynamic": true, "fields": { "doc_level": [ { "type": "token" } ], "embedding": { "dimensions": 1536, "similarity": "cosine", "type": "knnVector" } } }}
```
From there, hit "Next" and then "Create Search Index". It will take a little bit but you should then have an index over your data! | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:52.199Z",
"loadedUrl": "https://python.langchain.com/docs/templates/mongo-parent-document-retrieval/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/mongo-parent-document-retrieval/",
"description": "This template performs RAG using MongoDB and OpenAI.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4938",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"mongo-parent-document-retrieval\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:51 GMT",
"etag": "W/\"4ef612f973209dbab81d02cb9e8cd435\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::wbz5r-1713753951668-3dd21ab8c881"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/mongo-parent-document-retrieval/",
"property": "og:url"
},
{
"content": "mongo-parent-document-retrieval | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG using MongoDB and OpenAI.",
"property": "og:description"
}
],
"title": "mongo-parent-document-retrieval | 🦜️🔗 LangChain"
} | mongo-parent-document-retrieval
This template performs RAG using MongoDB and OpenAI. It does a more advanced form of RAG called Parent-Document Retrieval.
In this form of retrieval, a large document is first split into medium-sized chunks, and those medium-sized chunks are then split into small chunks. Embeddings are created for the small chunks. When a query comes in, an embedding is created for that query and compared to the small chunks. But rather than passing the small chunks directly to the LLM for generation, the medium-sized chunks from which the small chunks came are passed instead. This enables finer-grained search while still passing larger context to the LLM, which can be useful during generation.
Environment Setup
You should export two environment variables: your MongoDB URI and your OpenAI API key. If you do not have a MongoDB URI, see the MongoDB Setup section at the bottom for instructions on how to create one.
export MONGO_URI=...
export OPENAI_API_KEY=...
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package mongo-parent-document-retrieval
If you want to add this to an existing project, you can just run:
langchain app add mongo-parent-document-retrieval
And add the following code to your server.py file:
from mongo_parent_document_retrieval import chain as mongo_parent_document_retrieval_chain
add_routes(app, mongo_parent_document_retrieval_chain, path="/mongo-parent-document-retrieval")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you DO NOT already have a Mongo Search Index you want to connect to, see MongoDB Setup section below before proceeding. Note that because Parent Document Retrieval uses a different indexing strategy, it's likely you will want to run this new setup.
If you DO have a MongoDB Search index you want to connect to, edit the connection details in mongo_parent_document_retrieval/chain.py
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/mongo-parent-document-retrieval/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/mongo-parent-document-retrieval")
For additional context, please refer to this notebook.
MongoDB Setup
Use this step if you need to set up your MongoDB account and ingest data. We will first follow the standard MongoDB Atlas setup instructions here.
Create an account (if not already done)
Create a new project (if not already done)
Locate your MongoDB URI.
This can be done by going to the deployment overview page and connecting to your database
We then look at the drivers available
Among which we will see our URI listed
Let's then set that as an environment variable locally:
Let's also set an environment variable for OpenAI (which we will use as an LLM)
export OPENAI_API_KEY=...
Let's now ingest some data! We can do that by moving into this directory and running the code in ingest.py, eg:
Note that you can (and should!) change this to ingest data of your choice
We now need to set up a vector index on our data.
We can first connect to the cluster where our database lives
We can then navigate to where all our collections are listed
We can then find the collection we want and look at the search indexes for that collection
That should likely be empty, and we want to create a new one:
We will use the JSON editor to create it
And we will paste the following JSON in:
{
"mappings": {
"dynamic": true,
"fields": {
"doc_level": [
{
"type": "token"
}
],
"embedding": {
"dimensions": 1536,
"similarity": "cosine",
"type": "knnVector"
}
}
}
}
From there, hit "Next" and then "Create Search Index". It will take a little bit but you should then have an index over your data! |
https://python.langchain.com/docs/templates/neo4j-advanced-rag/ | ## neo4j-advanced-rag
This template allows you to balance precise embeddings and context retention by implementing advanced retrieval strategies.
## Strategies[](#strategies "Direct link to Strategies")
1. **Typical RAG**:
* Traditional method where the exact data indexed is the data retrieved.
2. **Parent retriever**:
* Instead of indexing entire documents, data is divided into smaller chunks, referred to as Parent and Child documents.
* Child documents are indexed for better representation of specific concepts, while parent documents are retrieved to ensure context retention.
3. **Hypothetical Questions**:
* Documents are processed to determine potential questions they might answer.
* These questions are then indexed for better representation of specific concepts, while parent documents are retrieved to ensure context retention.
4. **Summaries**:
* Instead of indexing the entire document, a summary of the document is created and indexed.
* Similarly, the parent document is retrieved in a RAG application.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
You need to define the following environment variables
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Populating with data[](#populating-with-data "Direct link to Populating with data")
If you want to populate the DB with some example data, you can run `python ingest.py`. The script processes and stores sections of the text from the file `dune.txt` in a Neo4j graph database. First, the text is divided into larger chunks ("parents") and then further subdivided into smaller chunks ("children"), where both parent and child chunks overlap slightly to maintain context. After storing these chunks in the database, embeddings for the child nodes are computed using OpenAI's embeddings and stored back in the graph for future retrieval or analysis. For every parent node, hypothetical questions and summaries are generated, embedded, and added to the database. Additionally, a vector index for each retrieval strategy is created for efficient querying of these embeddings.
_Note that ingestion can take a minute or two because the LLM has to generate hypothetical questions and summaries._
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package neo4j-advanced-rag
```
If you want to add this to an existing project, you can just run:
```
langchain app add neo4j-advanced-rag
```
And add the following code to your `server.py` file:
```
from neo4j_advanced_rag import chain as neo4j_advanced_chain
add_routes(app, neo4j_advanced_chain, path="/neo4j-advanced-rag")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
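For example, using the LangChain CLI's serve command (assuming `langchain-cli` is installed as shown above):

```
langchain serve
```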
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/neo4j-advanced-rag/playground](http://127.0.0.1:8000/neo4j-advanced-rag/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-advanced-rag")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:52.401Z",
"loadedUrl": "https://python.langchain.com/docs/templates/neo4j-advanced-rag/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/neo4j-advanced-rag/",
"description": "This template allows you to balance precise embeddings and context retention by implementing advanced retrieval strategies.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"neo4j-advanced-rag\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:52 GMT",
"etag": "W/\"a70e904c4932f7754d0ffa9d97ad36ce\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::5mmfn-1713753952006-411f87f380fe"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/neo4j-advanced-rag/",
"property": "og:url"
},
{
"content": "neo4j-advanced-rag | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template allows you to balance precise embeddings and context retention by implementing advanced retrieval strategies.",
"property": "og:description"
}
],
"title": "neo4j-advanced-rag | 🦜️🔗 LangChain"
} | neo4j-advanced-rag
This template allows you to balance precise embeddings and context retention by implementing advanced retrieval strategies.
Strategies
Typical RAG:
Traditional method where the exact data indexed is the data retrieved.
Parent retriever:
Instead of indexing entire documents, data is divided into smaller chunks, referred to as Parent and Child documents.
Child documents are indexed for better representation of specific concepts, while parent documents are retrieved to ensure context retention.
Hypothetical Questions:
Documents are processed to determine potential questions they might answer.
These questions are then indexed for better representation of specific concepts, while parent documents are retrieved to ensure context retention.
Summaries:
Instead of indexing the entire document, a summary of the document is created and indexed.
Similarly, the parent document is retrieved in a RAG application.
Environment Setup
You need to define the following environment variables
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
Populating with data
If you want to populate the DB with some example data, you can run python ingest.py. The script processes and stores sections of the text from the file dune.txt into a Neo4j graph database. First, the text is divided into larger chunks ("parents") and then further subdivided into smaller chunks ("children"), where both parent and child chunks overlap slightly to maintain context. After storing these chunks in the database, embeddings for the child nodes are computed using OpenAI's embeddings and stored back in the graph for future retrieval or analysis. For every parent node, hypothetical questions and summaries are generated, embedded, and added to the database. Additionally, a vector index for each retrieval strategy is created for efficient querying of these embeddings.
Note that ingestion can take a minute or two because the LLM has to generate hypothetical questions and summaries.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U "langchain-cli[serve]"
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package neo4j-advanced-rag
If you want to add this to an existing project, you can just run:
langchain app add neo4j-advanced-rag
And add the following code to your server.py file:
from neo4j_advanced_rag import chain as neo4j_advanced_chain
add_routes(app, neo4j_advanced_chain, path="/neo4j-advanced-rag")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/neo4j-advanced-rag/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-advanced-rag") |
https://python.langchain.com/docs/templates/rag-semi-structured/ | ## rag-semi-structured
This template performs RAG on semi-structured data, such as a PDF with text and tables.
See [this cookbook](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb) as a reference.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
This uses [Unstructured](https://unstructured-io.github.io/unstructured/) for PDF parsing, which requires some system-level package installations.
On Mac, you can install the necessary packages with the following:
```
brew install tesseract poppler
```
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-semi-structured
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-semi-structured
```
And add the following code to your `server.py` file:
```
from rag_semi_structured import chain as rag_semi_structured_chain
add_routes(app, rag_semi_structured_chain, path="/rag-semi-structured")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
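For example, using the LangChain CLI's serve command (assuming `langchain-cli` is installed as shown above):

```
langchain serve
```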
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-semi-structured/playground](http://127.0.0.1:8000/rag-semi-structured/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-semi-structured")
```
For more details on how to connect to the template, refer to the Jupyter notebook `rag_semi_structured`. | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:52.677Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-semi-structured/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-semi-structured/",
"description": "This template performs RAG on semi-structured data, such as a PDF with text and tables.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4926",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-semi-structured\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:52 GMT",
"etag": "W/\"0bd7f64053ca7b0efcd31cdcb80923c1\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::swct2-1713753952527-d2b4b694c9a9"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-semi-structured/",
"property": "og:url"
},
{
"content": "rag-semi-structured | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG on semi-structured data, such as a PDF with text and tables.",
"property": "og:description"
}
],
"title": "rag-semi-structured | 🦜️🔗 LangChain"
} | rag-semi-structured
This template performs RAG on semi-structured data, such as a PDF with text and tables.
See this cookbook as a reference.
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
This uses Unstructured for PDF parsing, which requires some system-level package installations.
On Mac, you can install the necessary packages with the following:
brew install tesseract poppler
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-semi-structured
If you want to add this to an existing project, you can just run:
langchain app add rag-semi-structured
And add the following code to your server.py file:
from rag_semi_structured import chain as rag_semi_structured_chain
add_routes(app, rag_semi_structured_chain, path="/rag-semi-structured")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-semi-structured/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-semi-structured")
For more details on how to connect to the template, refer to the Jupyter notebook rag_semi_structured. |
https://python.langchain.com/docs/templates/neo4j-cypher-ft/ | ## neo4j-cypher-ft
This template allows you to interact with a Neo4j graph database using natural language, leveraging OpenAI's LLM.
Its main function is to convert natural language questions into Cypher queries (the language used to query Neo4j databases), execute these queries, and provide natural language responses based on the query's results.
The package utilizes a full-text index for efficient mapping of text values to database entries, thereby enhancing the generation of accurate Cypher statements.
In the provided example, the full-text index is used to map names of people and movies from the user's query to corresponding database entries.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
The following environment variables need to be set:
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
Additionally, if you wish to populate the DB with some example data, you can run `python ingest.py`. This script will populate the database with sample movie data and create a full-text index named `entity`, which is used to map people and movies from user input to database values for precise Cypher statement generation.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package neo4j-cypher-ft
```
If you want to add this to an existing project, you can just run:
```
langchain app add neo4j-cypher-ft
```
And add the following code to your `server.py` file:
```
from neo4j_cypher_ft import chain as neo4j_cypher_ft_chain
add_routes(app, neo4j_cypher_ft_chain, path="/neo4j-cypher-ft")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
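For example, using the LangChain CLI's serve command (assuming `langchain-cli` is installed as shown above):

```
langchain serve
```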
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/neo4j-cypher-ft/playground](http://127.0.0.1:8000/neo4j-cypher-ft/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-cypher-ft")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:52.771Z",
"loadedUrl": "https://python.langchain.com/docs/templates/neo4j-cypher-ft/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/neo4j-cypher-ft/",
"description": "This template allows you to interact with a Neo4j graph database using natural language, leveraging OpenAI's LLM.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3756",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"neo4j-cypher-ft\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:52 GMT",
"etag": "W/\"c65aae3e610d4553e621cdf2b925c23c\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::85vkj-1713753952699-8eae125112f1"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/neo4j-cypher-ft/",
"property": "og:url"
},
{
"content": "neo4j-cypher-ft | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template allows you to interact with a Neo4j graph database using natural language, leveraging OpenAI's LLM.",
"property": "og:description"
}
],
"title": "neo4j-cypher-ft | 🦜️🔗 LangChain"
} | neo4j-cypher-ft
This template allows you to interact with a Neo4j graph database using natural language, leveraging OpenAI's LLM.
Its main function is to convert natural language questions into Cypher queries (the language used to query Neo4j databases), execute these queries, and provide natural language responses based on the query's results.
The package utilizes a full-text index for efficient mapping of text values to database entries, thereby enhancing the generation of accurate Cypher statements.
In the provided example, the full-text index is used to map names of people and movies from the user's query to corresponding database entries.
Environment Setup
The following environment variables need to be set:
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
Additionally, if you wish to populate the DB with some example data, you can run python ingest.py. This script will populate the database with sample movie data and create a full-text index named entity, which is used to map people and movies from user input to database values for precise Cypher statement generation.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package neo4j-cypher-ft
If you want to add this to an existing project, you can just run:
langchain app add neo4j-cypher-ft
And add the following code to your server.py file:
from neo4j_cypher_ft import chain as neo4j_cypher_ft_chain
add_routes(app, neo4j_cypher_ft_chain, path="/neo4j-cypher-ft")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/neo4j-cypher-ft/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-cypher-ft") |
https://python.langchain.com/docs/templates/rag-aws-bedrock/ | ## rag-aws-bedrock
This template is designed to connect with AWS Bedrock, a managed service that offers a set of foundation models.
It primarily uses `Anthropic Claude` for text generation and `Amazon Titan` for text embedding, and utilizes FAISS as the vectorstore.
For additional context on the RAG pipeline, refer to [this notebook](https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/03_QuestionAnswering/01_qa_w_rag_claude.ipynb).
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Before you can use this package, ensure that you have configured `boto3` to work with your AWS account.
For details on how to set up and configure `boto3`, visit [this page](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration).
In addition, you need to install the `faiss-cpu` package to work with the FAISS vector store:
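For example, with pip:

```
pip install faiss-cpu
```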
You should also set the following environment variables to reflect your AWS profile and region (if you're not using the `default` AWS profile and `us-east-1` region):
* `AWS_DEFAULT_REGION`
* `AWS_PROFILE`
## Usage[](#usage "Direct link to Usage")
First, install the LangChain CLI:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package:
```
langchain app new my-app --package rag-aws-bedrock
```
To add this package to an existing project:
```
langchain app add rag-aws-bedrock
```
Then add the following code to your `server.py` file:
```
from rag_aws_bedrock import chain as rag_aws_bedrock_chain
add_routes(app, rag_aws_bedrock_chain, path="/rag-aws-bedrock")
```
(Optional) If you have access to LangSmith, you can configure it to trace, monitor, and debug LangChain applications. If you don't have access, you can skip this section.
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, you can spin up a LangServe instance directly by:
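For example, using the LangChain CLI's serve command (assuming `langchain-cli` is installed as shown above):

```
langchain serve
```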
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) and access the playground at [http://127.0.0.1:8000/rag-aws-bedrock/playground](http://127.0.0.1:8000/rag-aws-bedrock/playground).
You can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-aws-bedrock")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:53.032Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-aws-bedrock/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-aws-bedrock/",
"description": "This template is designed to connect with the AWS Bedrock service, a managed server that offers a set of foundation models.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4935",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-aws-bedrock\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:52 GMT",
"etag": "W/\"e2c90d75ef1df91ab11e19ac0426d047\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::8ppqn-1713753952920-77c6710c5abc"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-aws-bedrock/",
"property": "og:url"
},
{
"content": "rag-aws-bedrock | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template is designed to connect with the AWS Bedrock service, a managed server that offers a set of foundation models.",
"property": "og:description"
}
],
"title": "rag-aws-bedrock | 🦜️🔗 LangChain"
} | rag-aws-bedrock
This template is designed to connect with AWS Bedrock, a managed service that offers a set of foundation models.
It primarily uses Anthropic Claude for text generation and Amazon Titan for text embedding, and utilizes FAISS as the vectorstore.
For additional context on the RAG pipeline, refer to this notebook.
Environment Setup
Before you can use this package, ensure that you have configured boto3 to work with your AWS account.
For details on how to set up and configure boto3, visit this page.
In addition, you need to install the faiss-cpu package to work with the FAISS vector store:
You should also set the following environment variables to reflect your AWS profile and region (if you're not using the default AWS profile and us-east-1 region):
AWS_DEFAULT_REGION
AWS_PROFILE
Usage
First, install the LangChain CLI:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package:
langchain app new my-app --package rag-aws-bedrock
To add this package to an existing project:
langchain app add rag-aws-bedrock
Then add the following code to your server.py file:
from rag_aws_bedrock import chain as rag_aws_bedrock_chain
add_routes(app, rag_aws_bedrock_chain, path="/rag-aws-bedrock")
(Optional) If you have access to LangSmith, you can configure it to trace, monitor, and debug LangChain applications. If you don't have access, you can skip this section.
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
You can see all templates at http://127.0.0.1:8000/docs and access the playground at http://127.0.0.1:8000/rag-aws-bedrock/playground.
You can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-aws-bedrock") |
https://python.langchain.com/docs/templates/rag-singlestoredb/ | ## rag-singlestoredb
This template performs RAG using SingleStoreDB and OpenAI.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
This template uses SingleStoreDB as a vectorstore and requires that `SINGLESTOREDB_URL` is set. It should take the form `admin:password@svc-xxx.svc.singlestore.com:port/db_name`
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-singlestoredb
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-singlestoredb
```
And add the following code to your `server.py` file:
```
from rag_singlestoredb import chain as rag_singlestoredb_chain
add_routes(app, rag_singlestoredb_chain, path="/rag-singlestoredb")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
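For example, using the LangChain CLI's serve command (assuming `langchain-cli` is installed as shown above):

```
langchain serve
```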
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-singlestoredb/playground](http://127.0.0.1:8000/rag-singlestoredb/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-singlestoredb")
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:53.086Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-singlestoredb/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-singlestoredb/",
"description": "This template performs RAG using SingleStoreDB and OpenAI.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3752",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-singlestoredb\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:52 GMT",
"etag": "W/\"314251455bf03d622ddcec8a898d6573\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::64q9c-1713753952970-0867986d0e8e"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-singlestoredb/",
"property": "og:url"
},
{
"content": "rag-singlestoredb | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG using SingleStoreDB and OpenAI.",
"property": "og:description"
}
],
"title": "rag-singlestoredb | 🦜️🔗 LangChain"
} | rag-singlestoredb
This template performs RAG using SingleStoreDB and OpenAI.
Environment Setup
This template uses SingleStoreDB as a vectorstore and requires that SINGLESTOREDB_URL is set. It should take the form admin:password@svc-xxx.svc.singlestore.com:port/db_name
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-singlestoredb
If you want to add this to an existing project, you can just run:
langchain app add rag-singlestoredb
And add the following code to your server.py file:
from rag_singlestoredb import chain as rag_singlestoredb_chain
add_routes(app, rag_singlestoredb_chain, path="/rag-singlestoredb")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-singlestoredb/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-singlestoredb")
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/templates/rag-aws-kendra/ | ## rag-aws-kendra
This template is an application that utilizes Amazon Kendra, a machine learning powered search service, and Anthropic Claude for text generation. The application retrieves documents using a Retrieval chain to answer questions from your documents.
It uses the `boto3` library to connect with the Bedrock service.
For more context on building RAG applications with Amazon Kendra, check [this page](https://aws.amazon.com/blogs/machine-learning/quickly-build-high-accuracy-generative-ai-applications-on-enterprise-data-using-amazon-kendra-langchain-and-large-language-models/).
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Please ensure you set up and configure `boto3` to work with your AWS account.
You can follow the guide [here](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration).
You should also have a Kendra Index set up before using this template.
You can use [this Cloudformation template](https://github.com/aws-samples/amazon-kendra-langchain-extensions/blob/main/kendra_retriever_samples/kendra-docs-index.yaml) to create a sample index.
This includes sample data containing AWS online documentation for Amazon Kendra, Amazon Lex, and Amazon SageMaker. Alternatively, you can use your own Amazon Kendra index if you have indexed your own dataset.
The following environment variables need to be set:
* `AWS_DEFAULT_REGION` - This should reflect the correct AWS region. Default is `us-east-1`.
* `AWS_PROFILE` - This should reflect your AWS profile. Default is `default`.
* `KENDRA_INDEX_ID` - This should have the Index ID of the Kendra index. Note that the Index ID is a 36 character alphanumeric value that can be found in the index detail page.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-aws-kendra
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-aws-kendra
```
And add the following code to your `server.py` file:
```
from rag_aws_kendra.chain import chain as rag_aws_kendra_chain
add_routes(app, rag_aws_kendra_chain, path="/rag-aws-kendra")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
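For example, using the LangChain CLI's serve command (assuming `langchain-cli` is installed as shown above):

```
langchain serve
```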
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-aws-kendra/playground](http://127.0.0.1:8000/rag-aws-kendra/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-aws-kendra")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:53.131Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-aws-kendra/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-aws-kendra/",
"description": "This template is an application that utilizes Amazon Kendra, a machine learning powered search service, and Anthropic Claude for text generation. The application retrieves documents using a Retrieval chain to answer questions from your documents.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3755",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-aws-kendra\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:52 GMT",
"etag": "W/\"ed8e1ddefb21abc3ff1c307cac5585e5\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::vlt2t-1713753952968-81e8994d981f"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-aws-kendra/",
"property": "og:url"
},
{
"content": "rag-aws-kendra | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template is an application that utilizes Amazon Kendra, a machine learning powered search service, and Anthropic Claude for text generation. The application retrieves documents using a Retrieval chain to answer questions from your documents.",
"property": "og:description"
}
],
"title": "rag-aws-kendra | 🦜️🔗 LangChain"
} | rag-aws-kendra
This template is an application that utilizes Amazon Kendra, a machine learning powered search service, and Anthropic Claude for text generation. The application retrieves documents using a Retrieval chain to answer questions from your documents.
It uses the boto3 library to connect with the Bedrock service.
For more context on building RAG applications with Amazon Kendra, check this page.
Environment Setup
Please ensure you set up and configure boto3 to work with your AWS account.
You can follow the guide here.
You should also have a Kendra Index set up before using this template.
You can use this Cloudformation template to create a sample index.
This includes sample data containing AWS online documentation for Amazon Kendra, Amazon Lex, and Amazon SageMaker. Alternatively, you can use your own Amazon Kendra index if you have indexed your own dataset.
The following environment variables need to be set:
AWS_DEFAULT_REGION - This should reflect the correct AWS region. Default is us-east-1.
AWS_PROFILE - This should reflect your AWS profile. Default is default.
KENDRA_INDEX_ID - This should have the Index ID of the Kendra index. Note that the Index ID is a 36 character alphanumeric value that can be found in the index detail page.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-aws-kendra
If you want to add this to an existing project, you can just run:
langchain app add rag-aws-kendra
And add the following code to your server.py file:
from rag_aws_kendra.chain import chain as rag_aws_kendra_chain
add_routes(app, rag_aws_kendra_chain, path="/rag-aws-kendra")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-aws-kendra/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-aws-kendra") |
https://python.langchain.com/docs/templates/rag-supabase/ | ## rag\_supabase
This template performs RAG with Supabase.
[Supabase](https://supabase.com/docs) is an open-source Firebase alternative. It is built on top of [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL), a free and open-source relational database management system (RDBMS) and uses [pgvector](https://github.com/pgvector/pgvector) to store embeddings within your tables.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
To get your `OPENAI_API_KEY`, navigate to [API keys](https://platform.openai.com/account/api-keys) on your OpenAI account and create a new secret key.
To find your `SUPABASE_URL` and `SUPABASE_SERVICE_KEY`, head to your Supabase project's [API settings](https://supabase.com/dashboard/project/_/settings/api).
* `SUPABASE_URL` corresponds to the Project URL
* `SUPABASE_SERVICE_KEY` corresponds to the `service_role` API key
```
export SUPABASE_URL=
export SUPABASE_SERVICE_KEY=
export OPENAI_API_KEY=
```
## Setup Supabase Database[](#setup-supabase-database "Direct link to Setup Supabase Database")
Use these steps to set up your Supabase database if you haven't already.
1. Head over to [https://database.new](https://database.new/) to provision your Supabase database.
2. In the studio, jump to the [SQL editor](https://supabase.com/dashboard/project/_/sql/new) and run the following script to enable `pgvector` and set up your database as a vector store:
```
-- Enable the pgvector extension to work with embedding vectors
create extension if not exists vector;

-- Create a table to store your documents
create table
  documents (
    id uuid primary key,
    content text, -- corresponds to Document.pageContent
    metadata jsonb, -- corresponds to Document.metadata
    embedding vector (1536) -- 1536 works for OpenAI embeddings, change as needed
  );

-- Create a function to search for documents
create function match_documents (
  query_embedding vector (1536),
  filter jsonb default '{}'
) returns table (
  id uuid,
  content text,
  metadata jsonb,
  similarity float
) language plpgsql as $$
#variable_conflict use_column
begin
  return query
  select
    id,
    content,
    metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where metadata @> filter
  order by documents.embedding <=> query_embedding;
end;
$$;
```
## Setup Environment Variables[](#setup-environment-variables "Direct link to Setup Environment Variables")
Since we are using [`SupabaseVectorStore`](https://python.langchain.com/docs/integrations/vectorstores/supabase) and [`OpenAIEmbeddings`](https://python.langchain.com/docs/integrations/text_embedding/openai), we need to load their API keys.
## Usage[](#usage "Direct link to Usage")
First, install the LangChain CLI:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-supabase
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-supabase
```
And add the following code to your `server.py` file:
```
from rag_supabase.chain import chain as rag_supabase_chain
add_routes(app, rag_supabase_chain, path="/rag-supabase")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
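For example, using the LangChain CLI's serve command (assuming `langchain-cli` is installed as shown above):

```
langchain serve
```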
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-supabase/playground](http://127.0.0.1:8000/rag-supabase/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-supabase")
```
TODO: Add details about setting up the Supabase database | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:53.366Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-supabase/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-supabase/",
"description": "This template performs RAG with Supabase.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3752",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-supabase\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:53 GMT",
"etag": "W/\"219a5e8b3d25dc8c78cbca83cda102d2\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::lmhs6-1713753953098-87a4e5804aa5"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-supabase/",
"property": "og:url"
},
{
"content": "rag_supabase | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG with Supabase.",
"property": "og:description"
}
],
"title": "rag_supabase | 🦜️🔗 LangChain"
} | rag_supabase
This template performs RAG with Supabase.
Supabase is an open-source Firebase alternative. It is built on top of PostgreSQL, a free and open-source relational database management system (RDBMS) and uses pgvector to store embeddings within your tables.
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
To get your OPENAI_API_KEY, navigate to API keys on your OpenAI account and create a new secret key.
To find your SUPABASE_URL and SUPABASE_SERVICE_KEY, head to your Supabase project's API settings.
SUPABASE_URL corresponds to the Project URL
SUPABASE_SERVICE_KEY corresponds to the service_role API key
export SUPABASE_URL=
export SUPABASE_SERVICE_KEY=
export OPENAI_API_KEY=
Setup Supabase Database
Use these steps to set up your Supabase database if you haven't already.
Head over to https://database.new to provision your Supabase database.
In the studio, jump to the SQL editor and run the following script to enable pgvector and set up your database as a vector store:
-- Enable the pgvector extension to work with embedding vectors
create extension if not exists vector;
-- Create a table to store your documents
create table
documents (
id uuid primary key,
content text, -- corresponds to Document.pageContent
metadata jsonb, -- corresponds to Document.metadata
embedding vector (1536) -- 1536 works for OpenAI embeddings, change as needed
);
-- Create a function to search for documents
create function match_documents (
query_embedding vector (1536),
filter jsonb default '{}'
) returns table (
id uuid,
content text,
metadata jsonb,
similarity float
) language plpgsql as $$
#variable_conflict use_column
begin
return query
select
id,
content,
metadata,
1 - (documents.embedding <=> query_embedding) as similarity
from documents
where metadata @> filter
order by documents.embedding <=> query_embedding;
end;
$$;
Setup Environment Variables
Since we are using SupabaseVectorStore and OpenAIEmbeddings, we need to load their API keys.
Usage
First, install the LangChain CLI:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-supabase
If you want to add this to an existing project, you can just run:
langchain app add rag-supabase
And add the following code to your server.py file:
from rag_supabase.chain import chain as rag_supabase_chain
add_routes(app, rag_supabase_chain, path="/rag-supabase")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-supabase/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-supabase")
TODO: Add details about setting up the Supabase database |
https://python.langchain.com/docs/templates/rag-timescale-conversation/ | ## rag-timescale-conversation
This template is used for [conversational](https://python.langchain.com/docs/expression_language/cookbook/retrieval#conversational-retrieval-chain) [retrieval](https://python.langchain.com/docs/use_cases/question_answering/), which is one of the most popular LLM use-cases.
It passes both a conversation history and retrieved documents into an LLM for synthesis.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
This template uses Timescale Vector as a vectorstore and requires that `TIMESCALES_SERVICE_URL` is set. Sign up for a 90-day trial [here](https://console.cloud.timescale.com/signup?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) if you don't yet have an account.
To load the sample dataset, set `LOAD_SAMPLE_DATA=1`. To load your own dataset see the section below.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-timescale-conversation
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-timescale-conversation
```
And add the following code to your `server.py` file:
```
from rag_timescale_conversation import chain as rag_timescale_conversation_chain
add_routes(app, rag_timescale_conversation_chain, path="/rag-timescale_conversation")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
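For example, using the LangChain CLI's serve command (assuming `langchain-cli` is installed as shown above):

```
langchain serve
```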
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-timescale-conversation/playground](http://127.0.0.1:8000/rag-timescale-conversation/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-timescale-conversation")
```
See the `rag_conversation.ipynb` notebook for example usage.
## Loading your own dataset[](#loading-your-own-dataset "Direct link to Loading your own dataset")
To load your own dataset you will have to create a `load_dataset` function. You can see an example in the `load_ts_git_dataset` function defined in the `load_sample_dataset.py` file. You can then run this as a standalone function (e.g. in a bash script) or add it to `chain.py` (but then you should run it just once). | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:53.498Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-timescale-conversation/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-timescale-conversation/",
"description": "This template is used for conversational retrieval, which is one of the most popular LLM use-cases.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-timescale-conversation\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:53 GMT",
"etag": "W/\"a963028e420e4bfde8248e9b05cde70b\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::c5skq-1713753953289-7536ae4a2c48"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-timescale-conversation/",
"property": "og:url"
},
{
"content": "rag-timescale-conversation | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template is used for conversational retrieval, which is one of the most popular LLM use-cases.",
"property": "og:description"
}
],
"title": "rag-timescale-conversation | 🦜️🔗 LangChain"
} | rag-timescale-conversation
This template is used for conversational retrieval, which is one of the most popular LLM use-cases.
It passes both a conversation history and retrieved documents into an LLM for synthesis.
Environment Setup
This template uses Timescale Vector as a vectorstore and requires that TIMESCALES_SERVICE_URL is set. Sign up for a 90-day trial here if you don't yet have an account.
To load the sample dataset, set LOAD_SAMPLE_DATA=1. To load your own dataset see the section below.
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U "langchain-cli[serve]"
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-timescale-conversation
If you want to add this to an existing project, you can just run:
langchain app add rag-timescale-conversation
And add the following code to your server.py file:
from rag_timescale_conversation import chain as rag_timescale_conversation_chain
add_routes(app, rag_timescale_conversation_chain, path="/rag-timescale_conversation")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-timescale-conversation/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-timescale-conversation")
See the rag_conversation.ipynb notebook for example usage.
Loading your own dataset
To load your own dataset you will have to create a load_dataset function. You can see an example in the load_ts_git_dataset function defined in the load_sample_dataset.py file. You can then run this as a standalone function (e.g. in a bash script) or add it to chain.py (but then you should run it just once). |
https://python.langchain.com/docs/templates/neo4j-cypher/ | ## neo4j\_cypher
This template allows you to interact with a Neo4j graph database in natural language, using an OpenAI LLM.
It transforms a natural language question into a Cypher query (used to fetch data from Neo4j databases), executes the query, and provides a natural language response based on the query results.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Define the following environment variables:
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Neo4j database setup[](#neo4j-database-setup "Direct link to Neo4j database setup")
There are a number of ways to set up a Neo4j database.
### Neo4j Aura[](#neo4j-aura "Direct link to Neo4j Aura")
Neo4j AuraDB is a fully managed cloud graph database service. Create a free instance on [Neo4j Aura](https://neo4j.com/cloud/platform/aura-graph-database?utm_source=langchain&utm_content=langserve). When you initiate a free database instance, you'll receive credentials to access the database.
## Populating with data[](#populating-with-data "Direct link to Populating with data")
If you want to populate the DB with some example data, you can run `python ingest.py`. This script will populate the database with sample movie data.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package neo4j-cypher
```
If you want to add this to an existing project, you can just run:
```
langchain app add neo4j-cypher
```
And add the following code to your `server.py` file:
```
from neo4j_cypher import chain as neo4j_cypher_chain
add_routes(app, neo4j_cypher_chain, path="/neo4j-cypher")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/neo4j\_cypher/playground](http://127.0.0.1:8000/neo4j_cypher/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-cypher")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:53.793Z",
"loadedUrl": "https://python.langchain.com/docs/templates/neo4j-cypher/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/neo4j-cypher/",
"description": "This template allows you to interact with a Neo4j graph database in natural language, using an OpenAI LLM.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3757",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"neo4j-cypher\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:53 GMT",
"etag": "W/\"c8e29994ec7815fb055c0413394b3a5a\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::ptbzf-1713753953591-220ae2462dd2"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/neo4j-cypher/",
"property": "og:url"
},
{
"content": "neo4j_cypher | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template allows you to interact with a Neo4j graph database in natural language, using an OpenAI LLM.",
"property": "og:description"
}
],
"title": "neo4j_cypher | 🦜️🔗 LangChain"
} | neo4j_cypher
This template allows you to interact with a Neo4j graph database in natural language, using an OpenAI LLM.
It transforms a natural language question into a Cypher query (used to fetch data from Neo4j databases), executes the query, and provides a natural language response based on the query results.
Environment Setup
Define the following environment variables:
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
Neo4j database setup
There are a number of ways to set up a Neo4j database.
Neo4j Aura
Neo4j AuraDB is a fully managed cloud graph database service. Create a free instance on Neo4j Aura. When you initiate a free database instance, you'll receive credentials to access the database.
Populating with data
If you want to populate the DB with some example data, you can run python ingest.py. This script will populate the database with sample movie data.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package neo4j-cypher
If you want to add this to an existing project, you can just run:
langchain app add neo4j-cypher
And add the following code to your server.py file:
from neo4j_cypher import chain as neo4j_cypher_chain
add_routes(app, neo4j_cypher_chain, path="/neo4j-cypher")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/neo4j_cypher/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-cypher") |
https://python.langchain.com/docs/templates/rag-azure-search/ | ## rag-azure-search
This template performs RAG on documents using [Azure AI Search](https://learn.microsoft.com/azure/search/search-what-is-azure-search) as the vectorstore and Azure OpenAI chat and embedding models.
For additional details on RAG with Azure AI Search, refer to [this notebook](https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/vectorstores/azuresearch.ipynb).
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
**_Prerequisites:_** Existing [Azure AI Search](https://learn.microsoft.com/azure/search/search-what-is-azure-search) and [Azure OpenAI](https://learn.microsoft.com/azure/ai-services/openai/overview) resources.
**_Environment Variables:_**
To run this template, you'll need to set the following environment variables (a short sketch showing how they are typically wired up follows the lists):
**_Required:_**
* AZURE\_SEARCH\_ENDPOINT - The endpoint of the Azure AI Search service.
* AZURE\_SEARCH\_KEY - The API key for the Azure AI Search service.
* AZURE\_OPENAI\_ENDPOINT - The endpoint of the Azure OpenAI service.
* AZURE\_OPENAI\_API\_KEY - The API key for the Azure OpenAI service.
* AZURE\_EMBEDDINGS\_DEPLOYMENT - Name of the Azure OpenAI deployment to use for embeddings.
* AZURE\_CHAT\_DEPLOYMENT - Name of the Azure OpenAI deployment to use for chat.
**_Optional:_**
* AZURE\_SEARCH\_INDEX\_NAME - Name of an existing Azure AI Search index to use. If not provided, an index will be created with name "rag-azure-search".
* OPENAI\_API\_VERSION - Azure OpenAI API version to use. Defaults to "2023-05-15".
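As a rough sketch of how these variables are typically consumed (a simplified illustration, not the template's exact code), the Azure OpenAI embeddings and the Azure AI Search vector store can be wired up like this:

```
import os

from langchain_community.vectorstores.azuresearch import AzureSearch
from langchain_openai import AzureOpenAIEmbeddings

# AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY are picked up from the environment.
embeddings = AzureOpenAIEmbeddings(azure_deployment=os.environ["AZURE_EMBEDDINGS_DEPLOYMENT"])

vector_store = AzureSearch(
    azure_search_endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
    azure_search_key=os.environ["AZURE_SEARCH_KEY"],
    index_name=os.environ.get("AZURE_SEARCH_INDEX_NAME", "rag-azure-search"),
    embedding_function=embeddings.embed_query,
)
retriever = vector_store.as_retriever()
```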
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-azure-search
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-azure-search
```
And add the following code to your `server.py` file:
```
from rag_azure_search import chain as rag_azure_search_chain
add_routes(app, rag_azure_search_chain, path="/rag-azure-search")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-azure-search/playground](http://127.0.0.1:8000/rag-azure-search/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-azure-search")
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:53.849Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-azure-search/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-azure-search/",
"description": "This template performs RAG on documents using Azure AI Search as the vectorstore and Azure OpenAI chat and embedding models.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3755",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-azure-search\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:53 GMT",
"etag": "W/\"bfd5dbacb5af87452f67fe8fb3c2b52a\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::dhf8l-1713753953621-cd7b200cf4b2"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-azure-search/",
"property": "og:url"
},
{
"content": "rag-azure-search | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG on documents using Azure AI Search as the vectorstore and Azure OpenAI chat and embedding models.",
"property": "og:description"
}
],
"title": "rag-azure-search | 🦜️🔗 LangChain"
} | rag-azure-search
This template performs RAG on documents using Azure AI Search as the vectorstore and Azure OpenAI chat and embedding models.
For additional details on RAG with Azure AI Search, refer to this notebook.
Environment Setup
Prerequisites: Existing Azure AI Search and Azure OpenAI resources.
Environment Variables:
To run this template, you'll need to set the following environment variables:
Required:
AZURE_SEARCH_ENDPOINT - The endpoint of the Azure AI Search service.
AZURE_SEARCH_KEY - The API key for the Azure AI Search service.
AZURE_OPENAI_ENDPOINT - The endpoint of the Azure OpenAI service.
AZURE_OPENAI_API_KEY - The API key for the Azure OpenAI service.
AZURE_EMBEDDINGS_DEPLOYMENT - Name of the Azure OpenAI deployment to use for embeddings.
AZURE_CHAT_DEPLOYMENT - Name of the Azure OpenAI deployment to use for chat.
Optional:
AZURE_SEARCH_INDEX_NAME - Name of an existing Azure AI Search index to use. If not provided, an index will be created with name "rag-azure-search".
OPENAI_API_VERSION - Azure OpenAI API version to use. Defaults to "2023-05-15".
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-azure-search
If you want to add this to an existing project, you can just run:
langchain app add rag-azure-search
And add the following code to your server.py file:
from rag_azure_search import chain as rag_azure_search_chain
add_routes(app, rag_azure_search_chain, path="/rag-azure-search")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-azure-search/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-azure-search")
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/templates/neo4j-cypher-memory/ | ## neo4j-cypher-memory
This template allows you to have conversations with a Neo4j graph database in natural language, using an OpenAI LLM. It transforms a natural language question into a Cypher query (used to fetch data from Neo4j databases), executes the query, and provides a natural language response based on the query results. Additionally, it features a conversational memory module that stores the dialogue history in the Neo4j graph database. The conversation memory is uniquely maintained for each user session, ensuring personalized interactions. To facilitate this, please supply both the `user_id` and `session_id` when using the conversation chain.
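For example, a call to the chain could look like the following sketch (it assumes the chain's input keys are `question`, `user_id`, and `session_id`):

```
from neo4j_cypher_memory import chain as neo4j_cypher_memory_chain

# Reuse the same user_id/session_id pair for follow-up questions so the
# dialogue history stored in Neo4j is picked up for that session.
response = neo4j_cypher_memory_chain.invoke(
    {
        "question": "Who played in the movie Casino?",
        "user_id": "user_123",
        "session_id": "session_1",
    }
)
print(response)
```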
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Define the following environment variables:
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Neo4j database setup[](#neo4j-database-setup "Direct link to Neo4j database setup")
There are a number of ways to set up a Neo4j database.
### Neo4j Aura[](#neo4j-aura "Direct link to Neo4j Aura")
Neo4j AuraDB is a fully managed cloud graph database service. Create a free instance on [Neo4j Aura](https://neo4j.com/cloud/platform/aura-graph-database?utm_source=langchain&utm_content=langserve). When you initiate a free database instance, you'll receive credentials to access the database.
## Populating with data[](#populating-with-data "Direct link to Populating with data")
If you want to populate the DB with some example data, you can run `python ingest.py`. This script will populate the database with sample movie data.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package neo4j-cypher-memory
```
If you want to add this to an existing project, you can just run:
```
langchain app add neo4j-cypher-memory
```
And add the following code to your `server.py` file:
```
from neo4j_cypher_memory import chain as neo4j_cypher_memory_chain
add_routes(app, neo4j_cypher_memory_chain, path="/neo4j-cypher-memory")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/neo4j\_cypher\_memory/playground](http://127.0.0.1:8000/neo4j_cypher/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-cypher-memory")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:54.151Z",
"loadedUrl": "https://python.langchain.com/docs/templates/neo4j-cypher-memory/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/neo4j-cypher-memory/",
"description": "This template allows you to have conversations with a Neo4j graph database in natural language, using an OpenAI LLM.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3757",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"neo4j-cypher-memory\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:53 GMT",
"etag": "W/\"a93ab3053231bc2eb8f27a9ef44d4d98\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::jrbzs-1713753953939-1aaf005aeab3"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/neo4j-cypher-memory/",
"property": "og:url"
},
{
"content": "neo4j-cypher-memory | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template allows you to have conversations with a Neo4j graph database in natural language, using an OpenAI LLM.",
"property": "og:description"
}
],
"title": "neo4j-cypher-memory | 🦜️🔗 LangChain"
} | neo4j-cypher-memory
This template allows you to have conversations with a Neo4j graph database in natural language, using an OpenAI LLM. It transforms a natural language question into a Cypher query (used to fetch data from Neo4j databases), executes the query, and provides a natural language response based on the query results. Additionally, it features a conversational memory module that stores the dialogue history in the Neo4j graph database. The conversation memory is uniquely maintained for each user session, ensuring personalized interactions. To facilitate this, please supply both the user_id and session_id when using the conversation chain.
Environment Setup
Define the following environment variables:
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
Neo4j database setup
There are a number of ways to set up a Neo4j database.
Neo4j Aura
Neo4j AuraDB is a fully managed cloud graph database service. Create a free instance on Neo4j Aura. When you initiate a free database instance, you'll receive credentials to access the database.
Populating with data
If you want to populate the DB with some example data, you can run python ingest.py. This script will populate the database with sample movie data.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package neo4j-cypher-memory
If you want to add this to an existing project, you can just run:
langchain app add neo4j-cypher-memory
And add the following code to your server.py file:
from neo4j_cypher_memory import chain as neo4j_cypher_memory_chain
add_routes(app, neo4j_cypher_memory_chain, path="/neo4j-cypher-memory")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/neo4j_cypher_memory/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-cypher-memory") |
https://python.langchain.com/docs/templates/neo4j-generation/ | ## neo4j-generation
This template pairs LLM-based knowledge graph extraction with Neo4j AuraDB, a fully managed cloud graph database.
You can create a free instance on [Neo4j Aura](https://neo4j.com/cloud/platform/aura-graph-database?utm_source=langchain&utm_content=langserve).
When you initiate a free database instance, you'll receive credentials to access the database.
This template is flexible and allows users to guide the extraction process by specifying a list of node labels and relationship types.
For more details on the functionality and capabilities of this package, please refer to [this blog post](https://blog.langchain.dev/constructing-knowledge-graphs-from-text-using-openai-functions/).
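As an illustration, a constrained extraction call might look like this sketch (the input keys `text`, `allowed_nodes`, and `allowed_relationships` are assumptions for illustration, not confirmed parameter names):

```
from neo4j_generation.chain import chain as neo4j_generation_chain

# Restrict extraction to a small ontology so the resulting graph stays consistent.
neo4j_generation_chain.invoke(
    {
        "text": "Marie Curie worked at the University of Paris.",
        "allowed_nodes": ["Person", "Organization"],
        "allowed_relationships": ["WORKED_AT"],
    }
)
```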
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
You need to set the following environment variables:
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package neo4j-generation
```
If you want to add this to an existing project, you can just run:
```
langchain app add neo4j-generation
```
And add the following code to your `server.py` file:
```
from neo4j_generation.chain import chain as neo4j_generation_chain
add_routes(app, neo4j_generation_chain, path="/neo4j-generation")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/neo4j-generation/playground](http://127.0.0.1:8000/neo4j-generation/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-generation")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:54.511Z",
"loadedUrl": "https://python.langchain.com/docs/templates/neo4j-generation/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/neo4j-generation/",
"description": "This template pairs LLM-based knowledge graph extraction with Neo4j AuraDB, a fully managed cloud graph database.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3758",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"neo4j-generation\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:54 GMT",
"etag": "W/\"741e0bf1825941474754eb1c06a86296\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::brbrf-1713753954461-9f4abc2c6c13"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/neo4j-generation/",
"property": "og:url"
},
{
"content": "neo4j-generation | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template pairs LLM-based knowledge graph extraction with Neo4j AuraDB, a fully managed cloud graph database.",
"property": "og:description"
}
],
"title": "neo4j-generation | 🦜️🔗 LangChain"
} | neo4j-generation
This template pairs LLM-based knowledge graph extraction with Neo4j AuraDB, a fully managed cloud graph database.
You can create a free instance on Neo4j Aura.
When you initiate a free database instance, you'll receive credentials to access the database.
This template is flexible and allows users to guide the extraction process by specifying a list of node labels and relationship types.
For more details on the functionality and capabilities of this package, please refer to this blog post.
Environment Setup
You need to set the following environment variables:
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package neo4j-generation
If you want to add this to an existing project, you can just run:
langchain app add neo4j-generation
And add the following code to your server.py file:
from neo4j_generation.chain import chain as neo4j_generation_chain
add_routes(app, neo4j_generation_chain, path="/neo4j-generation")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/neo4j-generation/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-generation") |
https://python.langchain.com/docs/templates/rag-chroma-multi-modal-multi-vector/ | ## rag-chroma-multi-modal-multi-vector
Multi-modal LLMs enable visual assistants that can perform question-answering about images.
This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.
It uses GPT-4V to create image summaries for each slide, embeds the summaries, and stores them in Chroma.
Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.
## Input[](#input "Direct link to Input")
Supply a slide deck as pdf in the `/docs` directory.
By default, this template has a slide deck about Q3 earnings from DataDog, a public technology company.
Example questions to ask can be:
```
How many customers does Datadog have?
What is Datadog platform % Y/Y growth in FY20, FY21, and FY22?
```
To create an index of the slide deck, run:
```
poetry install
python ingest.py
```
## Storage[](#storage "Direct link to Storage")
Here is the process the template will use to create an index of the slides (see [blog](https://blog.langchain.dev/multi-modal-rag-template/)):
* Extract the slides as a collection of images
* Use GPT-4V to summarize each image
* Embed the image summaries using text embeddings with a link to the original images
* Retrieve relevant image based on similarity between the image summary and the user input question
* Pass those images to GPT-4V for answer synthesis
By default, this will use [LocalFileStore](https://python.langchain.com/docs/integrations/stores/file_system) to store images and Chroma to store summaries.
For production, it may be desirable to use a remote option such as Redis.
You can set the `local_file_store` flag in `chain.py` and `ingest.py` to switch between the two options.
For Redis, the template will use [UpstashRedisByteStore](https://python.langchain.com/docs/integrations/stores/upstash_redis).
We will use Upstash to store the images, which offers Redis with a REST API.
Simply login [here](https://upstash.com/) and create a database.
This will give you a REST API with:
* `UPSTASH_URL`
* `UPSTASH_TOKEN`
Set `UPSTASH_URL` and `UPSTASH_TOKEN` as environment variables to access your database.
We will use Chroma to store and index the image summaries, which will be created locally in the template directory.
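A simplified sketch of that multi-vector wiring (not the template's exact code; `image_summaries` and `images_b64` are assumed to already exist from the summarization step):

```
import uuid

from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import LocalFileStore
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

id_key = "doc_id"
vectorstore = Chroma(collection_name="slide_summaries", embedding_function=OpenAIEmbeddings())
retriever = MultiVectorRetriever(
    vectorstore=vectorstore,
    byte_store=LocalFileStore("./image_store"),  # swap for a Redis-backed store in production
    id_key=id_key,
)

# image_summaries (GPT-4V text) and images_b64 (base64-encoded slides) are assumed to exist.
doc_ids = [str(uuid.uuid4()) for _ in image_summaries]
retriever.vectorstore.add_documents(
    [Document(page_content=s, metadata={id_key: i}) for s, i in zip(image_summaries, doc_ids)]
)
retriever.docstore.mset(
    [(i, Document(page_content=img)) for i, img in zip(doc_ids, images_b64)]
)
```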
## LLM[](#llm "Direct link to LLM")
The app will retrieve images based on similarity between the text input and the image summary, and pass the images to GPT-4V.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI GPT-4V.
Set `UPSTASH_URL` and `UPSTASH_TOKEN` as environment variables to access your database if you use `UpstashRedisByteStore`.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-chroma-multi-modal-multi-vector
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-chroma-multi-modal-multi-vector
```
And add the following code to your `server.py` file:
```
from rag_chroma_multi_modal_multi_vector import chain as rag_chroma_multi_modal_chain_mv
add_routes(app, rag_chroma_multi_modal_chain_mv, path="/rag-chroma-multi-modal-multi-vector")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-chroma-multi-modal-multi-vector/playground](http://127.0.0.1:8000/rag-chroma-multi-modal-multi-vector/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma-multi-modal-multi-vector")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:54.996Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-chroma-multi-modal-multi-vector/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-chroma-multi-modal-multi-vector/",
"description": "Multi-modal LLMs enable visual assistants that can perform question-answering about images.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-chroma-multi-modal-multi-vector\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:54 GMT",
"etag": "W/\"b8559a048440bd6a195bffb206e20d22\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::9dw67-1713753954904-93c858f276ae"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-chroma-multi-modal-multi-vector/",
"property": "og:url"
},
{
"content": "rag-chroma-multi-modal-multi-vector | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Multi-modal LLMs enable visual assistants that can perform question-answering about images.",
"property": "og:description"
}
],
"title": "rag-chroma-multi-modal-multi-vector | 🦜️🔗 LangChain"
} | rag-chroma-multi-modal-multi-vector
Multi-modal LLMs enable visual assistants that can perform question-answering about images.
This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.
It uses GPT-4V to create image summaries for each slide, embeds the summaries, and stores them in Chroma.
Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.
Input
Supply a slide deck as pdf in the /docs directory.
By default, this template has a slide deck about Q3 earnings from DataDog, a public technology company.
Example questions to ask can be:
How many customers does Datadog have?
What is Datadog platform % Y/Y growth in FY20, FY21, and FY22?
To create an index of the slide deck, run:
poetry install
python ingest.py
Storage
Here is the process the template will use to create an index of the slides (see blog):
Extract the slides as a collection of images
Use GPT-4V to summarize each image
Embed the image summaries using text embeddings with a link to the original images
Retrieve relevant image based on similarity between the image summary and the user input question
Pass those images to GPT-4V for answer synthesis
By default, this will use LocalFileStore to store images and Chroma to store summaries.
For production, it may be desirable to use a remote option such as Redis.
You can set the local_file_store flag in chain.py and ingest.py to switch between the two options.
For Redis, the template will use UpstashRedisByteStore.
We will use Upstash to store the images, which offers Redis with a REST API.
Simply login here and create a database.
This will give you a REST API with:
UPSTASH_URL
UPSTASH_TOKEN
Set UPSTASH_URL and UPSTASH_TOKEN as environment variables to access your database.
We will use Chroma to store and index the image summaries, which will be created locally in the template directory.
LLM
The app will retrieve images based on similarity between the text input and the image summary, and pass the images to GPT-4V.
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI GPT-4V.
Set UPSTASH_URL and UPSTASH_TOKEN as environment variables to access your database if you use UpstashRedisByteStore.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-chroma-multi-modal-multi-vector
If you want to add this to an existing project, you can just run:
langchain app add rag-chroma-multi-modal-multi-vector
And add the following code to your server.py file:
from rag_chroma_multi_modal_multi_vector import chain as rag_chroma_multi_modal_chain_mv
add_routes(app, rag_chroma_multi_modal_chain_mv, path="/rag-chroma-multi-modal-multi-vector")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-chroma-multi-modal-multi-vector/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma-multi-modal-multi-vector") |
https://python.langchain.com/docs/templates/rag-chroma-multi-modal/ | ## rag-chroma-multi-modal
Multi-modal LLMs enable visual assistants that can perform question-answering about images.
This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.
It uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma.
Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.
## Input[](#input "Direct link to Input")
Supply a slide deck as pdf in the `/docs` directory.
By default, this template has a slide deck about Q3 earnings from DataDog, a public technology company.
Example questions to ask can be:
```
How many customers does Datadog have?
What is Datadog platform % Y/Y growth in FY20, FY21, and FY22?
```
To create an index of the slide deck, run:
```
poetry install
python ingest.py
```
## Storage[](#storage "Direct link to Storage")
This template will use [OpenCLIP](https://github.com/mlfoundations/open_clip) multi-modal embeddings to embed the images.
You can select different embedding model options (see results [here](https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_results.csv)).
The first time you run the app, it will automatically download the multimodal embedding model.
By default, LangChain will use an embedding model with moderate performance but lower memory requirements, `ViT-H-14`.
You can choose alternative `OpenCLIPEmbeddings` models in `rag_chroma_multi_modal/ingest.py`:
```
vectorstore_mmembd = Chroma(
    collection_name="multi-modal-rag",
    persist_directory=str(re_vectorstore_path),
    embedding_function=OpenCLIPEmbeddings(
        model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k"
    ),
)
```
## LLM[](#llm "Direct link to LLM")
The app will retrieve images based on similarity between the text input and the image, which are both mapped to multi-modal embedding space. It will then pass the images to GPT-4V.
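Conceptually, the answer-synthesis step looks roughly like this sketch (not the template's exact code; `retriever` is assumed to wrap the Chroma store above and to return base64-encoded slide images):

```
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

question = "How many customers does Datadog have?"
images = retriever.invoke(question)  # most similar slides, assumed to be base64 strings

llm = ChatOpenAI(model="gpt-4-vision-preview", max_tokens=1024)
message = HumanMessage(
    content=[
        {"type": "text", "text": f"Answer the question using the slides: {question}"},
        *[
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{img}"}}
            for img in images
        ],
    ]
)
print(llm.invoke([message]).content)
```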
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI GPT-4V.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-chroma-multi-modal
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-chroma-multi-modal
```
And add the following code to your `server.py` file:
```
from rag_chroma_multi_modal import chain as rag_chroma_multi_modal_chain
add_routes(app, rag_chroma_multi_modal_chain, path="/rag-chroma-multi-modal")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-chroma-multi-modal/playground](http://127.0.0.1:8000/rag-chroma-multi-modal/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma-multi-modal")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:55.685Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-chroma-multi-modal/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-chroma-multi-modal/",
"description": "Multi-modal LLMs enable visual assistants that can perform question-answering about images.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4936",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-chroma-multi-modal\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:55 GMT",
"etag": "W/\"2cb59e300dc7f4166c160469372aaf0a\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::ssks4-1713753955572-9dddfa7b62de"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-chroma-multi-modal/",
"property": "og:url"
},
{
"content": "rag-chroma-multi-modal | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Multi-modal LLMs enable visual assistants that can perform question-answering about images.",
"property": "og:description"
}
],
"title": "rag-chroma-multi-modal | 🦜️🔗 LangChain"
} | rag-chroma-multi-modal
Multi-modal LLMs enable visual assistants that can perform question-answering about images.
This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.
It uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma.
Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.
Input
Supply a slide deck as pdf in the /docs directory.
By default, this template has a slide deck about Q3 earnings from DataDog, a public technology company.
Example questions to ask can be:
How many customers does Datadog have?
What is Datadog platform % Y/Y growth in FY20, FY21, and FY22?
To create an index of the slide deck, run:
poetry install
python ingest.py
Storage
This template will use OpenCLIP multi-modal embeddings to embed the images.
You can select different embedding model options (see results here).
The first time you run the app, it will automatically download the multimodal embedding model.
By default, LangChain will use an embedding model with moderate performance but lower memory requirements, ViT-H-14.
You can choose alternative OpenCLIPEmbeddings models in rag_chroma_multi_modal/ingest.py:
vectorstore_mmembd = Chroma(
collection_name="multi-modal-rag",
persist_directory=str(re_vectorstore_path),
embedding_function=OpenCLIPEmbeddings(
model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k"
),
)
LLM
The app will retrieve images based on similarity between the text input and the image, which are both mapped to multi-modal embedding space. It will then pass the images to GPT-4V.
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI GPT-4V.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-chroma-multi-modal
If you want to add this to an existing project, you can just run:
langchain app add rag-chroma-multi-modal
And add the following code to your server.py file:
from rag_chroma_multi_modal import chain as rag_chroma_multi_modal_chain
add_routes(app, rag_chroma_multi_modal_chain, path="/rag-chroma-multi-modal")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-chroma-multi-modal/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma-multi-modal") |
https://python.langchain.com/docs/templates/neo4j-parent/ | ## neo4j-parent
This template allows you to balance precise embeddings and context retention by splitting documents into smaller chunks and retrieving their original or larger text information.
Using a Neo4j vector index, the package queries child nodes using vector similarity search and retrieves the corresponding parent's text by defining an appropriate `retrieval_query` parameter.
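A hedged sketch of what that looks like with `Neo4jVector` (the Cypher in `retrieval_query` is illustrative and assumes each child node keeps a link to its parent):

```
from langchain_community.vectorstores import Neo4jVector
from langchain_openai import OpenAIEmbeddings

# Match on child chunks, but hand the parent's larger text back to the LLM.
retrieval_query = """
MATCH (node)<-[:HAS_CHILD]-(parent)
RETURN parent.text AS text, score, {} AS metadata
"""

vectorstore = Neo4jVector.from_existing_index(
    OpenAIEmbeddings(),
    index_name="retrieval",
    retrieval_query=retrieval_query,
)
retriever = vectorstore.as_retriever()
```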
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
You need to define the following environment variables
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Populating with data[](#populating-with-data "Direct link to Populating with data")
If you want to populate the DB with some example data, you can run `python ingest.py`. The script processes and stores sections of the text from the file `dune.txt` into a Neo4j graph database. First, the text is divided into larger chunks ("parents") and then further subdivided into smaller chunks ("children"), where both parent and child chunks overlap slightly to maintain context. After storing these chunks in the database, embeddings for the child nodes are computed using OpenAI's embeddings and stored back in the graph for future retrieval or analysis. Additionally, a vector index named `retrieval` is created for efficient querying of these embeddings.
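The parent/child split itself can be reproduced with two text splitters; here is a rough sketch (the chunk sizes are illustrative, not necessarily the script's exact values):

```
from langchain_text_splitters import TokenTextSplitter

parent_splitter = TokenTextSplitter(chunk_size=512, chunk_overlap=24)
child_splitter = TokenTextSplitter(chunk_size=100, chunk_overlap=24)

with open("dune.txt") as f:
    text = f.read()

# Each parent is subdivided into overlapping child chunks; the children get
# embedded, while the parent text is what the retriever returns as context.
parent_chunks = parent_splitter.split_text(text)
child_chunks = [child_splitter.split_text(parent) for parent in parent_chunks]
```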
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package neo4j-parent
```
If you want to add this to an existing project, you can just run:
```
langchain app add neo4j-parent
```
And add the following code to your `server.py` file:
```
from neo4j_parent import chain as neo4j_parent_chain
add_routes(app, neo4j_parent_chain, path="/neo4j-parent")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/neo4j-parent/playground](http://127.0.0.1:8000/neo4j-parent/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-parent")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:56.291Z",
"loadedUrl": "https://python.langchain.com/docs/templates/neo4j-parent/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/neo4j-parent/",
"description": "This template allows you to balance precise embeddings and context retention by splitting documents into smaller chunks and retrieving their original or larger text information.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"neo4j-parent\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:56 GMT",
"etag": "W/\"4356421185a1d9b1ac66f0544e44281f\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::kqp69-1713753956127-40386eaefb4f"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/neo4j-parent/",
"property": "og:url"
},
{
"content": "neo4j-parent | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template allows you to balance precise embeddings and context retention by splitting documents into smaller chunks and retrieving their original or larger text information.",
"property": "og:description"
}
],
"title": "neo4j-parent | 🦜️🔗 LangChain"
} | neo4j-parent
This template allows you to balance precise embeddings and context retention by splitting documents into smaller chunks and retrieving their original or larger text information.
Using a Neo4j vector index, the package queries child nodes using vector similarity search and retrieves the corresponding parent's text by defining an appropriate retrieval_query parameter.
Environment Setup
You need to define the following environment variables
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
Populating with data
If you want to populate the DB with some example data, you can run python ingest.py. The script processes and stores sections of the text from the file dune.txt into a Neo4j graph database. First, the text is divided into larger chunks ("parents") and then further subdivided into smaller chunks ("children"), where both parent and child chunks overlap slightly to maintain context. After storing these chunks in the database, embeddings for the child nodes are computed using OpenAI's embeddings and stored back in the graph for future retrieval or analysis. Additionally, a vector index named retrieval is created for efficient querying of these embeddings.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package neo4j-parent
If you want to add this to an existing project, you can just run:
langchain app add neo4j-parent
And add the following code to your server.py file:
from neo4j_parent import chain as neo4j_parent_chain
add_routes(app, neo4j_parent_chain, path="/neo4j-parent")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/neo4j-parent/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-parent") |
https://python.langchain.com/docs/templates/neo4j-semantic-layer/ | ## neo4j-semantic-layer
This template is designed to implement an agent capable of interacting with a graph database like Neo4j through a semantic layer using OpenAI function calling. The semantic layer equips the agent with a suite of robust tools, allowing it to interact with the graph database based on the user's intent. Learn more about the semantic layer template in the [corresponding blog post](https://medium.com/towards-data-science/enhancing-interaction-between-language-models-and-graph-databases-via-a-semantic-layer-0a78ad3eba49).
The agent utilizes several tools to interact with the Neo4j graph database effectively (a simplified agent sketch follows the list):
1. **Information tool**:
* Retrieves data about movies or individuals, ensuring the agent has access to the latest and most relevant information.
2. **Recommendation Tool**:
* Provides movie recommendations based upon user preferences and input.
3. **Memory Tool**:
* Stores information about user preferences in the knowledge graph, allowing for a personalized experience over multiple interactions.
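Below is a heavily simplified sketch of how tools like these can be exposed to an OpenAI-functions agent (the tool bodies are placeholders, not the template's implementation, which runs real Cypher queries):

```
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def get_information(entity: str) -> str:
    """Look up details about a movie or person in the graph."""
    return "placeholder: a Cypher query against Neo4j would run here"


@tool
def recommend_movie(preference: str) -> str:
    """Recommend movies matching the user's stated preference."""
    return "placeholder: a similarity query against the graph would run here"


tools = [get_information, recommend_movie]
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful movie assistant."),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)
agent = create_openai_functions_agent(ChatOpenAI(temperature=0), tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
agent_executor.invoke({"input": "Recommend me a crime movie."})
```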
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
You need to define the following environment variables
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Populating with data[](#populating-with-data "Direct link to Populating with data")
If you want to populate the DB with an example movie dataset, you can run `python ingest.py`. The script imports information about movies and their ratings by users. Additionally, the script creates two [fulltext indices](https://neo4j.com/docs/cypher-manual/current/indexes-for-full-text-search/), which are used to map information from user input to the database.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package neo4j-semantic-layer
```
If you want to add this to an existing project, you can just run:
```
langchain app add neo4j-semantic-layer
```
And add the following code to your `server.py` file:
```
from neo4j_semantic_layer import agent_executor as neo4j_semantic_agent
add_routes(app, neo4j_semantic_agent, path="/neo4j-semantic-layer")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/neo4j-semantic-layer/playground](http://127.0.0.1:8000/neo4j-semantic-layer/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-semantic-layer")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:56.778Z",
"loadedUrl": "https://python.langchain.com/docs/templates/neo4j-semantic-layer/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/neo4j-semantic-layer/",
"description": "This template is designed to implement an agent capable of interacting with a graph database like Neo4j through a semantic layer using OpenAI function calling.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3760",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"neo4j-semantic-layer\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:56 GMT",
"etag": "W/\"7326c6c26fad99644ed1908357c3f672\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::dz74w-1713753956702-f2dc5d8f6082"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/neo4j-semantic-layer/",
"property": "og:url"
},
{
"content": "neo4j-semantic-layer | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template is designed to implement an agent capable of interacting with a graph database like Neo4j through a semantic layer using OpenAI function calling.",
"property": "og:description"
}
],
"title": "neo4j-semantic-layer | 🦜️🔗 LangChain"
} | neo4j-semantic-layer
This template is designed to implement an agent capable of interacting with a graph database like Neo4j through a semantic layer using OpenAI function calling. The semantic layer equips the agent with a suite of robust tools, allowing it to interact with the graph database based on the user's intent. Learn more about the semantic layer template in the corresponding blog post.
The agent utilizes several tools to interact with the Neo4j graph database effectively:
Information tool:
Retrieves data about movies or individuals, ensuring the agent has access to the latest and most relevant information.
Recommendation Tool:
Provides movie recommendations based upon user preferences and input.
Memory Tool:
Stores information about user preferences in the knowledge graph, allowing for a personalized experience over multiple interactions.
Environment Setup
You need to define the following environment variables
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
Populating with data
If you want to populate the DB with an example movie dataset, you can run python ingest.py. The script imports information about movies and their ratings by users. Additionally, the script creates two fulltext indices, which are used to map information from user input to the database.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U "langchain-cli[serve]"
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package neo4j-semantic-layer
If you want to add this to an existing project, you can just run:
langchain app add neo4j-semantic-layer
And add the following code to your server.py file:
from neo4j_semantic_layer import agent_executor as neo4j_semantic_agent
add_routes(app, neo4j_semantic_agent, path="/neo4j-semantic-layer")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/neo4j-semantic-layer/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-semantic-layer") |
https://python.langchain.com/docs/templates/neo4j-semantic-ollama/ | ## neo4j-semantic-ollama
This template is designed to implement an agent capable of interacting with a graph database like Neo4j through a semantic layer using Mixtral as a JSON-based agent. The semantic layer equips the agent with a suite of robust tools, allowing it to interact with the graph database based on the user's intent. Learn more about the semantic layer template in the [corresponding blog post](https://medium.com/towards-data-science/enhancing-interaction-between-language-models-and-graph-databases-via-a-semantic-layer-0a78ad3eba49) and specifically about [Mixtral agents with Ollama](https://blog.langchain.dev/json-based-agents-with-ollama-and-langchain/).
The agent utilizes several tools to interact with the Neo4j graph database effectively:
1. **Information tool**:
* Retrieves data about movies or individuals, ensuring the agent has access to the latest and most relevant information.
2. **Recommendation Tool**:
* Provides movie recommendations based upon user preferences and input.
3. **Memory Tool**:
* Stores information about user preferences in the knowledge graph, allowing for a personalized experience over multiple interactions.
4. **Smalltalk Tool**:
* Allows an agent to deal with smalltalk.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Before using this template, you need to set up Ollama and a Neo4j database.
1. Follow instructions [here](https://python.langchain.com/docs/integrations/chat/ollama) to download Ollama.
2. Download your LLM of interest:
* This package uses `mixtral`: `ollama pull mixtral`
* You can choose from many LLMs [here](https://ollama.ai/library)
You need to define the following environment variables
```
OLLAMA_BASE_URL=<YOUR_OLLAMA_URL>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Populating with data[](#populating-with-data "Direct link to Populating with data")
If you want to populate the DB with an example movie dataset, you can run `python ingest.py`. The script imports information about movies and their ratings by users. Additionally, the script creates two [fulltext indices](https://neo4j.com/docs/cypher-manual/current/indexes-for-full-text-search/), which are used to map information from user input to the database.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package neo4j-semantic-ollama
```
If you want to add this to an existing project, you can just run:
```
langchain app add neo4j-semantic-ollama
```
And add the following code to your `server.py` file:
```
from neo4j_semantic_layer import agent_executor as neo4j_semantic_agent
add_routes(app, neo4j_semantic_agent, path="/neo4j-semantic-ollama")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
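The command itself did not survive extraction here; with the LangChain CLI this is presumably:

```
langchain serve
```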
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/).
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/neo4j-semantic-ollama/playground](http://127.0.0.1:8000/neo4j-semantic-ollama/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-semantic-ollama")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:56.875Z",
"loadedUrl": "https://python.langchain.com/docs/templates/neo4j-semantic-ollama/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/neo4j-semantic-ollama/",
"description": "This template is designed to implement an agent capable of interacting with a graph database like Neo4j through a semantic layer using Mixtral as a JSON-based agent.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3760",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"neo4j-semantic-ollama\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:56 GMT",
"etag": "W/\"2d0557bfa0a80c35f9d63b9bdd568e1b\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::zmgp6-1713753956737-bcfe1e2491fe"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/neo4j-semantic-ollama/",
"property": "og:url"
},
{
"content": "neo4j-semantic-ollama | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template is designed to implement an agent capable of interacting with a graph database like Neo4j through a semantic layer using Mixtral as a JSON-based agent.",
"property": "og:description"
}
],
"title": "neo4j-semantic-ollama | 🦜️🔗 LangChain"
} | neo4j-semantic-ollama
This template is designed to implement an agent capable of interacting with a graph database like Neo4j through a semantic layer using Mixtral as a JSON-based agent. The semantic layer equips the agent with a suite of robust tools, allowing it to interact with the graph database based on the user's intent. Learn more about the semantic layer template in the corresponding blog post and specifically about Mixtral agents with Ollama.
The agent utilizes several tools to interact with the Neo4j graph database effectively:
Information tool:
Retrieves data about movies or individuals, ensuring the agent has access to the latest and most relevant information.
Recommendation Tool:
Provides movie recommendations based upon user preferences and input.
Memory Tool:
Stores information about user preferences in the knowledge graph, allowing for a personalized experience over multiple interactions.
Smalltalk Tool:
Allows an agent to deal with smalltalk.
Environment Setup
Before using this template, you need to set up Ollama and a Neo4j database.
Follow instructions here to download Ollama.
Download your LLM of interest:
This package uses mixtral: ollama pull mixtral
You can choose from many LLMs here
You need to define the following environment variables
OLLAMA_BASE_URL=<YOUR_OLLAMA_URL>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
Populating with data
If you want to populate the DB with an example movie dataset, you can run python ingest.py. The script imports information about movies and their ratings by users. Additionally, the script creates two fulltext indices, which are used to map information from user input to the database.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U "langchain-cli[serve]"
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package neo4j-semantic-ollama
If you want to add this to an existing project, you can just run:
langchain app add neo4j-semantic-ollama
And add the following code to your server.py file:
from neo4j_semantic_layer import agent_executor as neo4j_semantic_agent
add_routes(app, neo4j_semantic_agent, path="/neo4j-semantic-ollama")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/neo4j-semantic-ollama/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-semantic-ollama") |
https://python.langchain.com/docs/templates/neo4j-vector-memory/ | ## neo4j-vector-memory
This template allows you to integrate an LLM with a vector-based retrieval system using Neo4j as the vector store. Additionally, it uses the graph capabilities of the Neo4j database to store and retrieve the dialogue history of a specific user's session. Having the dialogue history stored as a graph allows for seamless conversational flows but also gives you the ability to analyze user behavior and text chunk retrieval through graph analytics.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
You need to define the following environment variables
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Populating with data[](#populating-with-data "Direct link to Populating with data")
If you want to populate the DB with some example data, you can run `python ingest.py`. The script processes and stores sections of the text from the file `dune.txt` into a Neo4j graph database. Additionally, a vector index named `dune` is created for efficient querying of these embeddings.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package neo4j-vector-memory
```
If you want to add this to an existing project, you can just run:
```
langchain app add neo4j-vector-memory
```
And add the following code to your `server.py` file:
```
from neo4j_vector_memory import chain as neo4j_vector_memory_chain
add_routes(app, neo4j_vector_memory_chain, path="/neo4j-vector-memory")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
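The command block is missing; under the usual LangChain CLI workflow it would be:

```
langchain serve
```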
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/).
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/neo4j-vector-memory/playground](http://127.0.0.1:8000/neo4j-vector-memory/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-vector-memory")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:56.995Z",
"loadedUrl": "https://python.langchain.com/docs/templates/neo4j-vector-memory/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/neo4j-vector-memory/",
"description": "This template allows you to integrate an LLM with a vector-based retrieval system using Neo4j as the vector store.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"neo4j-vector-memory\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:56 GMT",
"etag": "W/\"fc112a262f92c652b687a53e0199e0e9\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::np5t5-1713753956702-5d0997d0792e"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/neo4j-vector-memory/",
"property": "og:url"
},
{
"content": "neo4j-vector-memory | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template allows you to integrate an LLM with a vector-based retrieval system using Neo4j as the vector store.",
"property": "og:description"
}
],
"title": "neo4j-vector-memory | 🦜️🔗 LangChain"
} | neo4j-vector-memory
This template allows you to integrate an LLM with a vector-based retrieval system using Neo4j as the vector store. Additionally, it uses the graph capabilities of the Neo4j database to store and retrieve the dialogue history of a specific user's session. Having the dialogue history stored as a graph allows for seamless conversational flows but also gives you the ability to analyze user behavior and text chunk retrieval through graph analytics.
Environment Setup
You need to define the following environment variables
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
Populating with data
If you want to populate the DB with some example data, you can run python ingest.py. The script processes and stores sections of the text from the file dune.txt into a Neo4j graph database. Additionally, a vector index named dune is created for efficient querying of these embeddings.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package neo4j-vector-memory
If you want to add this to an existing project, you can just run:
langchain app add neo4j-vector-memory
And add the following code to your server.py file:
from neo4j_vector_memory import chain as neo4j_vector_memory_chain
add_routes(app, neo4j_vector_memory_chain, path="/neo4j-vector-memory")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/neo4j-vector-memory/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-vector-memory") |
https://python.langchain.com/docs/templates/rag-chroma/ | ## rag-chroma
This template performs RAG using Chroma and OpenAI.
The vectorstore is created in `chain.py` and by default indexes a [popular blog post on Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) for question-answering.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-chroma
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-chroma
```
And add the following code to your `server.py` file:
```
from rag_chroma import chain as rag_chroma_chain
add_routes(app, rag_chroma_chain, path="/rag-chroma")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
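The serve command is absent from this page; presumably, with the LangChain CLI installed:

```
langchain serve
```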
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/).
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/rag-chroma/playground](http://127.0.0.1:8000/rag-chroma/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma")
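# Hypothetical call — many RAG templates take the question directly as input,
# but verify this chain's exact schema at /rag-chroma/playground:
answer = runnable.invoke("What are the main components of an LLM-powered agent?")
print(answer)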
```
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:57.138Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-chroma/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-chroma/",
"description": "This template performs RAG using Chroma and OpenAI.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-chroma\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:56 GMT",
"etag": "W/\"7ce03853bfd79d469a8cba8ad9068949\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::2mngq-1713753956746-c0d2502d6efb"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-chroma/",
"property": "og:url"
},
{
"content": "rag-chroma | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG using Chroma and OpenAI.",
"property": "og:description"
}
],
"title": "rag-chroma | 🦜️🔗 LangChain"
} | rag-chroma
This template performs RAG using Chroma and OpenAI.
The vectorstore is created in chain.py and by default indexes a popular blog post on Agents for question-answering.
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-chroma
If you want to add this to an existing project, you can just run:
langchain app add rag-chroma
And add the following code to your server.py file:
from rag_chroma import chain as rag_chroma_chain
add_routes(app, rag_chroma_chain, path="/rag-chroma")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/rag-chroma/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma") |
https://python.langchain.com/docs/templates/rag-codellama-fireworks/ | ## rag-codellama-fireworks
This template performs RAG on a codebase.
It uses codellama-34b hosted by Fireworks' [LLM inference API](https://blog.fireworks.ai/accelerating-code-completion-with-fireworks-fast-llm-inference-f4e8b5ec534a).
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `FIREWORKS_API_KEY` environment variable to access the Fireworks models.
You can obtain it from [here](https://app.fireworks.ai/login?callbackURL=https://app.fireworks.ai).
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-codellama-fireworks
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-codellama-fireworks
```
And add the following code to your `server.py` file:
```
from rag_codellama_fireworks import chain as rag_codellama_fireworks_chain
add_routes(app, rag_codellama_fireworks_chain, path="/rag-codellama-fireworks")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
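The command block is missing here; assuming the standard LangChain CLI, it is:

```
langchain serve
```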
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/).
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/rag-codellama-fireworks/playground](http://127.0.0.1:8000/rag-codellama-fireworks/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-codellama-fireworks")
```
| null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:57.072Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-codellama-fireworks/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-codellama-fireworks/",
"description": "This template performs RAG on a codebase.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3758",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-codellama-fireworks\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:56 GMT",
"etag": "W/\"a82e53613e51062edf4b0e780db26944\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::vtglz-1713753956791-e4637cf4c92d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-codellama-fireworks/",
"property": "og:url"
},
{
"content": "rag-codellama-fireworks | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG on a codebase.",
"property": "og:description"
}
],
"title": "rag-codellama-fireworks | 🦜️🔗 LangChain"
} | rag-codellama-fireworks
This template performs RAG on a codebase.
It uses codellama-34b hosted by Fireworks' LLM inference API.
Environment Setup
Set the FIREWORKS_API_KEY environment variable to access the Fireworks models.
You can obtain it from here.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-codellama-fireworks
If you want to add this to an existing project, you can just run:
langchain app add rag-codellama-fireworks
And add the following code to your server.py file:
from rag_codellama_fireworks import chain as rag_codellama_fireworks_chain
add_routes(app, rag_codellama_fireworks_chain, path="/rag-codellama-fireworks")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/rag-codellama-fireworks/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-codellama-fireworks") |
https://python.langchain.com/docs/templates/rag-chroma-private/ | ## rag-chroma-private
This template performs RAG with no reliance on external APIs.
It utilizes Ollama for the LLM, GPT4All for embeddings, and Chroma for the vectorstore.
The vectorstore is created in `chain.py` and by default indexes a [popular blog post on Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) for question-answering.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
To set up the environment, you need to download Ollama.
Follow the instructions [here](https://python.langchain.com/docs/integrations/chat/ollama).
You can choose the desired LLM with Ollama.
This template uses `llama2:7b-chat`, which can be accessed using `ollama pull llama2:7b-chat`.
There are many other options available [here](https://ollama.ai/library).
This package also uses [GPT4All](https://python.langchain.com/docs/integrations/text_embedding/gpt4all) embeddings.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-chroma-private
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-chroma-private
```
And add the following code to your `server.py` file:
```
from rag_chroma_private import chain as rag_chroma_private_chain
add_routes(app, rag_chroma_private_chain, path="/rag-chroma-private")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
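The command itself is missing; with the LangChain CLI this would presumably be:

```
langchain serve
```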
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/).
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/rag-chroma-private/playground](http://127.0.0.1:8000/rag-chroma-private/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma-private")
```
The package will create and add documents to the vector database in `chain.py`. By default, it will load a popular blog post on agents. However, you can choose from a large number of document loaders [here](https://python.langchain.com/docs/integrations/document_loaders). | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:57.660Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-chroma-private/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-chroma-private/",
"description": "This template performs RAG with no reliance on external APIs.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "5246",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-chroma-private\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:57 GMT",
"etag": "W/\"a144dd1a9d47d0b09f6518afdd9430ac\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::4w6m2-1713753957048-a6385c6485be"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-chroma-private/",
"property": "og:url"
},
{
"content": "rag-chroma-private | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG with no reliance on external APIs.",
"property": "og:description"
}
],
"title": "rag-chroma-private | 🦜️🔗 LangChain"
} | rag-chroma-private
This template performs RAG with no reliance on external APIs.
It utilizes Ollama for the LLM, GPT4All for embeddings, and Chroma for the vectorstore.
The vectorstore is created in chain.py and by default indexes a popular blog post on Agents for question-answering.
Environment Setup
To set up the environment, you need to download Ollama.
Follow the instructions here.
You can choose the desired LLM with Ollama.
This template uses llama2:7b-chat, which can be accessed using ollama pull llama2:7b-chat.
There are many other options available here.
This package also uses GPT4All embeddings.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-chroma-private
If you want to add this to an existing project, you can just run:
langchain app add rag-chroma-private
And add the following code to your server.py file:
from rag_chroma_private import chain as rag_chroma_private_chain
add_routes(app, rag_chroma_private_chain, path="/rag-chroma-private")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/rag-chroma-private/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma-private")
The package will create and add documents to the vector database in chain.py. By default, it will load a popular blog post on agents. However, you can choose from a large number of document loaders here. |
https://python.langchain.com/docs/templates/rag-timescale-hybrid-search-time/ | This template shows how to use timescale-vector with the self-query retriver to perform hybrid search on similarity and time. This is useful any time your data has a strong time-based component. Some examples of such data are:
* News articles (politics, business, etc)
* Blog posts, documentation or other published material (public or private).
* Social media posts
* Changelogs of any kind
* Messages
Such items are often searched by both similarity and time. For example: Show me all news about Toyota trucks from 2022.
[Timescale Vector](https://www.timescale.com/ai?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) provides superior performance when searching for embeddings within a particular timeframe by leveraging automatic table partitioning to isolate data for particular time-ranges.
Langchain's self-query retriever allows deducing time-ranges (as well as other search criteria) from the text of user queries.
## What is Timescale Vector?[](#what-is-timescale-vector "Direct link to What is Timescale Vector?")
**[Timescale Vector](https://www.timescale.com/ai?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) is PostgreSQL++ for AI applications.**
Timescale Vector enables you to efficiently store and query billions of vector embeddings in `PostgreSQL`.
* Enhances `pgvector` with faster and more accurate similarity search on 1B+ vectors via a DiskANN-inspired indexing algorithm.
* Enables fast time-based vector search via automatic time-based partitioning and indexing.
* Provides a familiar SQL interface for querying vector embeddings and relational data.
Timescale Vector is cloud PostgreSQL for AI that scales with you from POC to production:
* Simplifies operations by enabling you to store relational metadata, vector embeddings, and time-series data in a single database.
* Benefits from a rock-solid PostgreSQL foundation with enterprise-grade features like streaming backups and replication, high availability, and row-level security.
* Enables a worry-free experience with enterprise-grade security and compliance.
### How to access Timescale Vector[](#how-to-access-timescale-vector "Direct link to How to access Timescale Vector")
Timescale Vector is available on [Timescale](https://www.timescale.com/products?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral), the cloud PostgreSQL platform. (There is no self-hosted version at this time.)
* LangChain users get a 90-day free trial for Timescale Vector.
* To get started, [signup](https://console.cloud.timescale.com/signup?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) to Timescale, create a new database and follow this notebook!
* See the [installation instructions](https://github.com/timescale/python-vector) for more details on using Timescale Vector in python.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
This template uses Timescale Vector as a vectorstore and requires that `TIMESCALES_SERVICE_URL` is set. Sign up for a 90-day trial [here](https://console.cloud.timescale.com/signup?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) if you don't yet have an account.
To load the sample dataset, set `LOAD_SAMPLE_DATA=1`. To load your own dataset see the section below.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-timescale-hybrid-search-time
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-timescale-hybrid-search-time
```
And add the following code to your `server.py` file:
```
from rag_timescale_hybrid_search.chain import chain as rag_timescale_hybrid_search_chain
add_routes(app, rag_timescale_hybrid_search_chain, path="/rag-timescale-hybrid-search")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
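The command block did not make it onto this page; under the usual LangChain CLI workflow:

```
langchain serve
```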
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/).
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/rag-timescale-hybrid-search/playground](http://127.0.0.1:8000/rag-timescale-hybrid-search/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-timescale-hybrid-search")
```
## Loading your own dataset[](#loading-your-own-dataset "Direct link to Loading your own dataset")
To load your own dataset you will have to modify the code in the `DATASET SPECIFIC CODE` section of `chain.py`. This code defines the name of the collection, how to load the data, and the human-language description of both the contents of the collection and all of the metadata. The human-language descriptions are used by the self-query retriever to help the LLM convert the question into filters on the metadata when searching the data in Timescale-vector. | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:57.465Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-timescale-hybrid-search-time/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-timescale-hybrid-search-time/",
"description": "This template shows how to use timescale-vector with the self-query retriver to perform hybrid search on similarity and time.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4929",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-timescale-hybrid-search-time\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:57 GMT",
"etag": "W/\"65299a6970db2cba311175e8c4b553ac\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::dzpq5-1713753957025-a9ca35e4484e"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-timescale-hybrid-search-time/",
"property": "og:url"
},
{
"content": "RAG with Timescale Vector using hybrid search | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template shows how to use timescale-vector with the self-query retriver to perform hybrid search on similarity and time.",
"property": "og:description"
}
],
"title": "RAG with Timescale Vector using hybrid search | 🦜️🔗 LangChain"
This template shows how to use timescale-vector with the self-query retriever to perform hybrid search on similarity and time. This is useful any time your data has a strong time-based component. Some examples of such data are:
News articles (politics, business, etc)
Blog posts, documentation or other published material (public or private).
Social media posts
Changelogs of any kind
Messages
Such items are often searched by both similarity and time. For example: Show me all news about Toyota trucks from 2022.
Timescale Vector provides superior performance when searching for embeddings within a particular timeframe by leveraging automatic table partitioning to isolate data for particular time-ranges.
Langchain's self-query retriever allows deducing time-ranges (as well as other search criteria) from the text of user queries.
What is Timescale Vector?
Timescale Vector is PostgreSQL++ for AI applications.
Timescale Vector enables you to efficiently store and query billions of vector embeddings in PostgreSQL.
Enhances pgvector with faster and more accurate similarity search on 1B+ vectors via a DiskANN-inspired indexing algorithm.
Enables fast time-based vector search via automatic time-based partitioning and indexing.
Provides a familiar SQL interface for querying vector embeddings and relational data.
Timescale Vector is cloud PostgreSQL for AI that scales with you from POC to production:
Simplifies operations by enabling you to store relational metadata, vector embeddings, and time-series data in a single database.
Benefits from a rock-solid PostgreSQL foundation with enterprise-grade features like streaming backups and replication, high availability, and row-level security.
Enables a worry-free experience with enterprise-grade security and compliance.
How to access Timescale Vector
Timescale Vector is available on Timescale, the cloud PostgreSQL platform. (There is no self-hosted version at this time.)
LangChain users get a 90-day free trial for Timescale Vector.
To get started, signup to Timescale, create a new database and follow this notebook!
See the installation instructions for more details on using Timescale Vector in python.
Environment Setup
This template uses Timescale Vector as a vectorstore and requires that TIMESCALES_SERVICE_URL is set. Sign up for a 90-day trial here if you don't yet have an account.
To load the sample dataset, set LOAD_SAMPLE_DATA=1. To load your own dataset see the section below.
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-timescale-hybrid-search-time
If you want to add this to an existing project, you can just run:
langchain app add rag-timescale-hybrid-search-time
And add the following code to your server.py file:
from rag_timescale_hybrid_search.chain import chain as rag_timescale_hybrid_search_chain
add_routes(app, rag_timescale_hybrid_search_chain, path="/rag-timescale-hybrid-search")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/rag-timescale-hybrid-search/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-timescale-hybrid-search")
Loading your own dataset
To load your own dataset you will have to modify the code in the DATASET SPECIFIC CODE section of chain.py. This code defines the name of the collection, how to load the data, and the human-language description of both the contents of the collection and all of the metadata. The human-language descriptions are used by the self-query retriever to help the LLM convert the question into filters on the metadata when searching the data in Timescale-vector. |
https://python.langchain.com/docs/templates/rag-conversation-zep/ | ## rag-conversation-zep
This template demonstrates building a RAG conversation app using Zep.
Included in this template:
* Populating a [Zep Document Collection](https://docs.getzep.com/sdk/documents/) with a set of documents (a Collection is analogous to an index in other Vector Databases).
* Using Zep's [integrated embedding](https://docs.getzep.com/deployment/embeddings/) functionality to embed the documents as vectors.
* Configuring a LangChain [ZepVectorStore Retriever](https://docs.getzep.com/sdk/documents/) to retrieve documents using Zep's built-in, hardware-accelerated [Maximal Marginal Relevance](https://docs.getzep.com/sdk/search_query/) (MMR) re-ranking.
* Prompts, a simple chat history data structure, and other components required to build a RAG conversation app.
* The RAG conversation chain.
## About [Zep - Fast, scalable building blocks for LLM Apps](https://www.getzep.com/)[](#about-zep---fast-scalable-building-blocks-for-llm-apps "Direct link to about-zep---fast-scalable-building-blocks-for-llm-apps")
Zep is an open source platform for productionizing LLM apps. Go from a prototype built in LangChain or LlamaIndex, or a custom app, to production in minutes without rewriting code.
Key Features:
* Fast! Zep’s async extractors operate independently of your chat loop, ensuring a snappy user experience.
* Long-term memory persistence, with access to historical messages irrespective of your summarization strategy.
* Auto-summarization of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies.
* Hybrid search over memories and metadata, with messages automatically embedded on creation.
* Entity Extractor that automatically extracts named entities from messages and stores them in the message metadata.
* Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.
* Python and JavaScript SDKs.
Zep project: [https://github.com/getzep/zep](https://github.com/getzep/zep) | Docs: [https://docs.getzep.com/](https://docs.getzep.com/)
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set up a Zep service by following the [Quick Start Guide](https://docs.getzep.com/deployment/quickstart/).
## Ingesting Documents into a Zep Collection[](#ingesting-documents-into-a-zep-collection "Direct link to Ingesting Documents into a Zep Collection")
Run `python ingest.py` to ingest the test documents into a Zep Collection. Review the file to modify the Collection name and document source.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-conversation-zep
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-conversation-zep
```
And add the following code to your `server.py` file:
```
from rag_conversation_zep import chain as rag_conversation_zep_chain
add_routes(app, rag_conversation_zep_chain, path="/rag-conversation-zep")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
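The serve command is missing here; presumably, with the LangChain CLI installed:

```
langchain serve
```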
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/).
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs). We can access the playground at [http://127.0.0.1:8000/rag-conversation-zep/playground](http://127.0.0.1:8000/rag-conversation-zep/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-conversation-zep")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:58.156Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-conversation-zep/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-conversation-zep/",
"description": "This template demonstrates building a RAG conversation app using Zep.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-conversation-zep\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:58 GMT",
"etag": "W/\"e5aa57cdaeaa24724518cf1a3b124a2a\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::rtm4t-1713753958049-fdc3d2d40bd2"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-conversation-zep/",
"property": "og:url"
},
{
"content": "rag-conversation-zep | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template demonstrates building a RAG conversation app using Zep.",
"property": "og:description"
}
],
"title": "rag-conversation-zep | 🦜️🔗 LangChain"
} | rag-conversation-zep
This template demonstrates building a RAG conversation app using Zep.
Included in this template:
Populating a Zep Document Collection with a set of documents (a Collection is analogous to an index in other Vector Databases).
Using Zep's integrated embedding functionality to embed the documents as vectors.
Configuring a LangChain ZepVectorStore Retriever to retrieve documents using Zep's built-in, hardware-accelerated Maximal Marginal Relevance (MMR) re-ranking.
Prompts, a simple chat history data structure, and other components required to build a RAG conversation app.
The RAG conversation chain.
About Zep - Fast, scalable building blocks for LLM Apps
Zep is an open source platform for productionizing LLM apps. Go from a prototype built in LangChain or LlamaIndex, or a custom app, to production in minutes without rewriting code.
Key Features:
Fast! Zep’s async extractors operate independently of your chat loop, ensuring a snappy user experience.
Long-term memory persistence, with access to historical messages irrespective of your summarization strategy.
Auto-summarization of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies.
Hybrid search over memories and metadata, with messages automatically embedded on creation.
Entity Extractor that automatically extracts named entities from messages and stores them in the message metadata.
Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.
Python and JavaScript SDKs.
Zep project: https://github.com/getzep/zep | Docs: https://docs.getzep.com/
Environment Setup
Set up a Zep service by following the Quick Start Guide.
Ingesting Documents into a Zep Collection
Run python ingest.py to ingest the test documents into a Zep Collection. Review the file to modify the Collection name and document source.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U "langchain-cli[serve]"
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-conversation-zep
If you want to add this to an existing project, you can just run:
langchain app add rag-conversation-zep
And add the following code to your server.py file:
from rag_conversation_zep import chain as rag_conversation_zep_chain
add_routes(app, rag_conversation_zep_chain, path="/rag-conversation-zep")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs. We can access the playground at http://127.0.0.1:8000/rag-conversation-zep/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-conversation-zep") |
https://python.langchain.com/docs/templates/rag-vectara-multiquery/ | ## rag-vectara-multiquery
This template performs multiquery RAG with vectara.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Also, ensure the following environment variables are set:
* `VECTARA_CUSTOMER_ID`
* `VECTARA_CORPUS_ID`
* `VECTARA_API_KEY`
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-vectara-multiquery
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-vectara-multiquery
```
And add the following code to your `server.py` file:
```
from rag_vectara import chain as rag_vectara_chain
add_routes(app, rag_vectara_chain, path="/rag-vectara-multiquery")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "vectara-demo"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-vectara-multiquery/playground](http://127.0.0.1:8000/rag-vectara-multiquery/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-vectara-multiquery")
```
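Once the server is up, the chain can also be exercised end to end. The snippet below is a hedged sketch: it assumes the chain accepts a plain question string as input, so adjust the payload if the template's chain expects a dict.

```
# Hedged usage sketch: query the running rag-vectara-multiquery endpoint.
# Assumes the LangServe server above is running and that the chain takes a
# plain question string (adjust if the template's chain expects a dict).
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-vectara-multiquery")
answer = runnable.invoke("What does the indexed corpus say about quarterly revenue?")
print(answer)
```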
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:58.281Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-vectara-multiquery/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-vectara-multiquery/",
"description": "This template performs multiquery RAG with vectara.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3756",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-vectara-multiquery\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:57 GMT",
"etag": "W/\"17841a3e220b55845515c91e00e32ed0\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::kflrz-1713753957923-525567649f61"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-vectara-multiquery/",
"property": "og:url"
},
{
"content": "rag-vectara-multiquery | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs multiquery RAG with vectara.",
"property": "og:description"
}
],
"title": "rag-vectara-multiquery | 🦜️🔗 LangChain"
} | rag-vectara-multiquery
This template performs multiquery RAG with vectara.
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Also, ensure the following environment variables are set:
VECTARA_CUSTOMER_ID
VECTARA_CORPUS_ID
VECTARA_API_KEY
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-vectara-multiquery
If you want to add this to an existing project, you can just run:
langchain app add rag-vectara-multiquery
And add the following code to your server.py file:
from rag_vectara import chain as rag_vectara_chain
add_routes(app, rag_vectara_chain, path="/rag-vectara-multiquery")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "vectara-demo"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-vectara-multiquery/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-vectara-multiquery")
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/templates/rag-elasticsearch/ | ## rag-elasticsearch
This template performs RAG using [Elasticsearch](https://python.langchain.com/docs/integrations/vectorstores/elasticsearch).
It relies on sentence transformer `MiniLM-L6-v2` for embedding passages and questions.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
To connect to your Elasticsearch instance, use the following environment variables:
```
export ELASTIC_CLOUD_ID=<CLOUD_ID>
export ELASTIC_USERNAME=<CLOUD_USERNAME>
export ELASTIC_PASSWORD=<CLOUD_PASSWORD>
```
For local development with Docker, use:
```
export ES_URL="http://localhost:9200"
```
And run an Elasticsearch instance in Docker with
```
docker run -p 9200:9200 -e "discovery.type=single-node" -e "xpack.security.enabled=false" -e "xpack.security.http.ssl.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:8.9.0
```
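Before ingesting anything, it can help to confirm the container is reachable. The sketch below uses only the standard library and the `ES_URL` variable set above.

```
# Sanity check: confirm the local Elasticsearch container responds.
# Relies only on the ES_URL environment variable set above.
import os
import urllib.request

es_url = os.environ.get("ES_URL", "http://localhost:9200")
with urllib.request.urlopen(es_url) as response:
    print(response.read().decode())  # prints the cluster info JSON
```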
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-elasticsearch
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-elasticsearch
```
And add the following code to your `server.py` file:
```
from rag_elasticsearch import chain as rag_elasticsearch_chain
add_routes(app, rag_elasticsearch_chain, path="/rag-elasticsearch")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-elasticsearch/playground](http://127.0.0.1:8000/rag-elasticsearch/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-elasticsearch")
```
For loading the fictional workplace documents, run the following command from the root of this repository:
However, you can choose from a large number of document loaders [here](https://python.langchain.com/docs/integrations/document_loaders). | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:58.419Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-elasticsearch/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-elasticsearch/",
"description": "This template performs RAG using Elasticsearch.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-elasticsearch\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:58 GMT",
"etag": "W/\"28d3f7e72a1b596fe52e6f2e72ca7493\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::76nkp-1713753958136-30a69cb23317"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-elasticsearch/",
"property": "og:url"
},
{
"content": "rag-elasticsearch | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG using Elasticsearch.",
"property": "og:description"
}
],
"title": "rag-elasticsearch | 🦜️🔗 LangChain"
} | rag-elasticsearch
This template performs RAG using Elasticsearch.
It relies on sentence transformer MiniLM-L6-v2 for embedding passages and questions.
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
To connect to your Elasticsearch instance, use the following environment variables:
export ELASTIC_CLOUD_ID=<CLOUD_ID>
export ELASTIC_USERNAME=<CLOUD_USERNAME>
export ELASTIC_PASSWORD=<CLOUD_PASSWORD>
For local development with Docker, use:
export ES_URL="http://localhost:9200"
And run an Elasticsearch instance in Docker with
docker run -p 9200:9200 -e "discovery.type=single-node" -e "xpack.security.enabled=false" -e "xpack.security.http.ssl.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:8.9.0
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-elasticsearch
If you want to add this to an existing project, you can just run:
langchain app add rag-elasticsearch
And add the following code to your server.py file:
from rag_elasticsearch import chain as rag_elasticsearch_chain
add_routes(app, rag_elasticsearch_chain, path="/rag-elasticsearch")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-elasticsearch/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-elasticsearch")
For loading the fictional workplace documents, run the following command from the root of this repository:
However, you can choose from a large number of document loaders here. |
https://python.langchain.com/docs/templates/rag-fusion/ | ## rag-fusion
This template enables RAG fusion using a re-implementation of the project found [here](https://github.com/Raudaschl/rag-fusion).
It performs multiple query generation and Reciprocal Rank Fusion to re-rank search results.
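Reciprocal Rank Fusion itself is a small scoring rule: every ranked list a document appears in contributes `1 / (k + rank)` to that document's score, and the summed scores decide the final order. The standalone sketch below illustrates the idea; it is not the template's exact code.

```
# Minimal Reciprocal Rank Fusion sketch (illustrative, not the template's exact code).
# Each generated query variation produces its own ranking; RRF merges them.
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)  # higher-ranked docs earn more
    return sorted(scores, key=scores.get, reverse=True)

# Three query variations ranked the same corpus slightly differently:
print(reciprocal_rank_fusion([["a", "b", "c"], ["b", "a", "d"], ["c", "b", "a"]]))
```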
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-fusion
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-fusion
```
And add the following code to your `server.py` file:
```
from rag_fusion.chain import chain as rag_fusion_chain
add_routes(app, rag_fusion_chain, path="/rag-fusion")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-fusion/playground](http://127.0.0.1:8000/rag-fusion/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-fusion")
```
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:58.599Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-fusion/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-fusion/",
"description": "This template enables RAG fusion using a re-implementation of the project found here.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-fusion\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:58 GMT",
"etag": "W/\"8c7f545deec638381c685363aa5fa884\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::gwsts-1713753958166-794035a68a3d"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-fusion/",
"property": "og:url"
},
{
"content": "rag-fusion | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template enables RAG fusion using a re-implementation of the project found here.",
"property": "og:description"
}
],
"title": "rag-fusion | 🦜️🔗 LangChain"
} | rag-fusion
This template enables RAG fusion using a re-implementation of the project found here.
It performs multiple query generation and Reciprocal Rank Fusion to re-rank search results.
Environment Setup
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-fusion
If you want to add this to an existing project, you can just run:
langchain app add rag-fusion
And add the following code to your server.py file:
from rag_fusion.chain import chain as rag_fusion_chain
add_routes(app, rag_fusion_chain, path="/rag-fusion")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-fusion/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-fusion")
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/templates/nvidia-rag-canonical/ | ## nvidia-rag-canonical
This template performs RAG using Milvus Vector Store and NVIDIA Models (Embedding and Chat).
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
You should export your NVIDIA API Key as an environment variable. If you do not have an NVIDIA API Key, you can create one by following these steps:
1. Create a free account with the [NVIDIA GPU Cloud](https://catalog.ngc.nvidia.com/) service, which hosts AI solution catalogs, containers, models, etc.
2. Navigate to `Catalog > AI Foundation Models > (Model with API endpoint)`.
3. Select the `API` option and click `Generate Key`.
4. Save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints.
```
export NVIDIA_API_KEY=...
```
For instructions on hosting the Milvus Vector Store, refer to the section at the bottom.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To use the NVIDIA models, install the Langchain NVIDIA AI Endpoints package:
```
pip install -U langchain_nvidia_aiplay
```
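If you want to confirm endpoint access before wiring up the chain, you can call an NVIDIA-hosted chat model directly. This is a hedged sketch: the model name is an assumption and may not match the template's default, so pick one available to your key.

```
# Hedged sketch: direct call to an NVIDIA-hosted chat model.
# Assumes NVIDIA_API_KEY is exported; the model name is an assumption,
# not necessarily the one the template uses.
from langchain_nvidia_aiplay import ChatNVIDIA

llm = ChatNVIDIA(model="mixtral_8x7b")
print(llm.invoke("Summarize what a canonical RAG pipeline does.").content)
```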
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package nvidia-rag-canonical
```
If you want to add this to an existing project, you can just run:
```
langchain app add nvidia-rag-canonical
```
And add the following code to your `server.py` file:
```
from nvidia_rag_canonical import chain as nvidia_rag_canonical_chain
add_routes(app, nvidia_rag_canonical_chain, path="/nvidia-rag-canonical")
```
If you want to set up an ingestion pipeline, you can add the following code to your `server.py` file:
```
from nvidia_rag_canonical import ingest as nvidia_rag_ingest
add_routes(app, nvidia_rag_ingest, path="/nvidia-rag-ingest")
```
Note that for files ingested by the ingestion API, the server will need to be restarted for the newly ingested files to be accessible by the retriever.
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you DO NOT already have a Milvus Vector Store you want to connect to, see `Milvus Setup` section below before proceeding.
If you DO have a Milvus Vector Store you want to connect to, edit the connection details in `nvidia_rag_canonical/chain.py`
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/nvidia-rag-canonical/playground](http://127.0.0.1:8000/nvidia-rag-canonical/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/nvidia-rag-canonical")
```
## Milvus Setup[](#milvus-setup "Direct link to Milvus Setup")
Use this step if you need to create a Milvus Vector Store and ingest data. We will first follow the standard Milvus setup instructions [here](https://milvus.io/docs/install_standalone-docker.md).
1. Download the Docker Compose YAML file.
```
wget https://github.com/milvus-io/milvus/releases/download/v2.3.3/milvus-standalone-docker-compose.yml -O docker-compose.yml
```
2. Start the Milvus Vector Store container
```
sudo docker compose up -d
```
3. Install the PyMilvus package to interact with the Milvus container.
4. Let's now ingest some data! We can do that by moving into this directory and running the code in `ingest.py`, e.g.:
Note that you can (and should!) change this to ingest data of your choice. | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:58.735Z",
"loadedUrl": "https://python.langchain.com/docs/templates/nvidia-rag-canonical/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/nvidia-rag-canonical/",
"description": "This template performs RAG using Milvus Vector Store and NVIDIA Models (Embedding and Chat).",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4942",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"nvidia-rag-canonical\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:58 GMT",
"etag": "W/\"bd51540ff3d7a94325d5f6b598ad5cd8\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::5fbxs-1713753958184-912f14ce35b1"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/nvidia-rag-canonical/",
"property": "og:url"
},
{
"content": "nvidia-rag-canonical | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template performs RAG using Milvus Vector Store and NVIDIA Models (Embedding and Chat).",
"property": "og:description"
}
],
"title": "nvidia-rag-canonical | 🦜️🔗 LangChain"
} | nvidia-rag-canonical
This template performs RAG using Milvus Vector Store and NVIDIA Models (Embedding and Chat).
Environment Setup
You should export your NVIDIA API Key as an environment variable. If you do not have an NVIDIA API Key, you can create one by following these steps:
Create a free account with the NVIDIA GPU Cloud service, which hosts AI solution catalogs, containers, models, etc.
Navigate to Catalog > AI Foundation Models > (Model with API endpoint).
Select the API option and click Generate Key.
Save the generated key as NVIDIA_API_KEY. From there, you should have access to the endpoints.
export NVIDIA_API_KEY=...
For instructions on hosting the Milvus Vector Store, refer to the section at the bottom.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To use the NVIDIA models, install the Langchain NVIDIA AI Endpoints package:
pip install -U langchain_nvidia_aiplay
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package nvidia-rag-canonical
If you want to add this to an existing project, you can just run:
langchain app add nvidia-rag-canonical
And add the following code to your server.py file:
from nvidia_rag_canonical import chain as nvidia_rag_canonical_chain
add_routes(app, nvidia_rag_canonical_chain, path="/nvidia-rag-canonical")
If you want to set up an ingestion pipeline, you can add the following code to your server.py file:
from nvidia_rag_canonical import ingest as nvidia_rag_ingest
add_routes(app, nvidia_rag_ingest, path="/nvidia-rag-ingest")
Note that for files ingested by the ingestion API, the server will need to be restarted for the newly ingested files to be accessible by the retriever.
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you DO NOT already have a Milvus Vector Store you want to connect to, see Milvus Setup section below before proceeding.
If you DO have a Milvus Vector Store you want to connect to, edit the connection details in nvidia_rag_canonical/chain.py
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/nvidia-rag-canonical/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/nvidia-rag-canonical")
Milvus Setup
Use this step if you need to create a Milvus Vector Store and ingest data. We will first follow the standard Milvus setup instructions here.
Download the Docker Compose YAML file.
wget https://github.com/milvus-io/milvus/releases/download/v2.3.3/milvus-standalone-docker-compose.yml -O docker-compose.yml
Start the Milvus Vector Store container
sudo docker compose up -d
Install the PyMilvus package to interact with the Milvus container.
Let's now ingest some data! We can do that by moving into this directory and running the code in ingest.py, e.g.:
Note that you can (and should!) change this to ingest data of your choice. |
https://python.langchain.com/docs/templates/rag-conversation/ | ## rag-conversation
This template is used for [conversational](https://python.langchain.com/docs/expression_language/cookbook/retrieval#conversational-retrieval-chain) [retrieval](https://python.langchain.com/docs/use_cases/question_answering/), which is one of the most popular LLM use-cases.
It passes both a conversation history and retrieved documents into an LLM for synthesis.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-conversation
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-conversation
```
And add the following code to your `server.py` file:
```
from rag_conversation import chain as rag_conversation_chain
add_routes(app, rag_conversation_chain, path="/rag-conversation")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-conversation/playground](http://127.0.0.1:8000/rag-conversation/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-conversation")
```
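Because this chain is conversational, a request carries both the new question and the prior turns. The key names below are assumptions about the chain's input schema; check the template's chain definition if they differ.

```
# Hedged usage sketch: invoke the conversational RAG chain with chat history.
# The "question"/"chat_history" keys are assumptions about the input schema.
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-conversation")
result = runnable.invoke(
    {
        "question": "How did that affect gross margin?",
        "chat_history": [
            ("What were the Q3 results?", "Revenue grew 25% year over year."),
        ],
    }
)
print(result)
```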
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:58.938Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-conversation/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-conversation/",
"description": "This template is used for conversational retrieval, which is one of the most popular LLM use-cases.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4938",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-conversation\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:58 GMT",
"etag": "W/\"cbee2fa2a6e2ffbc225e8955335d20c9\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::6jz7h-1713753958282-d6533107f085"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-conversation/",
"property": "og:url"
},
{
"content": "rag-conversation | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template is used for conversational retrieval, which is one of the most popular LLM use-cases.",
"property": "og:description"
}
],
"title": "rag-conversation | 🦜️🔗 LangChain"
} | rag-conversation
This template is used for conversational retrieval, which is one of the most popular LLM use-cases.
It passes both a conversation history and retrieved documents into an LLM for synthesis.
Environment Setup
This template uses Pinecone as a vectorstore and requires that PINECONE_API_KEY, PINECONE_ENVIRONMENT, and PINECONE_INDEX are set.
Set the OPENAI_API_KEY environment variable to access the OpenAI models.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-conversation
If you want to add this to an existing project, you can just run:
langchain app add rag-conversation
And add the following code to your server.py file:
from rag_conversation import chain as rag_conversation_chain
add_routes(app, rag_conversation_chain, path="/rag-conversation")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-conversation/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-conversation")
Help us out by providing feedback on this documentation page: |
https://python.langchain.com/docs/templates/rag-google-cloud-sensitive-data-protection/ | ## rag-google-cloud-sensitive-data-protection
This template is an application that utilizes Google Vertex AI Search, a machine learning powered search service, and PaLM 2 for Chat (chat-bison). The application uses a Retrieval chain to answer questions based on your documents.
This template is an application that utilizes Google Sensitive Data Protection, a service for detecting and redacting sensitive data in text, and PaLM 2 for Chat (chat-bison), although you can use any model.
For more context on using Sensitive Data Protection, check [here](https://cloud.google.com/dlp/docs/sensitive-data-protection-overview).
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Before using this template, please ensure that you enable the [DLP API](https://console.cloud.google.com/marketplace/product/google/dlp.googleapis.com) and [Vertex AI API](https://console.cloud.google.com/marketplace/product/google/aiplatform.googleapis.com) in your Google Cloud project.
For some common environment troubleshooting steps related to Google Cloud, see the bottom of this readme.
Set the following environment variables:
* `GOOGLE_CLOUD_PROJECT_ID` - Your Google Cloud project ID.
* `MODEL_TYPE` - The model type for Vertex AI Search (e.g. `chat-bison`)
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-google-cloud-sensitive-data-protection
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-google-cloud-sensitive-data-protection
```
And add the following code to your `server.py` file:
```
from rag_google_cloud_sensitive_data_protection.chain import chain as rag_google_cloud_sensitive_data_protection_chain
add_routes(app, rag_google_cloud_sensitive_data_protection_chain, path="/rag-google-cloud-sensitive-data-protection")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-google-cloud-sensitive-data-protection/playground](http://127.0.0.1:8000/rag-google-cloud-sensitive-data-protection/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-google-cloud-sensitive-data-protection")
```
## Troubleshooting Google Cloud
You can set your `gcloud` credentials with their CLI using `gcloud auth application-default login`.
You can set your `gcloud` project with the following commands:
```
gcloud config set project <your project>
gcloud auth application-default set-quota-project <your project>
export GOOGLE_CLOUD_PROJECT_ID=<your project>
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:59.142Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-google-cloud-sensitive-data-protection/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-google-cloud-sensitive-data-protection/",
"description": "This template is an application that utilizes Google Vertex AI Search, a machine learning powered search service, and",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "0",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-google-cloud-sensitive-data-protection\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:58 GMT",
"etag": "W/\"858073c3715989541e6d1a6fed692083\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "iad1::9vmcv-1713753958420-cbc8f2ea5b83"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-google-cloud-sensitive-data-protection/",
"property": "og:url"
},
{
"content": "rag-google-cloud-sensitive-data-protection | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template is an application that utilizes Google Vertex AI Search, a machine learning powered search service, and",
"property": "og:description"
}
],
"title": "rag-google-cloud-sensitive-data-protection | 🦜️🔗 LangChain"
} | rag-google-cloud-sensitive-data-protection
This template is an application that utilizes Google Vertex AI Search, a machine learning powered search service, and PaLM 2 for Chat (chat-bison). The application uses a Retrieval chain to answer questions based on your documents.
This template is an application that utilizes Google Sensitive Data Protection, a service for detecting and redacting sensitive data in text, and PaLM 2 for Chat (chat-bison), although you can use any model.
For more context on using Sensitive Data Protection, check here.
Environment Setup
Before using this template, please ensure that you enable the DLP API and Vertex AI API in your Google Cloud project.
For some common environment troubleshooting steps related to Google Cloud, see the bottom of this readme.
Set the following environment variables:
GOOGLE_CLOUD_PROJECT_ID - Your Google Cloud project ID.
MODEL_TYPE - The model type for Vertex AI Search (e.g. chat-bison)
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-google-cloud-sensitive-data-protection
If you want to add this to an existing project, you can just run:
langchain app add rag-google-cloud-sensitive-data-protection
And add the following code to your server.py file:
from rag_google_cloud_sensitive_data_protection.chain import chain as rag_google_cloud_sensitive_data_protection_chain
add_routes(app, rag_google_cloud_sensitive_data_protection_chain, path="/rag-google-cloud-sensitive-data-protection")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-google-cloud-sensitive-data-protection/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-google-cloud-sensitive-data-protection")
# Troubleshooting Google Cloud
You can set your `gcloud` credentials with their CLI using `gcloud auth application-default login`
You can set your `gcloud` project with the following commands
```bash
gcloud config set project <your project>
gcloud auth application-default set-quota-project <your project>
export GOOGLE_CLOUD_PROJECT_ID=<your project> |
https://python.langchain.com/docs/templates/rag-gemini-multi-modal/ | ## rag-gemini-multi-modal
Multi-modal LLMs enable visual assistants that can perform question-answering about images.
This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.
It uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma.
Given a question, relevant slides are retrieved and passed to [Google Gemini](https://deepmind.google/technologies/gemini/#introduction) for answer synthesis.
## Input[](#input "Direct link to Input")
Supply a slide deck as a PDF in the `/docs` directory.
By default, this template has a slide deck about Q3 earnings from Datadog, a public technology company.
Example questions to ask can be:
```
How many customers does Datadog have?
What is Datadog platform % Y/Y growth in FY20, FY21, and FY22?
```
To create an index of the slide deck, run:
```
poetry install
python ingest.py
```
## Storage[](#storage "Direct link to Storage")
This template will use [OpenCLIP](https://github.com/mlfoundations/open_clip) multi-modal embeddings to embed the images.
You can select different embedding model options (see results [here](https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_results.csv)).
The first time you run the app, it will automatically download the multimodal embedding model.
By default, LangChain will use an embedding model with moderate performance but lower memory requirements, `ViT-H-14`.
You can choose alternative `OpenCLIPEmbeddings` models in `rag_chroma_multi_modal/ingest.py`:
```
vectorstore_mmembd = Chroma(
    collection_name="multi-modal-rag",
    persist_directory=str(re_vectorstore_path),
    embedding_function=OpenCLIPEmbeddings(
        model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k"
    ),
)
```
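To see what this embedding function produces before building the index, you can embed a query directly. A small sketch follows; the import path is an assumption (the class is expected to come from `langchain_experimental`), while the model and checkpoint mirror the configuration above.

```
# Sketch: inspect the multi-modal embedding used for indexing and querying.
# The import path is an assumption; model/checkpoint mirror the config above.
from langchain_experimental.open_clip import OpenCLIPEmbeddings

embeddings = OpenCLIPEmbeddings(model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k")
vector = embeddings.embed_query("Datadog platform Y/Y growth")
print(len(vector))  # dimensionality of the shared image/text embedding space
```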
## LLM[](#llm "Direct link to LLM")
The app will retrieve images using multi-modal embeddings, and pass them to Google Gemini.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Set your `GOOGLE_API_KEY` environment variable in order to access Gemini.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-gemini-multi-modal
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-gemini-multi-modal
```
And add the following code to your `server.py` file:
```
from rag_gemini_multi_modal import chain as rag_gemini_multi_modal_chain
add_routes(app, rag_gemini_multi_modal_chain, path="/rag-gemini-multi-modal")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-gemini-multi-modal/playground](http://127.0.0.1:8000/rag-gemini-multi-modal/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-gemini-multi-modal")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:59.396Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-gemini-multi-modal/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-gemini-multi-modal/",
"description": "Multi-modal LLMs enable visual assistants that can perform question-answering about images.",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4937",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-gemini-multi-modal\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:58 GMT",
"etag": "W/\"664a71c1531727559f1f46eb870fe683\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::xvkrm-1713753958669-8f33ab6af1c1"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-gemini-multi-modal/",
"property": "og:url"
},
{
"content": "rag-gemini-multi-modal | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "Multi-modal LLMs enable visual assistants that can perform question-answering about images.",
"property": "og:description"
}
],
"title": "rag-gemini-multi-modal | 🦜️🔗 LangChain"
} | rag-gemini-multi-modal
Multi-modal LLMs enable visual assistants that can perform question-answering about images.
This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.
It uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma.
Given a question, relevant slides are retrieved and passed to Google Gemini for answer synthesis.
Input
Supply a slide deck as a PDF in the /docs directory.
By default, this template has a slide deck about Q3 earnings from Datadog, a public technology company.
Example questions to ask can be:
How many customers does Datadog have?
What is Datadog platform % Y/Y growth in FY20, FY21, and FY22?
To create an index of the slide deck, run:
poetry install
python ingest.py
Storage
This template will use OpenCLIP multi-modal embeddings to embed the images.
You can select different embedding model options (see results here).
The first time you run the app, it will automatically download the multimodal embedding model.
By default, LangChain will use an embedding model with moderate performance but lower memory requirements, ViT-H-14.
You can choose alternative OpenCLIPEmbeddings models in rag_chroma_multi_modal/ingest.py:
vectorstore_mmembd = Chroma(
collection_name="multi-modal-rag",
persist_directory=str(re_vectorstore_path),
embedding_function=OpenCLIPEmbeddings(
model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k"
),
)
LLM
The app will retrieve images using multi-modal embeddings, and pass them to Google Gemini.
Environment Setup
Set your GOOGLE_API_KEY environment variable in order to access Gemini.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-gemini-multi-modal
If you want to add this to an existing project, you can just run:
langchain app add rag-gemini-multi-modal
And add the following code to your server.py file:
from rag_gemini_multi_modal import chain as rag_gemini_multi_modal_chain
add_routes(app, rag_gemini_multi_modal_chain, path="/rag-gemini-multi-modal")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-gemini-multi-modal/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-gemini-multi-modal") |
https://python.langchain.com/docs/templates/rag-google-cloud-vertexai-search/ | ## rag-google-cloud-vertexai-search
This template is an application that utilizes Google Vertex AI Search, a machine learning powered search service, and PaLM 2 for Chat (chat-bison). The application uses a Retrieval chain to answer questions based on your documents.
For more context on building RAG applications with Vertex AI Search, check [here](https://cloud.google.com/generative-ai-app-builder/docs/enterprise-search-introduction).
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
Before using this template, please ensure that you are authenticated with Vertex AI Search. See the authentication guide: [here](https://cloud.google.com/generative-ai-app-builder/docs/authentication).
You will also need to create:
* A search application [here](https://cloud.google.com/generative-ai-app-builder/docs/create-engine-es)
* A data store [here](https://cloud.google.com/generative-ai-app-builder/docs/create-data-store-es)
A suitable dataset to test this template with is the Alphabet Earnings Reports, which you can find [here](https://abc.xyz/investor/). The data is also available at `gs://cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs`.
Set the following environment variables:
* `GOOGLE_CLOUD_PROJECT_ID` - Your Google Cloud project ID.
* `DATA_STORE_ID` - The ID of the data store in Vertex AI Search, which is a 36-character alphanumeric value found on the data store details page.
* `MODEL_TYPE` - The model type for Vertex AI Search.
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package rag-google-cloud-vertexai-search
```
If you want to add this to an existing project, you can just run:
```
langchain app add rag-google-cloud-vertexai-search
```
And add the following code to your `server.py` file:
```
from rag_google_cloud_vertexai_search.chain import chain as rag_google_cloud_vertexai_search_chain
add_routes(app, rag_google_cloud_vertexai_search_chain, path="/rag-google-cloud-vertexai-search")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/rag-google-cloud-vertexai-search/playground](http://127.0.0.1:8000/rag-google-cloud-vertexai-search/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-google-cloud-vertexai-search")
``` | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:59.742Z",
"loadedUrl": "https://python.langchain.com/docs/templates/rag-google-cloud-vertexai-search/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/rag-google-cloud-vertexai-search/",
"description": "This template is an application that utilizes Google Vertex AI Search, a machine learning powered search service, and",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "4938",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"rag-google-cloud-vertexai-search\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:59 GMT",
"etag": "W/\"61e698d36a5b2d8845277e4134ea18fa\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "sfo1::wkrjw-1713753959623-537864fcba88"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/rag-google-cloud-vertexai-search/",
"property": "og:url"
},
{
"content": "rag-google-cloud-vertexai-search | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template is an application that utilizes Google Vertex AI Search, a machine learning powered search service, and",
"property": "og:description"
}
],
"title": "rag-google-cloud-vertexai-search | 🦜️🔗 LangChain"
} | rag-google-cloud-vertexai-search
This template is an application that utilizes Google Vertex AI Search, a machine learning powered search service, and PaLM 2 for Chat (chat-bison). The application uses a Retrieval chain to answer questions based on your documents.
For more context on building RAG applications with Vertex AI Search, check here.
Environment Setup
Before using this template, please ensure that you are authenticated with Vertex AI Search. See the authentication guide: here.
You will also need to create:
A search application here
A data store here
A suitable dataset to test this template with is the Alphabet Earnings Reports, which you can find here. The data is also available at gs://cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs.
Set the following environment variables:
GOOGLE_CLOUD_PROJECT_ID - Your Google Cloud project ID.
DATA_STORE_ID - The ID of the data store in Vertex AI Search, which is a 36-character alphanumeric value found on the data store details page.
MODEL_TYPE - The model type for Vertex AI Search.
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package rag-google-cloud-vertexai-search
If you want to add this to an existing project, you can just run:
langchain app add rag-google-cloud-vertexai-search
And add the following code to your server.py file:
from rag_google_cloud_vertexai_search.chain import chain as rag_google_cloud_vertexai_search_chain
add_routes(app, rag_google_cloud_vertexai_search_chain, path="/rag-google-cloud-vertexai-search")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/rag-google-cloud-vertexai-search/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-google-cloud-vertexai-search") |
https://python.langchain.com/docs/templates/research-assistant/ | ## research-assistant
This template implements a version of
[GPT Researcher](https://github.com/assafelovic/gpt-researcher) that you can use as a starting point for a research agent.
## Environment Setup[](#environment-setup "Direct link to Environment Setup")
The default template relies on ChatOpenAI and DuckDuckGo, so you will need the following environment variable:
* `OPENAI_API_KEY`
And to use the Tavily LLM-optimized search engine, you will need:
* `TAVILY_API_KEY`
## Usage[](#usage "Direct link to Usage")
To use this package, you should first have the LangChain CLI installed:
```
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```
langchain app new my-app --package research-assistant
```
If you want to add this to an existing project, you can just run:
```
langchain app add research-assistant
```
And add the following code to your `server.py` file:
```
from research_assistant import chain as research_assistant_chain
add_routes(app, research_assistant_chain, path="/research-assistant")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000/)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) We can access the playground at [http://127.0.0.1:8000/research-assistant/playground](http://127.0.0.1:8000/research-assistant/playground)
We can access the template from code with:
```
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/research-assistant")
```
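A research run can then be started from code. The input key below is an assumption about the chain's schema; adjust it to match the template if it differs.

```
# Hedged usage sketch: ask the remote research assistant for a report.
# The "question" key is an assumption about the chain's input schema.
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/research-assistant")
report = runnable.invoke({"question": "How do open-source RAG templates handle citations?"})
print(report)
```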
* * *
#### Help us out by providing feedback on this documentation page: | null | {
"depth": 1,
"httpStatusCode": 200,
"loadedTime": "2024-04-22T02:45:59.857Z",
"loadedUrl": "https://python.langchain.com/docs/templates/research-assistant/",
"referrerUrl": "https://python.langchain.com/sitemap.xml"
} | {
"author": null,
"canonicalUrl": "https://python.langchain.com/docs/templates/research-assistant/",
"description": "This template implements a version of",
"headers": {
":status": 200,
"accept-ranges": null,
"access-control-allow-origin": "*",
"age": "3758",
"cache-control": "public, max-age=0, must-revalidate",
"content-disposition": "inline; filename=\"research-assistant\"",
"content-length": null,
"content-type": "text/html; charset=utf-8",
"date": "Mon, 22 Apr 2024 02:45:59 GMT",
"etag": "W/\"e5e9706250844f2fb48abcf829ea39ad\"",
"server": "Vercel",
"strict-transport-security": "max-age=63072000",
"x-vercel-cache": "HIT",
"x-vercel-id": "cle1::9tn2v-1713753959695-f0a77d16c595"
},
"jsonLd": null,
"keywords": null,
"languageCode": "en",
"openGraph": [
{
"content": "https://python.langchain.com/img/brand/theme-image.png",
"property": "og:image"
},
{
"content": "https://python.langchain.com/docs/templates/research-assistant/",
"property": "og:url"
},
{
"content": "research-assistant | 🦜️🔗 LangChain",
"property": "og:title"
},
{
"content": "This template implements a version of",
"property": "og:description"
}
],
"title": "research-assistant | 🦜️🔗 LangChain"
} | research-assistant
This template implements a version of
GPT Researcher that you can use as a starting point for a research agent.
Environment Setup
The default template relies on ChatOpenAI and DuckDuckGo, so you will need the following environment variable:
OPENAI_API_KEY
And to use the Tavily LLM-optimized search engine, you will need:
TAVILY_API_KEY
Usage
To use this package, you should first have the LangChain CLI installed:
pip install -U langchain-cli
To create a new LangChain project and install this as the only package, you can do:
langchain app new my-app --package research-assistant
If you want to add this to an existing project, you can just run:
langchain app add research-assistant
And add the following code to your server.py file:
from research_assistant import chain as research_assistant_chain
add_routes(app, research_assistant_chain, path="/research-assistant")
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith here. If you don't have access, you can skip this section
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
If you are inside this directory, then you can spin up a LangServe instance directly by:
This will start the FastAPI app with a server running locally at http://localhost:8000
We can see all templates at http://127.0.0.1:8000/docs We can access the playground at http://127.0.0.1:8000/research-assistant/playground
We can access the template from code with:
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/research-assistant")
Help us out by providing feedback on this documentation page: |