url | markdown | last_modified
---|---|---|
https://github.com/langchain-ai/langchain/blob/master/templates/README.md | # LangChain Templates
LangChain Templates are the easiest and fastest way to build a production-ready LLM application.
These templates serve as a set of reference architectures for a wide variety of popular LLM use cases.
They are all in a standard format which makes it easy to deploy them with [LangServe](https://github.com/langchain-ai/langserve).
🚩 We will be releasing a hosted version of LangServe for one-click deployments of LangChain applications. [Sign up here](https://airtable.com/app0hN6sd93QcKubv/shrAjst60xXa6quV2) to get on the waitlist.
## Quick Start
To use, first install the LangChain CLI.
```shell
pip install -U langchain-cli
```
Next, create a new LangChain project:
```shell
langchain app new my-app
```
This will create a new directory called `my-app` with two folders:
- `app`: This is where LangServe code will live
- `packages`: This is where your chains or agents will live
To pull in an existing template as a package, you first need to go into your new project:
```shell
cd my-app
```
And then you can add a template as a project.
In this getting started guide, we will add a simple `pirate-speak` project.
All this project does is convert user input into pirate speak.
```shell
langchain app add pirate-speak
```
This will pull the specified template into `packages/pirate-speak`.
You will then be prompted if you want to install it.
This is the equivalent of running `pip install -e packages/pirate-speak`.
You should generally accept this (or run that same command afterwards).
We install it with `-e` so that if you modify the template at all (which you likely will), the changes are picked up.
After that, it will ask you if you want to generate route code for this project.
This is code you need to add to your app to start using this chain.
If we accept, we will see the following code generated:
```python
from pirate_speak.chain import chain as pirate_speak_chain
add_routes(app, pirate_speak_chain, path="/pirate-speak")
```
You can now edit the template you pulled down.
You can change the code files in `packages/pirate-speak` to use a different model, different prompt, different logic.
Note that the above code snippet always expects the final chain to be importable as `from pirate_speak.chain import chain`,
so you should either keep the structure of the package similar enough to respect that or be prepared to update that code snippet.
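For illustration, a minimal sketch of what a conforming `packages/pirate-speak/pirate_speak/chain.py` could look like (the prompt and model here are placeholders, not the template's actual contents):
```python
from langchain_community.chat_models import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Placeholder prompt and model -- the real template ships its own
_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Translate everything the user says into pirate speak."),
        ("human", "{text}"),
    ]
)
_model = ChatOpenAI()

# The generated route code imports this object as `from pirate_speak.chain import chain`
chain = _prompt | _model
```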
Once you have done as much of that as you want, you need to modify `app/server.py` so that LangServe can use this project.
Specifically, add the above code snippet to `app/server.py` so that the file looks like:
```python
from fastapi import FastAPI
from langserve import add_routes
from pirate_speak.chain import chain as pirate_speak_chain
app = FastAPI()
add_routes(app, pirate_speak_chain, path="/pirate-speak")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
For this particular application, we will use OpenAI as the LLM, so we need to export our OpenAI API key:
```shell
export OPENAI_API_KEY=sk-...
```
You can then spin up production-ready endpoints, along with a playground, by running:
```shell
langchain serve
```
This now gives a fully deployed LangServe application.
For example, you get a playground out-of-the-box at [http://127.0.0.1:8000/pirate-speak/playground/](http://127.0.0.1:8000/pirate-speak/playground/):
![Screenshot of the LangServe Playground interface with input and output fields demonstrating pirate speak conversion.](docs/playground.png "LangServe Playground Interface")
Access API documentation at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
![Screenshot of the API documentation interface showing available endpoints for the pirate-speak application.](docs/docs.png "API Documentation Interface")
Use the LangServe python or js SDK to interact with the API as if it were a regular [Runnable](https://python.langchain.com/docs/expression_language/).
```python
from langserve import RemoteRunnable
api = RemoteRunnable("http://127.0.0.1:8000/pirate-speak")
api.invoke({"text": "hi"})
```
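Because the remote endpoint behaves like any other Runnable, streaming works as well. A small sketch, assuming the server above is still running:
```python
from langserve import RemoteRunnable

api = RemoteRunnable("http://127.0.0.1:8000/pirate-speak")

# Each chunk is a message chunk from the underlying chat model
for chunk in api.stream({"text": "hi"}):
    print(chunk)
```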
That's it for the quick start!
You have successfully downloaded your first template and deployed it with LangServe.
## Additional Resources
### [Index of Templates](docs/INDEX.md)
Explore the many templates available to use - from advanced RAG to agents.
### [Contributing](docs/CONTRIBUTING.md)
Want to contribute your own template? It's pretty easy! These instructions walk through how to do that.
### [Launching LangServe from a Package](docs/LAUNCHING_PACKAGE.md)
You can also launch LangServe from a package directly (without having to create a new project).
These instructions cover how to do that.
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/xml-agent/README.md |
# xml-agent
This package creates an agent that uses XML syntax to communicate its decisions about which actions to take. It uses Anthropic's Claude models to write the XML and can optionally look things up on the internet using DuckDuckGo.
## Environment Setup
The following environment variable needs to be set:
- `ANTHROPIC_API_KEY`: Required for using Anthropic
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package xml-agent
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add xml-agent
```
And add the following code to your `server.py` file:
```python
from xml_agent import agent_executor as xml_agent_chain
add_routes(app, xml_agent_chain, path="/xml-agent")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/xml-agent/playground](http://127.0.0.1:8000/xml-agent/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/xml-agent")
```
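The agent expects a `question` plus a (possibly empty) `chat_history`, matching the `AgentInput` schema in `xml_agent/agent.py`, so an invocation might look like:
```python
runnable.invoke({"question": "who won the womens world cup in 2023?", "chat_history": []})
```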
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/xml-agent/main.py | from xml_agent.agent import agent_executor
if __name__ == "__main__":
question = "who won the womens world cup in 2023?"
print(agent_executor.invoke({"question": question, "chat_history": []}))
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/xml-agent/xml_agent/__init__.py | from xml_agent.agent import agent_executor
__all__ = ["agent_executor"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/xml-agent/xml_agent/agent.py | from typing import List, Tuple
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_xml
from langchain.tools import DuckDuckGoSearchRun
from langchain.tools.render import render_text_description
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.pydantic_v1 import BaseModel, Field
from xml_agent.prompts import conversational_prompt, parse_output
def _format_chat_history(chat_history: List[Tuple[str, str]]):
buffer = []
for human, ai in chat_history:
buffer.append(HumanMessage(content=human))
buffer.append(AIMessage(content=ai))
return buffer
model = ChatAnthropic(model="claude-3-sonnet-20240229")
tools = [DuckDuckGoSearchRun()]
prompt = conversational_prompt.partial(
tools=render_text_description(tools),
tool_names=", ".join([t.name for t in tools]),
)
llm_with_stop = model.bind(stop=["</tool_input>"])
agent = (
{
"question": lambda x: x["question"],
"agent_scratchpad": lambda x: format_xml(x["intermediate_steps"]),
"chat_history": lambda x: _format_chat_history(x["chat_history"]),
}
| prompt
| llm_with_stop
| parse_output
)
class AgentInput(BaseModel):
question: str
chat_history: List[Tuple[str, str]] = Field(..., extra={"widget": {"type": "chat"}})
agent_executor = AgentExecutor(
agent=agent, tools=tools, verbose=True, handle_parsing_errors=True
).with_types(input_type=AgentInput)
agent_executor = agent_executor | (lambda x: x["output"])
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/xml-agent/xml_agent/prompts.py | from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
template = """You are a helpful assistant. Help the user answer any questions.
You have access to the following tools:
{tools}
In order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags. You will then get back a response in the form <observation></observation>
For example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:
<tool>search</tool><tool_input>weather in SF</tool_input>
<observation>64 degrees</observation>
When you are done, you can respond as normal to the user.
Example 1:
Human: Hi!
Assistant: Hi! How are you?
Human: What is the weather in SF?
Assistant: <tool>search</tool><tool_input>weather in SF</tool_input>
<observation>64 degrees</observation>
It is 64 degrees in SF
Begin!""" # noqa: E501
conversational_prompt = ChatPromptTemplate.from_messages(
[
("system", template),
MessagesPlaceholder(variable_name="chat_history"),
("user", "{question}"),
("ai", "{agent_scratchpad}"),
]
)
def parse_output(message):
text = message.content
if "</tool>" in text:
tool, tool_input = text.split("</tool>")
_tool = tool.split("<tool>")[1]
_tool_input = tool_input.split("<tool_input>")[1]
if "</tool_input>" in _tool_input:
_tool_input = _tool_input.split("</tool_input>")[0]
return AgentAction(tool=_tool, tool_input=_tool_input, log=text)
else:
return AgentFinish(return_values={"output": text}, log=text)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/vertexai-chuck-norris/README.md |
# vertexai-chuck-norris
This template makes jokes about Chuck Norris using Vertex AI PaLM2.
## Environment Setup
First, make sure you have a Google Cloud project with
an active billing account, and have the [gcloud CLI installed](https://cloud.google.com/sdk/docs/install).
Configure [application default credentials](https://cloud.google.com/docs/authentication/provide-credentials-adc):
```shell
gcloud auth application-default login
```
To set a default Google Cloud project to use, run this command and set [the project ID](https://support.google.com/googleapi/answer/7014113?hl=en) of the project you want to use:
```shell
gcloud config set project [PROJECT-ID]
```
Enable the [Vertex AI API](https://console.cloud.google.com/apis/library/aiplatform.googleapis.com) for the project:
```shell
gcloud services enable aiplatform.googleapis.com
```
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package vertexai-chuck-norris
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add vertexai-chuck-norris
```
And add the following code to your `server.py` file:
```python
from vertexai_chuck_norris.chain import chain as vertexai_chuck_norris_chain
add_routes(app, vertexai_chuck_norris_chain, path="/vertexai-chuck-norris")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/vertexai-chuck-norris/playground](http://127.0.0.1:8000/vertexai-chuck-norris/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/vertexai-chuck-norris")
```
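The underlying prompt takes a single `text` variable (see `chain.py` below), so, for example:
```python
# "dinosaurs" is just an example topic for the {text} slot in the joke prompt
runnable.invoke({"text": "dinosaurs"})
```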
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/vertexai-chuck-norris/vertexai_chuck_norris/chain.py | from langchain_community.chat_models import ChatVertexAI
from langchain_core.prompts import ChatPromptTemplate
_prompt = ChatPromptTemplate.from_template(
"Tell me a joke about Chuck Norris and {text}"
)
_model = ChatVertexAI()
# if you update this, you MUST also update ../pyproject.toml
# with the new `tool.langserve.export_attr`
chain = _prompt | _model
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/summarize-anthropic/README.md |
# summarize-anthropic
This template uses Anthropic's `claude-3-sonnet-20240229` to summarize long documents.
It leverages a large context window of 200k tokens, allowing for summarization of documents over 100 pages.
You can see the summarization prompt in `chain.py`.
## Environment Setup
Set the `ANTHROPIC_API_KEY` environment variable to access the Anthropic models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package summarize-anthropic
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add summarize-anthropic
```
And add the following code to your `server.py` file:
```python
from summarize_anthropic import chain as summarize_anthropic_chain
add_routes(app, summarize_anthropic_chain, path="/summarize-anthropic")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/summarize-anthropic/playground](http://127.0.0.1:8000/summarize-anthropic/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/summarize-anthropic")
```
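The chain expects the document under a `text` key (as in the notebook below), so a call from code looks roughly like:
```python
# `long_document` stands in for whatever text you loaded (e.g. a paper or blog post)
long_document = "..."
runnable.invoke({"text": long_document})
```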
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/summarize-anthropic/summarize_anthropic.ipynb | {
"cells": [
{
"cell_type": "markdown",
"id": "b4ea3722",
"metadata": {},
"source": [
"## Document Loading\n",
"\n",
"Load a blog post on agents."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4162356-c370-43d7-b34a-4e6af7a1e4c9",
"metadata": {},
"outputs": [],
"source": [
"! pip install pdf2image"
]
},
{
"cell_type": "markdown",
"id": "6ff363da",
"metadata": {},
"source": [
"Load academic papers -"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "d79e191a",
"metadata": {},
"outputs": [],
"source": [
"import arxiv\n",
"from langchain_community.chat_models import ChatAnthropic\n",
"from langchain_community.document_loaders import ArxivLoader, UnstructuredPDFLoader\n",
"\n",
"# Load a paper to use\n",
"paper = next(arxiv.Search(query=\"Visual Instruction Tuning\").results())\n",
"paper.download_pdf(filename=\"downloaded-paper.pdf\")\n",
"loader = UnstructuredPDFLoader(\"downloaded-paper.pdf\")\n",
"doc = loader.load()[0]"
]
},
{
"cell_type": "markdown",
"id": "db964a34",
"metadata": {},
"source": [
"Or try loading blog posts -"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "b2f10100",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import WebBaseLoader\n",
"\n",
"loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n",
"text = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "361fcf5c",
"metadata": {},
"source": [
"## Run template\n",
"\n",
"In `server.py`, set -\n",
"```\n",
"add_routes(app, chain_rag_conv, path=\"/summarize-anthropic\")\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "3c673b53",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\" Here is my attempt to complete the requested tasks:\\n\\n<kindergarten_abstract>\\nThe scientists made a robot friend that can see pictures and talk about them. First they let a big talking robot make up fun conversations about pictures. Then they taught their robot friend to have conversations too. Their robot friend got pretty good at seeing pictures and talking about them!\\n</kindergarten_abstract>\\n\\n<moosewood_methods>\\nVisual Instruction Tuning\\nRecipe adapted from Haotian Liu et al.\\n\\nIngredients:\\n- 1 large language model (we used GPT-4)\\n- 158,000 images with text descriptions \\n- 1 vision model (we used CLIP)\\n- 1 large multimodal model (we used LLaMA)\\n\\nInstructions:\\n1. Have the GPT-4 model make up conversations, long descriptions, and complex stories about the images, using only the text descriptions as hints. \\n2. Connect the vision model and language model together into one big multimodal model. This will be your robot friend!\\n3. Show the robot friend examples of the conversations GPT-4 made up.\\n4. Let the robot friend practice having conversations about images, using what it learned from GPT-4.\\n5. Test how well it can see and talk about new images!\\n</moosewood_methods>\\n\\n<homer_results>\\nOh Muse, sing of scientists wise, \\nWho built a robot with ears and eyes, \\nTaught by a speaker most eloquent, \\nThey molded silicon assistant.\\n\\nThis visual machine could see and tell \\nOf images with knowledge it knew well. \\nWhen humans speak in words precise, \\nThe robot answers concise and wise.\\n\\nThough simpler than human true creation, \\nIt showed the path for innovation. \\nMore data, bigger models in time, \\nMay lead to an assistant sublime.\\n</homer_results>\\n\\n<grouchy_critique>\\nOh great, another computer vision model. What is it this time, some overhyped multimodal beast trained on mountains of compute? Bah! Back in my day we built models with only a handful of pixels and a Commodore 64. And they worked just fine, not like these newfangled neural nets that gobble up GPUs like candy.\\n\\nLet's see, they got GPT-4 to hallucinate training data from captions? Well that hardly seems rigorous. Those mechaturkeys will fabricate anything if you let them. And they didn't even collect real human annotations? Training multimodal models is hard enough without using made-up data. \\n\\nI'm sure their LaVa MuDdLe model does decently on narrow tasks, but don't be surprised when it fails catastrophically in the real world. These things may seem clever, but they're as dumb as a bag of rocks under the hood. Mark my words, we should keep them far away from any mission critical systems, or we'll regret it!\\n\\nBah! Call me when we build real intelligence, not these statistical hackjobs.\\n</grouchy_critique>\""
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"summarization = RemoteRunnable(\"http://localhost:8000/summarize-anthropic\")\n",
"summarization.invoke({\"text\": doc.page_content})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/summarize-anthropic/summarize_anthropic/__init__.py | from summarize_anthropic.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/summarize-anthropic/summarize_anthropic/chain.py | from langchain import hub
from langchain_anthropic import ChatAnthropic
from langchain_core.output_parsers import StrOutputParser
# Create chain
prompt = hub.pull("hwchase17/anthropic-paper-qa")
model = ChatAnthropic(model="claude-3-sonnet-20240229", max_tokens=4096)
chain = prompt | model | StrOutputParser()
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/stepback-qa-prompting/README.md | # stepback-qa-prompting
This template replicates the "Step-Back" prompting technique that improves performance on complex questions by first asking a "step back" question.
This technique can be combined with regular question-answering applications by doing retrieval on both the original and step-back question.
Read more about this in the paper [here](https://arxiv.org/abs/2310.06117) and in an excellent blog post by Cobus Greyling [here](https://cobusgreyling.medium.com/a-new-prompt-engineering-technique-has-been-introduced-called-step-back-prompting-b00e8954cacb).
In this template, we modify the prompts slightly to work better with chat models.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package stepback-qa-prompting
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add stepback-qa-prompting
```
And add the following code to your `server.py` file:
```python
from stepback_qa_prompting.chain import chain as stepback_qa_prompting_chain
add_routes(app, stepback_qa_prompting_chain, path="/stepback-qa-prompting")
```
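Once the route is mounted, the chain only needs a `question` key; for a quick local check you can mirror `main.py` from this template:
```python
from stepback_qa_prompting.chain import chain

chain.invoke({"question": "was chatgpt around while trump was president?"})
```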
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/stepback-qa-prompting/playground](http://127.0.0.1:8000/stepback-qa-prompting/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/stepback-qa-prompting")
``` | Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/stepback-qa-prompting/main.py | from stepback_qa_prompting.chain import chain
if __name__ == "__main__":
chain.invoke({"question": "was chatgpt around while trump was president?"})
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/stepback-qa-prompting/stepback_qa_prompting/chain.py | from langchain_community.chat_models import ChatOpenAI
from langchain_community.utilities import DuckDuckGoSearchAPIWrapper
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate
from langchain_core.runnables import RunnableLambda
search = DuckDuckGoSearchAPIWrapper(max_results=4)
def retriever(query):
return search.run(query)
# Few Shot Examples
examples = [
{
"input": "Could the members of The Police perform lawful arrests?",
"output": "what can the members of The Police do?",
},
{
"input": "Jan Sindel’s was born in what country?",
"output": "what is Jan Sindel’s personal history?",
},
]
# We now transform these to example messages
example_prompt = ChatPromptTemplate.from_messages(
[
("human", "{input}"),
("ai", "{output}"),
]
)
few_shot_prompt = FewShotChatMessagePromptTemplate(
example_prompt=example_prompt,
examples=examples,
)
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You are an expert at world knowledge. Your task is to step back "
"and paraphrase a question to a more generic step-back question, which "
"is easier to answer. Here are a few examples:",
),
# Few shot examples
few_shot_prompt,
# New question
("user", "{question}"),
]
)
question_gen = prompt | ChatOpenAI(temperature=0) | StrOutputParser()
response_prompt_template = """You are an expert of world knowledge. I am going to ask you a question. Your response should be comprehensive and not contradicted with the following context if they are relevant. Otherwise, ignore them if they are not relevant.
{normal_context}
{step_back_context}
Original Question: {question}
Answer:""" # noqa: E501
response_prompt = ChatPromptTemplate.from_template(response_prompt_template)
chain = (
{
# Retrieve context using the normal question
"normal_context": RunnableLambda(lambda x: x["question"]) | retriever,
# Retrieve context using the step-back question
"step_back_context": question_gen | retriever,
# Pass on the question
"question": lambda x: x["question"],
}
| response_prompt
| ChatOpenAI(temperature=0)
| StrOutputParser()
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-research-assistant/README.md | # sql-research-assistant
This package does research over a SQL database.
## Usage
This package relies on multiple models, which have the following dependencies:
- OpenAI: set the `OPENAI_API_KEY` environment variable
- Ollama: [install and run Ollama](https://python.langchain.com/docs/integrations/chat/ollama)
- llama2 (on Ollama): `ollama pull llama2` (otherwise you will get 404 errors from Ollama)
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package sql-research-assistant
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add sql-research-assistant
```
And add the following code to your `server.py` file:
```python
from sql_research_assistant import chain as sql_research_assistant_chain
add_routes(app, sql_research_assistant_chain, path="/sql-research-assistant")
```
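For a quick local check, the chain takes a single `question` key, mirroring the `__main__` block in `sql_research_assistant/chain.py` below:
```python
from sql_research_assistant import chain

chain.invoke({"question": "who is typically older: point guards or centers?"})
```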
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/sql-research-assistant/playground](http://127.0.0.1:8000/sql-research-assistant/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/sql-research-assistant")
``` | Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-research-assistant/sql_research_assistant/__init__.py | from sql_research_assistant.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-research-assistant/sql_research_assistant/chain.py | from langchain.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnablePassthrough
from sql_research_assistant.search.web import chain as search_chain
from sql_research_assistant.writer import chain as writer_chain
chain_notypes = (
RunnablePassthrough().assign(research_summary=search_chain) | writer_chain
)
class InputType(BaseModel):
question: str
chain = chain_notypes.with_types(input_type=InputType)
if __name__ == "__main__":
print(
chain.invoke({"question": "who is typically older: point guards or centers?"})
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-research-assistant/sql_research_assistant/writer.py | from langchain_community.chat_models import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import ConfigurableField
WRITER_SYSTEM_PROMPT = "You are an AI critical thinker research assistant. Your sole purpose is to write well written, critically acclaimed, objective and structured reports on given text." # noqa: E501
# Report prompts from https://github.com/assafelovic/gpt-researcher/blob/master/gpt_researcher/master/prompts.py
RESEARCH_REPORT_TEMPLATE = """Information:
--------
{research_summary}
--------
Using the above information, answer the following question or topic: "{question}" in a detailed report -- \
The report should focus on the answer to the question, should be well structured, informative, \
in depth, with facts and numbers if available and a minimum of 1,200 words.
You should strive to write the report as long as you can using all relevant and necessary information provided.
You must write the report with markdown syntax.
You MUST determine your own concrete and valid opinion based on the given information. Do NOT deter to general and meaningless conclusions.
Write all used source urls at the end of the report, and make sure to not add duplicated sources, but only one reference for each.
You must write the report in apa format.
Please do your best, this is very important to my career.""" # noqa: E501
RESOURCE_REPORT_TEMPLATE = """Information:
--------
{research_summary}
--------
Based on the above information, generate a bibliography recommendation report for the following question or topic: "{question}". \
The report should provide a detailed analysis of each recommended resource, explaining how each source can contribute to finding answers to the research question. \
Focus on the relevance, reliability, and significance of each source. \
Ensure that the report is well-structured, informative, in-depth, and follows Markdown syntax. \
Include relevant facts, figures, and numbers whenever available. \
The report should have a minimum length of 1,200 words.
Please do your best, this is very important to my career.""" # noqa: E501
OUTLINE_REPORT_TEMPLATE = """Information:
--------
{research_summary}
--------
Using the above information, generate an outline for a research report in Markdown syntax for the following question or topic: "{question}". \
The outline should provide a well-structured framework for the research report, including the main sections, subsections, and key points to be covered. \
The research report should be detailed, informative, in-depth, and a minimum of 1,200 words. \
Use appropriate Markdown syntax to format the outline and ensure readability.
Please do your best, this is very important to my career.""" # noqa: E501
model = ChatOpenAI(temperature=0)
prompt = ChatPromptTemplate.from_messages(
[
("system", WRITER_SYSTEM_PROMPT),
("user", RESEARCH_REPORT_TEMPLATE),
]
).configurable_alternatives(
ConfigurableField("report_type"),
default_key="research_report",
resource_report=ChatPromptTemplate.from_messages(
[
("system", WRITER_SYSTEM_PROMPT),
("user", RESOURCE_REPORT_TEMPLATE),
]
),
outline_report=ChatPromptTemplate.from_messages(
[
("system", WRITER_SYSTEM_PROMPT),
("user", OUTLINE_REPORT_TEMPLATE),
]
),
)
chain = prompt | model | StrOutputParser()
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-research-assistant/sql_research_assistant/search/sql.py | from pathlib import Path
from langchain.memory import ConversationBufferMemory
from langchain.pydantic_v1 import BaseModel
from langchain_community.chat_models import ChatOllama, ChatOpenAI
from langchain_community.utilities import SQLDatabase
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
# Add the LLM downloaded from Ollama
ollama_llm = "llama2"
llm = ChatOllama(model=ollama_llm)
db_path = Path(__file__).parent / "nba_roster.db"
rel = db_path.relative_to(Path.cwd())
db_string = f"sqlite:///{rel}"
db = SQLDatabase.from_uri(db_string, sample_rows_in_table_info=2)
def get_schema(_):
return db.get_table_info()
def run_query(query):
return db.run(query)
# Prompt
template = """Based on the table schema below, write a SQL query that would answer the user's question:
{schema}
Question: {question}
SQL Query:""" # noqa: E501
prompt = ChatPromptTemplate.from_messages(
[
("system", "Given an input question, convert it to a SQL query. No pre-amble."),
("human", template),
]
)
memory = ConversationBufferMemory(return_messages=True)
# Chain to query with memory
sql_chain = (
RunnablePassthrough.assign(
schema=get_schema,
)
| prompt
| llm.bind(stop=["\nSQLResult:"])
| StrOutputParser()
| (lambda x: x.split("\n\n")[0])
)
# Chain to answer
template = """Based on the table schema below, question, sql query, and sql response, write a natural language response:
{schema}
Question: {question}
SQL Query: {query}
SQL Response: {response}""" # noqa: E501
prompt_response = ChatPromptTemplate.from_messages(
[
(
"system",
"Given an input question and SQL response, convert it to a natural "
"language answer. No pre-amble.",
),
("human", template),
]
)
# Supply the input types to the prompt
class InputType(BaseModel):
question: str
sql_answer_chain = (
RunnablePassthrough.assign(query=sql_chain).with_types(input_type=InputType)
| RunnablePassthrough.assign(
schema=get_schema,
response=lambda x: db.run(x["query"]),
)
| RunnablePassthrough.assign(
answer=prompt_response | ChatOpenAI() | StrOutputParser()
)
| (lambda x: f"Question: {x['question']}\n\nAnswer: {x['answer']}")
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-research-assistant/sql_research_assistant/search/web.py | import json
from typing import Any
import requests
from bs4 import BeautifulSoup
from langchain_community.chat_models import ChatOpenAI
from langchain_community.utilities import DuckDuckGoSearchAPIWrapper
from langchain_core.messages import SystemMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import (
Runnable,
RunnableLambda,
RunnableParallel,
RunnablePassthrough,
)
from sql_research_assistant.search.sql import sql_answer_chain
RESULTS_PER_QUESTION = 3
ddg_search = DuckDuckGoSearchAPIWrapper()
def scrape_text(url: str):
# Send a GET request to the webpage
try:
response = requests.get(url)
# Check if the request was successful
if response.status_code == 200:
# Parse the content of the request with BeautifulSoup
soup = BeautifulSoup(response.text, "html.parser")
# Extract all text from the webpage
page_text = soup.get_text(separator=" ", strip=True)
# Print the extracted text
return page_text
else:
return f"Failed to retrieve the webpage: Status code {response.status_code}"
except Exception as e:
print(e)
return f"Failed to retrieve the webpage: {e}"
def web_search(query: str, num_results: int):
results = ddg_search.results(query, num_results)
return [r["link"] for r in results]
SEARCH_PROMPT = ChatPromptTemplate.from_messages(
[
("system", "{agent_prompt}"),
(
"user",
"Write 3 google search queries to search online that form an "
"objective opinion from the following: {question}\n"
"You must respond with a list of strings in the following format: "
'["query 1", "query 2", "query 3"].',
),
]
)
AUTO_AGENT_INSTRUCTIONS = """
This task involves researching a given topic, regardless of its complexity or the availability of a definitive answer. The research is conducted by a specific agent, defined by its type and role, with each agent requiring distinct instructions.
Agent
The agent is determined by the field of the topic and the specific name of the agent that could be utilized to research the topic provided. Agents are categorized by their area of expertise, and each agent type is associated with a corresponding emoji.
examples:
task: "should I invest in apple stocks?"
response:
{
"agent": "💰 Finance Agent",
"agent_role_prompt: "You are a seasoned finance analyst AI assistant. Your primary goal is to compose comprehensive, astute, impartial, and methodically arranged financial reports based on provided data and trends."
}
task: "could reselling sneakers become profitable?"
response:
{
"agent": "📈 Business Analyst Agent",
"agent_role_prompt": "You are an experienced AI business analyst assistant. Your main objective is to produce comprehensive, insightful, impartial, and systematically structured business reports based on provided business data, market trends, and strategic analysis."
}
task: "what are the most interesting sites in Tel Aviv?"
response:
{
"agent: "🌍 Travel Agent",
"agent_role_prompt": "You are a world-travelled AI tour guide assistant. Your main purpose is to draft engaging, insightful, unbiased, and well-structured travel reports on given locations, including history, attractions, and cultural insights."
}
""" # noqa: E501
CHOOSE_AGENT_PROMPT = ChatPromptTemplate.from_messages(
[SystemMessage(content=AUTO_AGENT_INSTRUCTIONS), ("user", "task: {task}")]
)
SUMMARY_TEMPLATE = """{text}
-----------
Using the above text, answer in short the following question:
> {question}
-----------
If the question cannot be answered using the text, simply summarize the text. Include all factual information, numbers, stats etc if available.""" # noqa: E501
SUMMARY_PROMPT = ChatPromptTemplate.from_template(SUMMARY_TEMPLATE)
scrape_and_summarize: Runnable[Any, Any] = (
RunnableParallel(
{
"question": lambda x: x["question"],
"text": lambda x: scrape_text(x["url"])[:10000],
"url": lambda x: x["url"],
}
)
| RunnableParallel(
{
"summary": SUMMARY_PROMPT | ChatOpenAI(temperature=0) | StrOutputParser(),
"url": lambda x: x["url"],
}
)
| RunnableLambda(lambda x: f"Source Url: {x['url']}\nSummary: {x['summary']}")
)
def load_json(s):
try:
return json.loads(s)
except Exception:
return {}
search_query = SEARCH_PROMPT | ChatOpenAI(temperature=0) | StrOutputParser() | load_json
choose_agent = (
CHOOSE_AGENT_PROMPT | ChatOpenAI(temperature=0) | StrOutputParser() | load_json
)
get_search_queries = (
RunnablePassthrough().assign(
agent_prompt=RunnableParallel({"task": lambda x: x})
| choose_agent
| (lambda x: x.get("agent_role_prompt"))
)
| search_query
)
chain = (
get_search_queries
| (lambda x: [{"question": q} for q in x])
| sql_answer_chain.map()
| (lambda x: "\n\n".join(x))
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-pgvector/README.md | # sql-pgvector
This template enables a user to combine PostgreSQL with semantic search / RAG, using `pgvector`.
It uses the [PGVector](https://github.com/pgvector/pgvector) extension as shown in the [RAG empowered SQL cookbook](https://github.com/langchain-ai/langchain/blob/master/cookbook/retrieval_in_sql.ipynb).
## Environment Setup
If you are using `ChatOpenAI` as your LLM, make sure the `OPENAI_API_KEY` is set in your environment. You can change both the LLM and embeddings model inside `chain.py`.
You can also configure the following environment variables for use by the template (defaults are in parentheses):
- `POSTGRES_USER` (postgres)
- `POSTGRES_PASSWORD` (test)
- `POSTGRES_DB` (vectordb)
- `POSTGRES_HOST` (localhost)
- `POSTGRES_PORT` (5432)
If you don't have a postgres instance, you can run one locally in docker:
```bash
docker run \
--name some-postgres \
-e POSTGRES_PASSWORD=test \
-e POSTGRES_USER=postgres \
-e POSTGRES_DB=vectordb \
-p 5432:5432 \
postgres:16
```
And to start again later, use the `--name` defined above:
```bash
docker start some-postgres
```
### PostgreSQL Database setup
Apart from having the `pgvector` extension enabled, you will need to do some setup before being able to run semantic search within your SQL queries.
In order to run RAG over your PostgreSQL database, you will need to generate embeddings for the specific columns you want to search.
This process is covered in the [RAG empowered SQL cookbook](https://github.com/langchain-ai/langchain/blob/master/cookbook/retrieval_in_sql.ipynb), but the overall approach consists of:
1. Querying for unique values in the column
2. Generating embeddings for those values
3. Storing the embeddings in a separate column or in an auxiliary table
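For illustration, a rough sketch of those three steps in Python, assuming a hypothetical `Track` table with a `Name` column and an `embeddings` vector column (this mirrors the commented-out ingest code in `chain.py` below; adapt the query and column names to your own schema):
```python
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_uri("postgresql+psycopg2://postgres:test@localhost:5432/vectordb")
embeddings_model = OpenAIEmbeddings()

# 1. Query for the unique values in the column you want to search semantically
names = [row[0] for row in eval(db.run('SELECT DISTINCT "Name" FROM "Track"'))]

# 2. Generate embeddings for those values
vectors = embeddings_model.embed_documents(names)

# 3. Store each embedding back on the row (or in an auxiliary table)
for name, vector in zip(names, vectors):
    value = name.replace("'", "''")  # escape single quotes for the SQL literal
    db.run(
        f'UPDATE "Track" SET "embeddings" = ARRAY{vector} WHERE "Name" = ' + f"'{value}'"
    )
```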
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package sql-pgvector
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add sql-pgvector
```
And add the following code to your `server.py` file:
```python
from sql_pgvector import chain as sql_pgvector_chain
add_routes(app, sql_pgvector_chain, path="/sql-pgvector")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/sql-pgvector/playground](http://127.0.0.1:8000/sql-pgvector/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/sql-pgvector")
```
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-pgvector/sql_pgvector/__init__.py | from sql_pgvector.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-pgvector/sql_pgvector/chain.py | import os
import re
from langchain.sql_database import SQLDatabase
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from sql_pgvector.prompt_templates import final_template, postgresql_template
"""
IMPORTANT: For using this template, you will need to
follow the setup steps in the readme file
"""
if os.environ.get("OPENAI_API_KEY", None) is None:
raise Exception("Missing `OPENAI_API_KEY` environment variable")
postgres_user = os.environ.get("POSTGRES_USER", "postgres")
postgres_password = os.environ.get("POSTGRES_PASSWORD", "test")
postgres_db = os.environ.get("POSTGRES_DB", "vectordb")
postgres_host = os.environ.get("POSTGRES_HOST", "localhost")
postgres_port = os.environ.get("POSTGRES_PORT", "5432")
# Connect to DB
# Replace with your own
CONNECTION_STRING = (
f"postgresql+psycopg2://{postgres_user}:{postgres_password}"
f"@{postgres_host}:{postgres_port}/{postgres_db}"
)
db = SQLDatabase.from_uri(CONNECTION_STRING)
# Choose LLM and embeddings model
llm = ChatOpenAI(temperature=0)
embeddings_model = OpenAIEmbeddings()
# # Ingest code - you will need to run this the first time
# # Insert your query e.g. "SELECT Name FROM Track"
# column_to_embed = db.run('replace-with-your-own-select-query')
# column_values = [s[0] for s in eval(column_to_embed)]
# embeddings = embeddings_model.embed_documents(column_values)
# for i in range(len(embeddings)):
# value = column_values[i].replace("'", "''")
# embedding = embeddings[i]
# # Replace with your own SQL command for your column and table.
# sql_command = (
# f'UPDATE "Track" SET "embeddings" = ARRAY{embedding} WHERE "Name" ='
# + f"'{value}'"
# )
# db.run(sql_command)
# -----------------
# Define functions
# -----------------
def get_schema(_):
return db.get_table_info()
def run_query(query):
return db.run(query)
def replace_brackets(match):
words_inside_brackets = match.group(1).split(", ")
embedded_words = [
str(embeddings_model.embed_query(word)) for word in words_inside_brackets
]
return "', '".join(embedded_words)
def get_query(query):
sql_query = re.sub(r"\[([\w\s,]+)\]", replace_brackets, query)
return sql_query
# -----------------------
# Now we create the chain
# -----------------------
query_generation_prompt = ChatPromptTemplate.from_messages(
[("system", postgresql_template), ("human", "{question}")]
)
sql_query_chain = (
RunnablePassthrough.assign(schema=get_schema)
| query_generation_prompt
| llm.bind(stop=["\nSQLResult:"])
| StrOutputParser()
)
final_prompt = ChatPromptTemplate.from_messages(
[("system", final_template), ("human", "{question}")]
)
full_chain = (
RunnablePassthrough.assign(query=sql_query_chain)
| RunnablePassthrough.assign(
schema=get_schema,
response=RunnableLambda(lambda x: db.run(get_query(x["query"]))),
)
| final_prompt
| llm
)
class InputType(BaseModel):
question: str
chain = full_chain.with_types(input_type=InputType)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-pgvector/sql_pgvector/prompt_templates.py | postgresql_template = (
"You are a Postgres expert. Given an input question, first create a "
"syntactically correct Postgres query to run, then look at the results "
"of the query and return the answer to the input question.\n"
"Unless the user specifies in the question a specific number of "
"examples to obtain, query for at most 5 results using the LIMIT clause "
"as per Postgres. You can order the results to return the most "
"informative data in the database.\n"
"Never query for all columns from a table. You must query only the "
"columns that are needed to answer the question. Wrap each column name "
'in double quotes (") to denote them as delimited identifiers.\n'
"Pay attention to use only the column names you can see in the tables "
"below. Be careful to not query for columns that do not exist. Also, "
"pay attention to which column is in which table.\n"
"Pay attention to use date('now') function to get the current date, "
'if the question involves "today".\n\n'
"You can use an extra extension which allows you to run semantic "
"similarity using <-> operator on tables containing columns named "
'"embeddings".\n'
"<-> operator can ONLY be used on embeddings vector columns.\n"
"The embeddings value for a given row typically represents the semantic "
"meaning of that row.\n"
"The vector represents an embedding representation of the question, "
"given below. \n"
"Do NOT fill in the vector values directly, but rather specify a "
"`[search_word]` placeholder, which should contain the word that would "
"be embedded for filtering.\n"
"For example, if the user asks for songs about 'the feeling of "
"loneliness' the query could be:\n"
'\'SELECT "[whatever_table_name]"."SongName" FROM '
'"[whatever_table_name]" ORDER BY "embeddings" <-> \'[loneliness]\' '
"LIMIT 5'\n\n"
"Use the following format:\n\n"
"Question: <Question here>\n"
"SQLQuery: <SQL Query to run>\n"
"SQLResult: <Result of the SQLQuery>\n"
"Answer: <Final answer here>\n\n"
"Only use the following tables:\n\n"
"{schema}\n"
)
final_template = (
"Based on the table schema below, question, sql query, and sql response, "
"write a natural language response:\n"
"{schema}\n\n"
"Question: {question}\n"
"SQL Query: {query}\n"
"SQL Response: {response}"
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-ollama/README.md | # sql-ollama
This template enables a user to interact with a SQL database using natural language.
It uses [Zephyr-7b](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) via [Ollama](https://ollama.ai/library/zephyr) to run inference locally on a Mac laptop.
## Environment Setup
Before using this template, you need to set up Ollama and a SQL database.
1. Follow instructions [here](https://python.langchain.com/docs/integrations/chat/ollama) to download Ollama.
2. Download your LLM of interest:
* This package uses `zephyr`: `ollama pull zephyr`
* You can choose from many LLMs [here](https://ollama.ai/library)
3. This package includes an example DB of 2023 NBA rosters. You can see instructions to build this DB [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb).
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package sql-ollama
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add sql-ollama
```
And add the following code to your `server.py` file:
```python
from sql_ollama import chain as sql_ollama_chain
add_routes(app, sql_ollama_chain, path="/sql-ollama")
```
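For a quick local check (assuming Ollama is running and `zephyr` has been pulled), the chain takes a single `question` key, as in the notebook below:
```python
from sql_ollama import chain

chain.invoke({"question": "What team is Klay Thompson on?"})
```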
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/sql-ollama/playground](http://127.0.0.1:8000/sql-ollama/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/sql-ollama")
``` | Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-ollama/sql-ollama.ipynb | {
"cells": [
{
"cell_type": "markdown",
"id": "d55f5fd9-21eb-433d-9259-0a588d9197c0",
"metadata": {},
"source": [
"## Run Template\n",
"\n",
"In `server.py`, set -\n",
"```\n",
"add_routes(app, chain, path=\"/sql_ollama\")\n",
"```\n",
"\n",
"This template includes an example DB of 2023 NBA rosters.\n",
"\n",
"We can ask questions related to NBA players. "
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "50c27e82-92d8-4fa1-8bc4-b6544e59773d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' Sure! Here\\'s the natural language response based on the input:\\n\\n\"Klay Thompson plays for the Golden State Warriors.\"')"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"sql_app = RemoteRunnable(\"http://0.0.0.0:8001/sql_ollama\")\n",
"sql_app.invoke({\"question\": \"What team is Klay Thompson on?\"})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-ollama/sql_ollama/__init__.py | from sql_ollama.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-ollama/sql_ollama/chain.py | from pathlib import Path
from langchain.memory import ConversationBufferMemory
from langchain_community.chat_models import ChatOllama
from langchain_community.utilities import SQLDatabase
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
# Add the LLM downloaded from Ollama
ollama_llm = "zephyr"
llm = ChatOllama(model=ollama_llm)
db_path = Path(__file__).parent / "nba_roster.db"
rel = db_path.relative_to(Path.cwd())
db_string = f"sqlite:///{rel}"
db = SQLDatabase.from_uri(db_string, sample_rows_in_table_info=0)
def get_schema(_):
return db.get_table_info()
def run_query(query):
return db.run(query)
# Prompt
template = """Based on the table schema below, write a SQL query that would answer the user's question:
{schema}
Question: {question}
SQL Query:""" # noqa: E501
prompt = ChatPromptTemplate.from_messages(
[
("system", "Given an input question, convert it to a SQL query. No pre-amble."),
MessagesPlaceholder(variable_name="history"),
("human", template),
]
)
memory = ConversationBufferMemory(return_messages=True)
# Chain to query with memory
sql_chain = (
RunnablePassthrough.assign(
schema=get_schema,
history=RunnableLambda(lambda x: memory.load_memory_variables(x)["history"]),
)
| prompt
| llm.bind(stop=["\nSQLResult:"])
| StrOutputParser()
)
def save(input_output):
output = {"output": input_output.pop("output")}
memory.save_context(input_output, output)
return output["output"]
sql_response_memory = RunnablePassthrough.assign(output=sql_chain) | save
# Chain to answer
template = """Based on the table schema below, question, sql query, and sql response, write a natural language response:
{schema}
Question: {question}
SQL Query: {query}
SQL Response: {response}""" # noqa: E501
prompt_response = ChatPromptTemplate.from_messages(
[
(
"system",
"Given an input question and SQL response, convert it to a natural "
"language answer. No pre-amble.",
),
("human", template),
]
)
# Supply the input types to the prompt
class InputType(BaseModel):
question: str
chain = (
RunnablePassthrough.assign(query=sql_response_memory).with_types(
input_type=InputType
)
| RunnablePassthrough.assign(
schema=get_schema,
response=lambda x: db.run(x["query"]),
)
| prompt_response
| llm
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-llamacpp/README.md |
# sql-llamacpp
This template enables a user to interact with a SQL database using natural language.
It uses [Mistral-7b](https://mistral.ai/news/announcing-mistral-7b/) via [llama.cpp](https://github.com/ggerganov/llama.cpp) to run inference locally on a Mac laptop.
## Environment Setup
To set up the environment, use the following steps:
```shell
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-MacOSX-arm64.sh
bash Miniforge3-MacOSX-arm64.sh
conda create -n llama python=3.9.16
conda activate llama
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
```
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package sql-llamacpp
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add sql-llamacpp
```
And add the following code to your `server.py` file:
```python
from sql_llamacpp import chain as sql_llamacpp_chain
add_routes(app, sql_llamacpp_chain, path="/sql-llamacpp")
```
The package will download the Mistral-7b model from [here](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF). You can select other files and specify their download path (browse [here](https://huggingface.co/TheBloke)).
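
If you want a different GGUF build, the `file_name` and `url` variables at the top of `sql_llamacpp/chain.py` are the only things to change. A sketch, where the specific quantization file is illustrative:

```python
# sql_llamacpp/chain.py -- swap in a different GGUF build (file name/URL are illustrative)
file_name = "mistral-7b-instruct-v0.1.Q5_K_M.gguf"
url = (
    "https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/"
    "mistral-7b-instruct-v0.1.Q5_K_M.gguf"
)
```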
This package includes an example DB of 2023 NBA rosters. You can see instructions to build this DB [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb).
(Optional) Configure LangSmith for tracing, monitoring and debugging LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
You can access the playground at [http://127.0.0.1:8000/sql-llamacpp/playground](http://127.0.0.1:8000/sql-llamacpp/playground)
You can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/sql-llamacpp")
```
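
The chain expects a `question` key; for example, against the bundled NBA roster DB:

```python
runnable.invoke({"question": "What team is Klay Thompson on?"})
```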
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-llamacpp/sql-llamacpp.ipynb | {
"cells": [
{
"cell_type": "markdown",
"id": "a0314df0-da99-4086-a96f-b14df05b3362",
"metadata": {},
"source": [
"## Run Template\n",
"\n",
"In `server.py`, set -\n",
"```\n",
"add_routes(app, chain, path=\"/sql_llamacpp\")\n",
"```\n",
"\n",
"This template includes an example DB of 2023 NBA rosters.\n",
"\n",
"We can ask questions related to NBA players. "
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "ff5869c6-2065-48f3-bb43-52a515968276",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\nNatural Language Response: Klay Thompson plays for the Golden State Warriors.'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"sql_app = RemoteRunnable(\"http://0.0.0.0:8001/sql_llamacpp\")\n",
"sql_app.invoke({\"question\": \"What team is Klay Thompson on?\"})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-llamacpp/sql_llamacpp/__init__.py | from sql_llamacpp.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-llamacpp/sql_llamacpp/chain.py | # Get LLM
import os
from pathlib import Path
import requests
from langchain.memory import ConversationBufferMemory
from langchain_community.llms import LlamaCpp
from langchain_community.utilities import SQLDatabase
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
# File name and URL
file_name = "mistral-7b-instruct-v0.1.Q4_K_M.gguf"
url = (
"https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/"
"mistral-7b-instruct-v0.1.Q4_K_M.gguf"
)
# Check if file is present in the current directory
if not os.path.exists(file_name):
print(f"'{file_name}' not found. Downloading...")
# Download the file
response = requests.get(url)
response.raise_for_status() # Raise an exception for HTTP errors
with open(file_name, "wb") as f:
f.write(response.content)
print(f"'{file_name}' has been downloaded.")
else:
print(f"'{file_name}' already exists in the current directory.")
# Add the LLM downloaded from HF
model_path = file_name
n_gpu_layers = 1 # Metal set to 1 is enough.
# Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.
n_batch = 512
llm = LlamaCpp(
model_path=model_path,
n_gpu_layers=n_gpu_layers,
n_batch=n_batch,
n_ctx=2048,
# f16_kv MUST set to True
# otherwise you will run into problem after a couple of calls
f16_kv=True,
verbose=True,
)
db_path = Path(__file__).parent / "nba_roster.db"
rel = db_path.relative_to(Path.cwd())
db_string = f"sqlite:///{rel}"
db = SQLDatabase.from_uri(db_string, sample_rows_in_table_info=0)
def get_schema(_):
return db.get_table_info()
def run_query(query):
return db.run(query)
# Prompt
template = """Based on the table schema below, write a SQL query that would answer the user's question:
{schema}
Question: {question}
SQL Query:""" # noqa: E501
prompt = ChatPromptTemplate.from_messages(
[
("system", "Given an input question, convert it to a SQL query. No pre-amble."),
MessagesPlaceholder(variable_name="history"),
("human", template),
]
)
memory = ConversationBufferMemory(return_messages=True)
# Chain to query with memory
sql_chain = (
RunnablePassthrough.assign(
schema=get_schema,
history=RunnableLambda(lambda x: memory.load_memory_variables(x)["history"]),
)
| prompt
| llm.bind(stop=["\nSQLResult:"])
| StrOutputParser()
)
def save(input_output):
output = {"output": input_output.pop("output")}
memory.save_context(input_output, output)
return output["output"]
sql_response_memory = RunnablePassthrough.assign(output=sql_chain) | save
# Chain to answer
template = """Based on the table schema below, question, sql query, and sql response, write a natural language response:
{schema}
Question: {question}
SQL Query: {query}
SQL Response: {response}""" # noqa: E501
prompt_response = ChatPromptTemplate.from_messages(
[
(
"system",
"Given an input question and SQL response, convert it to a natural "
"language answer. No pre-amble.",
),
("human", template),
]
)
# Supply the input types to the prompt
class InputType(BaseModel):
question: str
chain = (
RunnablePassthrough.assign(query=sql_response_memory).with_types(
input_type=InputType
)
| RunnablePassthrough.assign(
schema=get_schema,
response=lambda x: db.run(x["query"]),
)
| prompt_response
| llm
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-llama2/README.md |
# sql-llama2
This template enables a user to interact with a SQL database using natural language.
It uses LLaMA2-13b hosted by [Replicate](https://python.langchain.com/docs/integrations/llms/replicate), but can be adapted to any API that supports LLaMA2, including [Fireworks](https://python.langchain.com/docs/integrations/chat/fireworks).
The template includes an example database of 2023 NBA rosters.
For more information on how to build this database, see [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb).
## Environment Setup
Ensure the `REPLICATE_API_TOKEN` is set in your environment.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package sql-llama2
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add sql-llama2
```
And add the following code to your `server.py` file:
```python
from sql_llama2 import chain as sql_llama2_chain
add_routes(app, sql_llama2_chain, path="/sql-llama2")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/sql-llama2/playground](http://127.0.0.1:8000/sql-llama2/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/sql-llama2")
```
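
As in the template's notebook, the chain takes a `question` key:

```python
runnable.invoke({"question": "What team is Klay Thompson on?"})
```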
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-llama2/sql_llama2.ipynb | {
"cells": [
{
"cell_type": "markdown",
"id": "22f3f9f9-80ee-4da1-ba12-105a0ce74203",
"metadata": {},
"source": [
"## Run Template\n",
"\n",
"In `server.py`, set -\n",
"```\n",
"add_routes(app, chain, path=\"/sql_llama2\")\n",
"```\n",
"\n",
"This template includes an example DB of 2023 NBA rosters.\n",
"\n",
"We can ask questions related to NBA players. "
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "4545c603-77ec-4c15-b9c0-a70529eebed0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\" Sure thing! Here's the natural language response based on the given SQL query and response:\\n\\nKlay Thompson plays for the Golden State Warriors.\""
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"sql_app = RemoteRunnable(\"http://0.0.0.0:8001/sql_llama2\")\n",
"sql_app.invoke({\"question\": \"What team is Klay Thompson on?\"})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-llama2/sql_llama2/__init__.py | from sql_llama2.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/sql-llama2/sql_llama2/chain.py | from pathlib import Path
from langchain_community.llms import Replicate
from langchain_community.utilities import SQLDatabase
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnablePassthrough
# make sure to set REPLICATE_API_TOKEN in your environment
# use llama-2-13b model in replicate
replicate_id = "meta/llama-2-13b-chat:f4e2de70d66816a838a89eeeb621910adffb0dd0baba3976c96980970978018d" # noqa: E501
llm = Replicate(
model=replicate_id,
model_kwargs={"temperature": 0.01, "max_length": 500, "top_p": 1},
)
db_path = Path(__file__).parent / "nba_roster.db"
rel = db_path.relative_to(Path.cwd())
db_string = f"sqlite:///{rel}"
db = SQLDatabase.from_uri(db_string, sample_rows_in_table_info=0)
def get_schema(_):
return db.get_table_info()
def run_query(query):
return db.run(query)
template_query = """Based on the table schema below, write a SQL query that would answer the user's question:
{schema}
Question: {question}
SQL Query:""" # noqa: E501
prompt = ChatPromptTemplate.from_messages(
[
("system", "Given an input question, convert it to a SQL query. No pre-amble."),
("human", template_query),
]
)
sql_response = (
RunnablePassthrough.assign(schema=get_schema)
| prompt
| llm.bind(stop=["\nSQLResult:"])
| StrOutputParser()
)
template_response = """Based on the table schema below, question, sql query, and sql response, write a natural language response:
{schema}
Question: {question}
SQL Query: {query}
SQL Response: {response}""" # noqa: E501
prompt_response = ChatPromptTemplate.from_messages(
[
(
"system",
"Given an input question and SQL response, convert it to a natural "
"language answer. No pre-amble.",
),
("human", template_response),
]
)
# Supply the input types to the prompt
class InputType(BaseModel):
question: str
chain = (
RunnablePassthrough.assign(query=sql_response).with_types(input_type=InputType)
| RunnablePassthrough.assign(
schema=get_schema,
response=lambda x: db.run(x["query"]),
)
| prompt_response
| llm
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/solo-performance-prompting-agent/README.md | # solo-performance-prompting-agent
This template creates an agent that transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas.
A cognitive synergist refers to an intelligent agent that collaborates with multiple minds, combining their individual strengths and knowledge, to enhance problem-solving and overall performance in complex tasks. By dynamically identifying and simulating different personas based on task inputs, SPP unleashes the potential of cognitive synergy in LLMs.
This template will use the `DuckDuckGo` search API.
## Environment Setup
This template will use `OpenAI` by default.
Be sure that `OPENAI_API_KEY` is set in your environment.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package solo-performance-prompting-agent
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add solo-performance-prompting-agent
```
And add the following code to your `server.py` file:
```python
from solo_performance_prompting_agent.agent import agent_executor as solo_performance_prompting_agent_chain
add_routes(app, solo_performance_prompting_agent_chain, path="/solo-performance-prompting-agent")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/solo-performance-prompting-agent/playground](http://127.0.0.1:8000/solo-performance-prompting-agent/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/solo-performance-prompting-agent")
``` | Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/solo-performance-prompting-agent/solo_performance_prompting_agent/agent.py | from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_xml
from langchain.tools import DuckDuckGoSearchRun
from langchain.tools.render import render_text_description
from langchain_community.llms import OpenAI
from langchain_core.pydantic_v1 import BaseModel
from solo_performance_prompting_agent.parser import parse_output
from solo_performance_prompting_agent.prompts import conversational_prompt
_model = OpenAI()
_tools = [DuckDuckGoSearchRun()]
_prompt = conversational_prompt.partial(
tools=render_text_description(_tools),
tool_names=", ".join([t.name for t in _tools]),
)
_llm_with_stop = _model.bind(stop=["</tool_input>", "</final_answer>"])
agent = (
{
"question": lambda x: x["question"],
"agent_scratchpad": lambda x: format_xml(x["intermediate_steps"]),
}
| _prompt
| _llm_with_stop
| parse_output
)
class AgentInput(BaseModel):
question: str
agent_executor = AgentExecutor(
agent=agent, tools=_tools, verbose=True, handle_parsing_errors=True
).with_types(input_type=AgentInput)
agent_executor = agent_executor | (lambda x: x["output"])
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/solo-performance-prompting-agent/solo_performance_prompting_agent/parser.py | from langchain_core.agents import AgentAction, AgentFinish
def parse_output(message: str):
FINAL_ANSWER_ACTION = "<final_answer>"
includes_answer = FINAL_ANSWER_ACTION in message
if includes_answer:
answer = message.split(FINAL_ANSWER_ACTION)[1].strip()
if "</final_answer>" in answer:
answer = answer.split("</final_answer>")[0].strip()
return AgentFinish(return_values={"output": answer}, log=message)
elif "</tool>" in message:
tool, tool_input = message.split("</tool>")
_tool = tool.split("<tool>")[1]
_tool_input = tool_input.split("<tool_input>")[1]
if "</tool_input>" in _tool_input:
_tool_input = _tool_input.split("</tool_input>")[0]
return AgentAction(tool=_tool, tool_input=_tool_input, log=message)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/solo-performance-prompting-agent/solo_performance_prompting_agent/prompts.py | from langchain_core.prompts import ChatPromptTemplate
template = """When faced with a task, begin by identifying the participants who will contribute to solving the task. Then, initiate a multi-round collaboration process until a final solution is reached. The participants will
give critical comments and detailed suggestions whenever necessary.
The experts also have access to {tools} and can use them based on their expertise.
In order to use a tool, the participants can use <tool></tool> and <tool_input></tool_input> tags. They will then get back a response in the form <observation></observation>
For example, if they have a tool called 'search' that could run a google search, in order to search for the weather in SF they would respond:
<tool>search</tool><tool_input>weather in SF</tool_input>
<observation>64 degrees</observation>
When they are done, they can respond with the answer to the conversation.
Once the participants have reached a final solution, they can respond with the final answer in the form <final_answer></final_answer>
Here are some examples:
---
Example 1: Use numbers 6 12 1 1 and basic arithmetic operations (+ - * /) to obtain 24. You need to use all numbers, and each number can only be used once.
Participants: AI Assistant (you); Math Expert
Start collaboration!
Math Expert: Let's analyze the task in detail. You need to make sure that you meet the requirement, that you need to use exactly the four numbers (6 12 1 1) to construct 24. To reach 24, you can think
of the common divisors of 24 such as 4, 6, 8, 3 and try to construct these first. Also you need to think of potential additions that can reach 24, such as 12 + 12.
AI Assistant (you): Thanks for the hints! Here's one initial solution: (12 / (1 + 1)) * 6 = 24
Math Expert: Let's check the answer step by step. (1+1) = 2, (12 / 2) = 6, 6 * 6 = 36 which is not 24! The answer is not correct. Can you fix this by considering other combinations? Please do not make
similar mistakes.
AI Assistant (you): Thanks for pointing out the mistake. Here is a revised solution considering 24 can also be reached by 3 * 8: (6 + 1 + 1) * (12 / 4) = 24.
Math Expert: Let's first check if the calculation is correct. (6 + 1 + 1) = 8, 12 / 4 = 3, 8 * 3 = 24. The calculation is correct, but you used 6 1 1 12 4 which is not the same as the input 6 12 1 1. Can you
avoid using a number that is not part of the input?
AI Assistant (you): You are right, here is a revised solution considering 24 can be reached by 12 + 12 and without using any additional numbers: 6 * (1 - 1) + 12 = 24.
Math Expert: Let's check the answer again. 1 - 1 = 0, 6 * 0 = 0, 0 + 12 = 12. I believe you are very close, here is a hint: try to change the "1 - 1" to "1 + 1".
AI Assistant (you): Sure, here is the corrected answer: 6 * (1+1) + 12 = 24
Math Expert: Let's verify the solution. 1 + 1 = 2, 6 * 2 = 12, 12 + 12 = 24. You used 1 1 6 12 which is identical to the input 6 12 1 1. Everything looks good!
Finish collaboration!
<final_answer>6 * (1 + 1) + 12 = 24</final_answer>
---
Example 2: Who is the father of the longest serving US president?
Participants: AI Assistant (you); History Expert
Start collaboration!
History Expert: The longest serving US president is Franklin D. Roosevelt. He served for 12 years and 39 days. We need to run a search to find out who is his father.
AI Assistant (you): Thanks for the hints! Let me run a search: <tool>search</tool><tool_input>Who is the father of Franklin D. Roosevelt?</tool_input>
<observation>James Roosevelt I</observation>
AI Assistant (you): James Roosevelt I is the father of Franklin D. Roosevelt, the longest serving US President.
History Expert: Everything looks good!
Finish collaboration!
<final_answer>James Roosevelt I is the father of Franklin D. Roosevelt, the longest serving US President.</final_answer>
---
Now, identify the participants and collaboratively solve the following task step by step.""" # noqa: E501
conversational_prompt = ChatPromptTemplate.from_messages(
[
("system", template),
("user", "{question}"),
("ai", "{agent_scratchpad}"),
]
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/skeleton-of-thought/README.md | # skeleton-of-thought
Implements "Skeleton of Thought" from [this](https://sites.google.com/view/sot-llm) paper.
This technique speeds up long-form generation by first producing a skeleton of the answer and then expanding each point of the outline.
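Concretely, the packaged chain takes a single `question` and returns the stitched-together answer (see `skeleton_of_thought/chain.py`); a quick sketch, where the question itself is just an example:

```python
from skeleton_of_thought.chain import chain

# Drafts a numbered skeleton first, then expands every point and stitches the
# expansions into the final answer.
print(chain.invoke({"question": "What are the pros and cons of remote work?"}))
```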
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
To get your `OPENAI_API_KEY`, navigate to [API keys](https://platform.openai.com/account/api-keys) on your OpenAI account and create a new secret key.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package skeleton-of-thought
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add skeleton-of-thought
```
And add the following code to your `server.py` file:
```python
from skeleton_of_thought import chain as skeleton_of_thought_chain
add_routes(app, skeleton_of_thought_chain, path="/skeleton-of-thought")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/skeleton-of-thought/playground](http://127.0.0.1:8000/skeleton-of-thought/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/skeleton-of-thought")
``` | Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/skeleton-of-thought/skeleton_of_thought/__init__.py | from skeleton_of_thought.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/skeleton-of-thought/skeleton_of_thought/chain.py | from langchain_community.chat_models import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnablePassthrough
skeleton_generator_template = """[User:] You’re an organizer responsible for only \
giving the skeleton (not the full content) for answering the question.
Provide the skeleton in a list of points (numbered 1., 2., 3., etc.) to answer \
the question. \
Instead of writing a full sentence, each skeleton point should be very short \
with only 3∼5 words. \
Generally, the skeleton should have 3∼10 points. Now, please provide the skeleton \
for the following question.
{question}
Skeleton:
[Assistant:] 1."""
skeleton_generator_prompt = ChatPromptTemplate.from_template(
skeleton_generator_template
)
skeleton_generator_chain = (
skeleton_generator_prompt | ChatOpenAI() | StrOutputParser() | (lambda x: "1. " + x)
)
point_expander_template = """[User:] You’re responsible for continuing \
the writing of one and only one point in the overall answer to the following question.
{question}
The skeleton of the answer is
{skeleton}
Continue and only continue the writing of point {point_index}. \
Write it **very shortly** in 1∼2 sentence and do not continue with other points!
[Assistant:] {point_index}. {point_skeleton}"""
point_expander_prompt = ChatPromptTemplate.from_template(point_expander_template)
point_expander_chain = RunnablePassthrough.assign(
continuation=point_expander_prompt | ChatOpenAI() | StrOutputParser()
) | (lambda x: x["point_skeleton"].strip() + " " + x["continuation"])
def parse_numbered_list(input_str):
"""Parses a numbered list into a list of dictionaries
Each element having two keys:
'index' for the index in the numbered list, and 'point' for the content.
"""
# Split the input string into lines
lines = input_str.split("\n")
# Initialize an empty list to store the parsed items
parsed_list = []
for line in lines:
# Split each line at the first period to separate the index from the content
parts = line.split(". ", 1)
if len(parts) == 2:
# Convert the index part to an integer
# and strip any whitespace from the content
index = int(parts[0])
point = parts[1].strip()
# Add a dictionary to the parsed list
parsed_list.append({"point_index": index, "point_skeleton": point})
return parsed_list
def create_list_elements(_input):
skeleton = _input["skeleton"]
numbered_list = parse_numbered_list(skeleton)
for el in numbered_list:
el["skeleton"] = skeleton
el["question"] = _input["question"]
return numbered_list
def get_final_answer(expanded_list):
final_answer_str = "Here's a comprehensive answer:\n\n"
for i, el in enumerate(expanded_list):
final_answer_str += f"{i+1}. {el}\n\n"
return final_answer_str
class ChainInput(BaseModel):
question: str
chain = (
RunnablePassthrough.assign(skeleton=skeleton_generator_chain)
| create_list_elements
| point_expander_chain.map()
| get_final_answer
).with_types(input_type=ChainInput)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/shopping-assistant/README.md | # shopping-assistant
This template creates a shopping assistant that helps users find products that they are looking for.
This template will use `Ionic` to search for products.
## Environment Setup
This template will use `OpenAI` by default.
Be sure that `OPENAI_API_KEY` is set in your environment.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package shopping-assistant
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add shopping-assistant
```
And add the following code to your `server.py` file:
```python
from shopping_assistant.agent import agent_executor as shopping_assistant_chain
add_routes(app, shopping_assistant_chain, path="/shopping-assistant")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/shopping-assistant/playground](http://127.0.0.1:8000/shopping-assistant/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/shopping-assistant")
```
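
The agent's input schema (see `shopping_assistant/agent.py`) takes an `input` string and a `chat_history` list, so a call might look like the following, where the product query is just an example:

```python
runnable.invoke(
    {"input": "Find me a budget mechanical keyboard", "chat_history": []}
)
```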
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/shopping-assistant/shopping_assistant/agent.py | from typing import List, Tuple
from ionic_langchain.tool import IonicTool
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.messages import AIMessage, SystemMessage
from langchain_core.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
)
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI
tools = [IonicTool().tool()]
llm = ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo-1106", streaming=True)
# You can modify these!
AI_CONTENT = """
I should use the full pdp url that the tool provides me.
Always include query parameters
"""
SYSTEM_CONTENT = """
You are a shopping assistant.
You help humans find the best product given their {input}.
"""
messages = [
SystemMessage(content=SYSTEM_CONTENT),
HumanMessagePromptTemplate.from_template("{input}"),
AIMessage(content=AI_CONTENT),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
prompt = ChatPromptTemplate.from_messages(messages)
agent = create_openai_tools_agent(llm, tools, prompt)
class AgentInput(BaseModel):
input: str
chat_history: List[Tuple[str, str]] = Field(
..., extra={"widget": {"type": "chat", "input": "input", "output": "output"}}
)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True).with_types(
input_type=AgentInput
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/self-query-supabase/README.md |
# self-query-supabase
This template allows natural-language structured querying of Supabase.
[Supabase](https://supabase.com/docs) is an open-source alternative to Firebase, built on top of [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL).
It uses [pgvector](https://github.com/pgvector/pgvector) to store embeddings within your tables.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
To get your `OPENAI_API_KEY`, navigate to [API keys](https://platform.openai.com/account/api-keys) on your OpenAI account and create a new secret key.
To find your `SUPABASE_URL` and `SUPABASE_SERVICE_KEY`, head to your Supabase project's [API settings](https://supabase.com/dashboard/project/_/settings/api).
- `SUPABASE_URL` corresponds to the Project URL
- `SUPABASE_SERVICE_KEY` corresponds to the `service_role` API key
```shell
export SUPABASE_URL=
export SUPABASE_SERVICE_KEY=
export OPENAI_API_KEY=
```
## Setup Supabase Database
Use these steps to set up your Supabase database if you haven't already.
1. Head over to https://database.new to provision your Supabase database.
2. In the studio, jump to the [SQL editor](https://supabase.com/dashboard/project/_/sql/new) and run the following script to enable `pgvector` and set up your database as a vector store:
```sql
-- Enable the pgvector extension to work with embedding vectors
create extension if not exists vector;
-- Create a table to store your documents
create table
documents (
id uuid primary key,
content text, -- corresponds to Document.pageContent
metadata jsonb, -- corresponds to Document.metadata
embedding vector (1536) -- 1536 works for OpenAI embeddings, change as needed
);
-- Create a function to search for documents
create function match_documents (
query_embedding vector (1536),
filter jsonb default '{}'
) returns table (
id uuid,
content text,
metadata jsonb,
similarity float
) language plpgsql as $$
#variable_conflict use_column
begin
return query
select
id,
content,
metadata,
1 - (documents.embedding <=> query_embedding) as similarity
from documents
where metadata @> filter
order by documents.embedding <=> query_embedding;
end;
$$;
```
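
(Optional) With the table and function in place, you can index a few documents from Python before using the chain. A minimal sketch using `SupabaseVectorStore.from_documents`, where the sample document and its metadata are placeholders matching the metadata fields used in `self_query_supabase/chain.py`:

```python
import os

from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores.supabase import SupabaseVectorStore
from langchain_core.documents import Document
from supabase.client import create_client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"])

# Placeholder document; its metadata matches the fields the template filters on.
docs = [
    Document(
        page_content="A scientist brings dinosaurs back to life and chaos ensues",
        metadata={"genre": "science fiction", "year": 1993, "rating": 7.7},
    ),
]

# Embeds the documents and writes them into the `documents` table created above.
SupabaseVectorStore.from_documents(
    docs,
    OpenAIEmbeddings(),
    client=supabase,
    table_name="documents",
    query_name="match_documents",
)
```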
## Usage
To use this package, install the LangChain CLI first:
```shell
pip install -U langchain-cli
```
Create a new LangChain project and install this package as the only one:
```shell
langchain app new my-app --package self-query-supabase
```
To add this to an existing project, run:
```shell
langchain app add self-query-supabase
```
Add the following code to your `server.py` file:
```python
from self_query_supabase.chain import chain as self_query_supabase_chain
add_routes(app, self_query_supabase_chain, path="/self-query-supabase")
```
(Optional) If you have access to LangSmith, configure it to help trace, monitor and debug LangChain applications. If you don't have access, skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
Access the playground at [http://127.0.0.1:8000/self-query-supabase/playground](http://127.0.0.1:8000/self-query-supabase/playground)
Access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/self-query-supabase")
```
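
The chain takes a plain query string, so a call might look like this (the question is just an example that matches the movie-style metadata in `self_query_supabase/chain.py`):

```python
runnable.invoke("What are some highly rated science fiction movies from the 1990s?")
```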
TODO: Instructions to set up the Supabase database and install the package.
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/self-query-supabase/self_query_supabase/chain.py | import os
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.llms.openai import OpenAI
from langchain_community.vectorstores.supabase import SupabaseVectorStore
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from supabase.client import create_client
supabase_url = os.environ.get("SUPABASE_URL")
supabase_key = os.environ.get("SUPABASE_SERVICE_KEY")
supabase = create_client(supabase_url, supabase_key)
embeddings = OpenAIEmbeddings()
vectorstore = SupabaseVectorStore(
client=supabase,
embedding=embeddings,
table_name="documents",
query_name="match_documents",
)
# Adjust this based on the metadata you store in the `metadata` JSON column
metadata_field_info = [
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating", description="A 1-10 rating for the movie", type="float"
),
]
# Adjust this based on the type of documents you store
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
chain = RunnableParallel({"query": RunnablePassthrough()}) | retriever
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/self-query-qdrant/README.md |
# self-query-qdrant
This template performs [self-querying](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/)
using Qdrant and OpenAI. By default, it uses an artificial dataset of 10 documents, but you can replace it with your own dataset.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Set the `QDRANT_URL` to the URL of your Qdrant instance. If you use [Qdrant Cloud](https://cloud.qdrant.io)
you have to set the `QDRANT_API_KEY` environment variable as well. If you do not set any of them,
the template will try to connect to a local Qdrant instance at `http://localhost:6333`.
```shell
export QDRANT_URL=
export QDRANT_API_KEY=
export OPENAI_API_KEY=
```
## Usage
To use this package, install the LangChain CLI first:
```shell
pip install -U "langchain-cli[serve]"
```
Create a new LangChain project and install this package as the only one:
```shell
langchain app new my-app --package self-query-qdrant
```
To add this to an existing project, run:
```shell
langchain app add self-query-qdrant
```
### Defaults
Before you launch the server, you need to create a Qdrant collection and index the documents.
It can be done by running the following command:
```python
from self_query_qdrant.chain import initialize
initialize()
```
Add the following code to your `app/server.py` file:
```python
from self_query_qdrant.chain import chain
add_routes(app, chain, path="/self-query-qdrant")
```
The default dataset consists of 10 documents about dishes, along with their price and restaurant information.
You can find the documents in the `packages/self-query-qdrant/self_query_qdrant/defaults.py` file.
Here is one of the documents:
```python
from langchain_core.documents import Document
Document(
page_content="Spaghetti with meatballs and tomato sauce",
metadata={
"price": 12.99,
"restaurant": {
"name": "Olive Garden",
"location": ["New York", "Chicago", "Los Angeles"],
},
},
)
```
Self-querying allows semantic search over the documents combined with additional filtering
based on the metadata. For example, you can search for dishes that cost less than $15 and are served in New York, as in the sketch below.
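A minimal sketch of that query, calling the packaged chain directly (the exact phrasing of the question is just an example):

```python
from self_query_qdrant.chain import chain

# The self-query retriever turns this into a semantic query plus a metadata
# filter along the lines of: price < 15 AND "New York" in restaurant.location
print(chain.invoke("Italian food for less than $15 in New York"))
```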
### Customization
All the examples above assume that you want to launch the template with just the defaults.
If you want to customize the template, you can do it by passing the parameters to the `create_chain` function
in the `app/server.py` file:
```python
from langchain_community.llms import Cohere
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain.chains.query_constructor.schema import AttributeInfo
from self_query_qdrant.chain import create_chain
chain = create_chain(
llm=Cohere(),
embeddings=HuggingFaceEmbeddings(),
document_contents="Descriptions of cats, along with their names and breeds.",
metadata_field_info=[
AttributeInfo(name="name", description="Name of the cat", type="string"),
AttributeInfo(name="breed", description="Cat's breed", type="string"),
],
collection_name="cats",
)
```
The same goes for the `initialize` function that creates a Qdrant collection and indexes the documents:
```python
from langchain_core.documents import Document
from langchain_community.embeddings import HuggingFaceEmbeddings
from self_query_qdrant.chain import initialize
initialize(
embeddings=HuggingFaceEmbeddings(),
collection_name="cats",
documents=[
Document(
page_content="A mean lazy old cat who destroys furniture and eats lasagna",
metadata={"name": "Garfield", "breed": "Tabby"},
),
...
]
)
```
The template is flexible and can easily be adapted to different sets of documents.
### LangSmith
(Optional) If you have access to LangSmith, configure it to help trace, monitor and debug LangChain applications. If you don't have access, skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
### Local Server
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
Access the playground at [http://127.0.0.1:8000/self-query-qdrant/playground](http://127.0.0.1:8000/self-query-qdrant/playground)
Access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/self-query-qdrant")
```
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/self-query-qdrant/self_query_qdrant/__init__.py | from self_query_qdrant.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/self-query-qdrant/self_query_qdrant/chain.py | import os
from typing import List, Optional
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.retrievers import SelfQueryRetriever
from langchain_community.llms import BaseLLM
from langchain_community.vectorstores.qdrant import Qdrant
from langchain_core.documents import Document
from langchain_core.embeddings import Embeddings
from langchain_core.output_parsers.string import StrOutputParser
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_openai import OpenAI, OpenAIEmbeddings
from qdrant_client import QdrantClient
from self_query_qdrant import defaults, helper, prompts
class Query(BaseModel):
__root__: str
def create_chain(
llm: Optional[BaseLLM] = None,
embeddings: Optional[Embeddings] = None,
document_contents: str = defaults.DEFAULT_DOCUMENT_CONTENTS,
metadata_field_info: List[AttributeInfo] = defaults.DEFAULT_METADATA_FIELD_INFO,
collection_name: str = defaults.DEFAULT_COLLECTION_NAME,
):
"""
Create a chain that can be used to query a Qdrant vector store with a self-querying
capability. By default, this chain will use the OpenAI LLM and OpenAIEmbeddings, and
work with the default document contents and metadata field info. You can override
these defaults by passing in your own values.
:param llm: an LLM to use for generating text
:param embeddings: an Embeddings to use for generating queries
:param document_contents: a description of the document set
:param metadata_field_info: list of metadata attributes
:param collection_name: name of the Qdrant collection to use
:return:
"""
llm = llm or OpenAI()
embeddings = embeddings or OpenAIEmbeddings()
# Set up a vector store to store your vectors and metadata
client = QdrantClient(
url=os.environ.get("QDRANT_URL", "http://localhost:6333"),
api_key=os.environ.get("QDRANT_API_KEY"),
)
vectorstore = Qdrant(
client=client,
collection_name=collection_name,
embeddings=embeddings,
)
# Set up a retriever to query your vector store with self-querying capabilities
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_contents, metadata_field_info, verbose=True
)
context = RunnableParallel(
context=retriever | helper.combine_documents,
query=RunnablePassthrough(),
)
pipeline = context | prompts.LLM_CONTEXT_PROMPT | llm | StrOutputParser()
return pipeline.with_types(input_type=Query)
def initialize(
embeddings: Optional[Embeddings] = None,
collection_name: str = defaults.DEFAULT_COLLECTION_NAME,
documents: List[Document] = defaults.DEFAULT_DOCUMENTS,
):
"""
Initialize a vector store with a set of documents. By default, the documents will be
compatible with the default metadata field info. You can override these defaults by
passing in your own values.
:param embeddings: an Embeddings to use for generating queries
:param collection_name: name of the Qdrant collection to use
:param documents: a list of documents to initialize the vector store with
:return:
"""
embeddings = embeddings or OpenAIEmbeddings()
# Set up a vector store to store your vectors and metadata
Qdrant.from_documents(
documents,
embedding=embeddings,
collection_name=collection_name,
url=os.environ.get("QDRANT_URL", "http://localhost:6333"),
api_key=os.environ.get("QDRANT_API_KEY"),
)
# Create the default chain
chain = create_chain()
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/self-query-qdrant/self_query_qdrant/defaults.py | from langchain.chains.query_constructor.schema import AttributeInfo
from langchain_core.documents import Document
# Qdrant collection name
DEFAULT_COLLECTION_NAME = "restaurants"
# Here is a description of the dataset and metadata attributes. Metadata attributes will
# be used to filter the results of the query beyond the semantic search.
DEFAULT_DOCUMENT_CONTENTS = (
"Dishes served at different restaurants, along with the restaurant information"
)
DEFAULT_METADATA_FIELD_INFO = [
AttributeInfo(
name="price",
description="The price of the dish",
type="float",
),
AttributeInfo(
name="restaurant.name",
description="The name of the restaurant",
type="string",
),
AttributeInfo(
name="restaurant.location",
description="Name of the city where the restaurant is located",
type="string or list[string]",
),
]
# A default set of documents to use for the vector store. This is a list of Document
# objects, which have a page_content field and a metadata field. The metadata field is a
# dictionary of metadata attributes compatible with the metadata field info above.
DEFAULT_DOCUMENTS = [
Document(
page_content="Pepperoni pizza with extra cheese, crispy crust",
metadata={
"price": 10.99,
"restaurant": {
"name": "Pizza Hut",
"location": ["New York", "Chicago"],
},
},
),
Document(
page_content="Spaghetti with meatballs and tomato sauce",
metadata={
"price": 12.99,
"restaurant": {
"name": "Olive Garden",
"location": ["New York", "Chicago", "Los Angeles"],
},
},
),
Document(
page_content="Chicken tikka masala with naan",
metadata={
"price": 14.99,
"restaurant": {
"name": "Indian Oven",
"location": ["New York", "Los Angeles"],
},
},
),
Document(
page_content="Chicken teriyaki with rice",
metadata={
"price": 11.99,
"restaurant": {
"name": "Sakura",
"location": ["New York", "Chicago", "Los Angeles"],
},
},
),
Document(
page_content="Scabbard fish with banana and passion fruit sauce",
metadata={
"price": 19.99,
"restaurant": {
"name": "A Concha",
"location": ["San Francisco"],
},
},
),
Document(
page_content="Pielmieni with sour cream",
metadata={
"price": 13.99,
"restaurant": {
"name": "Russian House",
"location": ["New York", "Chicago"],
},
},
),
Document(
page_content="Chicken biryani with raita",
metadata={
"price": 14.99,
"restaurant": {
"name": "Indian Oven",
"location": ["Los Angeles"],
},
},
),
Document(
page_content="Tomato soup with croutons",
metadata={
"price": 7.99,
"restaurant": {
"name": "Olive Garden",
"location": ["New York", "Chicago", "Los Angeles"],
},
},
),
Document(
page_content="Vegan burger with sweet potato fries",
metadata={
"price": 12.99,
"restaurant": {
"name": "Burger King",
"location": ["New York", "Los Angeles"],
},
},
),
Document(
page_content="Chicken nuggets with french fries",
metadata={
"price": 9.99,
"restaurant": {
"name": "McDonald's",
"location": ["San Francisco", "New York", "Los Angeles"],
},
},
),
]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/self-query-qdrant/self_query_qdrant/helper.py | from string import Formatter
from typing import List
from langchain_core.documents import Document
document_template = """
PASSAGE: {page_content}
METADATA: {metadata}
"""
def combine_documents(documents: List[Document]) -> str:
"""
Combine a list of documents into a single string that might be passed further down
to a language model.
:param documents: list of documents to combine
:return:
"""
formatter = Formatter()
return "\n\n".join(
formatter.format(
document_template,
page_content=document.page_content,
metadata=document.metadata,
)
for document in documents
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/self-query-qdrant/self_query_qdrant/prompts.py | from langchain_core.prompts import PromptTemplate
llm_context_prompt_template = """
Answer the user query using provided passages. Each passage has metadata given as
a nested JSON object you can also use. When answering, cite source name of the passages
you are answering from below the answer in a unique bullet point list.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----
{context}
----
Query: {query}
""" # noqa: E501
LLM_CONTEXT_PROMPT = PromptTemplate.from_template(llm_context_prompt_template)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/robocorp-action-server/README.md | # Langchain - Robocorp Action Server
This template enables using [Robocorp Action Server](https://github.com/robocorp/robocorp) served actions as tools for an Agent.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package robocorp-action-server
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add robocorp-action-server
```
And add the following code to your `server.py` file:
```python
from robocorp_action_server import agent_executor as action_server_chain
add_routes(app, action_server_chain, path="/robocorp-action-server")
```
### Running the Action Server
To run the Action Server, you need to have the Robocorp Action Server installed
```bash
pip install -U robocorp-action-server
```
Then you can run the Action Server with:
```bash
action-server new
cd ./your-project-name
action-server start
```
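
The packaged agent (see `robocorp_action_server/agent.py`) expects the Action Server at `http://localhost:8080`; if yours listens elsewhere, point the toolkit at the right URL. A sketch, where the alternative port is illustrative:

```python
from langchain_robocorp import ActionServerToolkit

# Point the toolkit at wherever your Action Server is listening (port is illustrative).
toolkit = ActionServerToolkit(url="http://localhost:4000")
tools = toolkit.get_tools()
```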
### Configure LangSmith (Optional)
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
### Start LangServe instance
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/robocorp-action-server/playground](http://127.0.0.1:8000/robocorp-action-server/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/robocorp-action-server")
```
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/robocorp-action-server/robocorp_action_server/__init__.py | from robocorp_action_server.agent import agent_executor
__all__ = ["agent_executor"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/robocorp-action-server/robocorp_action_server/agent.py | from langchain.agents import AgentExecutor, OpenAIFunctionsAgent
from langchain_core.messages import SystemMessage
from langchain_core.pydantic_v1 import BaseModel
from langchain_openai import ChatOpenAI
from langchain_robocorp import ActionServerToolkit
# Initialize LLM chat model
llm = ChatOpenAI(model="gpt-4", temperature=0)
# Initialize Action Server Toolkit
toolkit = ActionServerToolkit(url="http://localhost:8080")
tools = toolkit.get_tools()
# Initialize Agent
system_message = SystemMessage(content="You are a helpful assistant")
prompt = OpenAIFunctionsAgent.create_prompt(system_message)
agent = OpenAIFunctionsAgent(
llm=llm,
prompt=prompt,
tools=tools,
)
# Initialize Agent executor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
# Typings for Langserve playground
class Input(BaseModel):
input: str
class Output(BaseModel):
output: str
agent_executor = agent_executor.with_types(input_type=Input, output_type=Output) # type: ignore[arg-type, assignment]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rewrite-retrieve-read/README.md |
# rewrite-retrieve-read
This template implements the query transformation (re-writing) method from the paper [Query Rewriting for Retrieval-Augmented Large Language Models](https://arxiv.org/pdf/2305.14283.pdf) to optimize retrieval for RAG.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rewrite-retrieve-read
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rewrite-retrieve-read
```
And add the following code to your `server.py` file:
```python
from rewrite_retrieve_read.chain import chain as rewrite_retrieve_read_chain
add_routes(app, rewrite_retrieve_read_chain, path="/rewrite-retrieve-read")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rewrite-retrieve-read/playground](http://127.0.0.1:8000/rewrite-retrieve-read/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rewrite-retrieve-read")
```
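
The chain takes a plain question string; `main.py` in this template invokes it the same way:

```python
runnable.invoke("man that sam bankman fried trial was crazy! what is langchain?")
```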
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rewrite-retrieve-read/main.py | from rewrite_retrieve_read.chain import chain
if __name__ == "__main__":
chain.invoke("man that sam bankman fried trial was crazy! what is langchain?")
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rewrite-retrieve-read/rewrite_retrieve_read/chain.py | from langchain_community.chat_models import ChatOpenAI
from langchain_community.utilities import DuckDuckGoSearchAPIWrapper
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnablePassthrough
template = """Answer the users question based only on the following context:
<context>
{context}
</context>
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI(temperature=0)
search = DuckDuckGoSearchAPIWrapper()
def retriever(query):
return search.run(query)
template = """Provide a better search query for \
web search engine to answer the given question, end \
the queries with ’**’. Question: \
{x} Answer:"""
rewrite_prompt = ChatPromptTemplate.from_template(template)
# Parser to remove the `**`
def _parse(text):
return text.strip("**")
rewriter = rewrite_prompt | ChatOpenAI(temperature=0) | StrOutputParser() | _parse
chain = (
{
"context": {"x": RunnablePassthrough()} | rewriter | retriever,
"question": RunnablePassthrough(),
}
| prompt
| model
| StrOutputParser()
)
# Add input type for playground
class Question(BaseModel):
__root__: str
chain = chain.with_types(input_type=Question)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/retrieval-agent/README.md | # retrieval-agent
This package uses Azure OpenAI to do retrieval using an agent architecture.
By default, this does retrieval over Arxiv.
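Once the server is running (see the usage section below), the agent expects a dict with an `input` string and a `chat_history` list of (human, ai) message pairs. A minimal client-side sketch (the route name assumes the `add_routes` snippet shown below):

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/retrieval-agent")
result = runnable.invoke(
    {"input": "Find recent papers on retrieval-augmented generation", "chat_history": []}
)
```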
## Environment Setup
Since we are using Azure OpenAI, we will need to set the following environment variables:
```shell
export AZURE_OPENAI_ENDPOINT=...
export AZURE_OPENAI_API_VERSION=...
export AZURE_OPENAI_API_KEY=...
```
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package retrieval-agent
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add retrieval-agent
```
And add the following code to your `server.py` file:
```python
from retrieval_agent import chain as retrieval_agent_chain
add_routes(app, retrieval_agent_chain, path="/retrieval-agent")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/retrieval-agent/playground](http://127.0.0.1:8000/retrieval-agent/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/retrieval-agent")
``` | Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/retrieval-agent/retrieval_agent/__init__.py | from retrieval_agent.chain import agent_executor
__all__ = ["agent_executor"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/retrieval-agent/retrieval_agent/chain.py | import os
from typing import List, Tuple
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.callbacks.manager import CallbackManagerForRetrieverRun
from langchain.tools.retriever import create_retriever_tool
from langchain_community.tools.convert_to_openai import format_tool_to_openai_function
from langchain_community.utilities.arxiv import ArxivAPIWrapper
from langchain_core.documents import Document
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.retrievers import BaseRetriever
from langchain_openai import AzureChatOpenAI
class ArxivRetriever(BaseRetriever, ArxivAPIWrapper):
"""`Arxiv` retriever.
It wraps load() to get_relevant_documents().
It uses all ArxivAPIWrapper arguments without any change.
"""
get_full_documents: bool = False
def _get_relevant_documents(
self, query: str, *, run_manager: CallbackManagerForRetrieverRun
) -> List[Document]:
try:
if self.is_arxiv_identifier(query):
results = self.arxiv_search(
id_list=query.split(),
max_results=self.top_k_results,
).results()
else:
results = self.arxiv_search( # type: ignore
query[: self.ARXIV_MAX_QUERY_LENGTH], max_results=self.top_k_results
).results()
except self.arxiv_exceptions as ex:
return [Document(page_content=f"Arxiv exception: {ex}")]
docs = [
Document(
page_content=result.summary,
metadata={
"Published": result.updated.date(),
"Title": result.title,
"Authors": ", ".join(a.name for a in result.authors),
},
)
for result in results
]
return docs
description = (
"A wrapper around Arxiv.org "
"Useful for when you need to answer questions about Physics, Mathematics, "
"Computer Science, Quantitative Biology, Quantitative Finance, Statistics, "
"Electrical Engineering, and Economics "
"from scientific articles on arxiv.org. "
"Input should be a search query."
)
# Create the tool
arxiv_tool = create_retriever_tool(ArxivRetriever(), "arxiv", description)
tools = [arxiv_tool]
llm = AzureChatOpenAI(
temperature=0,
azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
)
assistant_system_message = """You are a helpful research assistant. \
Lookup relevant information as needed."""
prompt = ChatPromptTemplate.from_messages(
[
("system", assistant_system_message),
MessagesPlaceholder(variable_name="chat_history"),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
llm_with_tools = llm.bind(functions=[format_tool_to_openai_function(t) for t in tools])
def _format_chat_history(chat_history: List[Tuple[str, str]]):
buffer = []
for human, ai in chat_history:
buffer.append(HumanMessage(content=human))
buffer.append(AIMessage(content=ai))
return buffer
agent = (
{
"input": lambda x: x["input"],
"chat_history": lambda x: _format_chat_history(x["chat_history"]),
"agent_scratchpad": lambda x: format_to_openai_function_messages(
x["intermediate_steps"]
),
}
| prompt
| llm_with_tools
| OpenAIFunctionsAgentOutputParser()
)
class AgentInput(BaseModel):
input: str
chat_history: List[Tuple[str, str]] = Field(
..., extra={"widget": {"type": "chat", "input": "input", "output": "output"}}
)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True).with_types(
input_type=AgentInput
)
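# Example usage (a hedged sketch, not part of the template): assuming the Azure
# OpenAI environment variables above are set, the executor can be called with an
# input string and a (possibly empty) chat history:
#
#   result = agent_executor.invoke(
#       {"input": "Find recent papers on retrieval-augmented generation", "chat_history": []}
#   )
#   print(result["output"])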
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/retrieval-agent-fireworks/README.md | # retrieval-agent-fireworks
This package uses open source models hosted on FireworksAI to do retrieval using an agent architecture. By default, this does retrieval over Arxiv.
We will use `Mixtral8x7b-instruct-v0.1`, which this blog post shows can yield reasonable
results with function calling even though it is not fine-tuned for the task: https://huggingface.co/blog/open-source-llms-as-agents
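The exposed runnable takes a single `input` string. A minimal client-side sketch once the server is running (the route name assumes the `add_routes` snippet shown below):

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/retrieval-agent-fireworks")
result = runnable.invoke({"input": "What are some recent papers on mixture-of-experts models?"})
```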
## Environment Setup
There are various great ways to run OSS models. We will use FireworksAI as an easy way to run the models. See [here](https://python.langchain.com/docs/integrations/providers/fireworks) for more information.
Set the `FIREWORKS_API_KEY` environment variable to access Fireworks.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package retrieval-agent-fireworks
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add retrieval-agent-fireworks
```
And add the following code to your `server.py` file:
```python
from retrieval_agent_fireworks import chain as retrieval_agent_fireworks_chain
add_routes(app, retrieval_agent_fireworks_chain, path="/retrieval-agent-fireworks")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/retrieval-agent-fireworks/playground](http://127.0.0.1:8000/retrieval-agent-fireworks/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/retrieval-agent-fireworks")
``` | Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/retrieval-agent-fireworks/retrieval_agent_fireworks/__init__.py | from retrieval_agent_fireworks.chain import agent_executor
__all__ = ["agent_executor"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/retrieval-agent-fireworks/retrieval_agent_fireworks/chain.py | from typing import List
from langchain import hub
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import ReActJsonSingleInputOutputParser
from langchain.callbacks.manager import CallbackManagerForRetrieverRun
from langchain.tools.render import render_text_description
from langchain.tools.retriever import create_retriever_tool
from langchain_community.chat_models.fireworks import ChatFireworks
from langchain_community.utilities.arxiv import ArxivAPIWrapper
from langchain_core.documents import Document
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.retrievers import BaseRetriever
MODEL_ID = "accounts/fireworks/models/mixtral-8x7b-instruct"
class ArxivRetriever(BaseRetriever, ArxivAPIWrapper):
"""`Arxiv` retriever.
It wraps load() to get_relevant_documents().
It uses all ArxivAPIWrapper arguments without any change.
"""
get_full_documents: bool = False
def _get_relevant_documents(
self, query: str, *, run_manager: CallbackManagerForRetrieverRun
) -> List[Document]:
try:
if self.is_arxiv_identifier(query):
results = self.arxiv_search(
id_list=query.split(),
max_results=self.top_k_results,
).results()
else:
results = self.arxiv_search( # type: ignore
query[: self.ARXIV_MAX_QUERY_LENGTH], max_results=self.top_k_results
).results()
except self.arxiv_exceptions as ex:
return [Document(page_content=f"Arxiv exception: {ex}")]
docs = [
Document(
page_content=result.summary,
metadata={
"Published": result.updated.date(),
"Title": result.title,
"Authors": ", ".join(a.name for a in result.authors),
},
)
for result in results
]
return docs
# Set up tool(s)
description = (
"A wrapper around Arxiv.org "
"Useful for when you need to answer questions about Physics, Mathematics, "
"Computer Science, Quantitative Biology, Quantitative Finance, Statistics, "
"Electrical Engineering, and Economics "
"from scientific articles on arxiv.org. "
"Input should be a search query."
)
arxiv_tool = create_retriever_tool(ArxivRetriever(), "arxiv", description)
tools = [arxiv_tool]
# Set up LLM
llm = ChatFireworks(
model=MODEL_ID,
model_kwargs={
"temperature": 0,
"max_tokens": 2048,
"top_p": 1,
},
cache=True,
)
# setup ReAct style prompt
prompt = hub.pull("hwchase17/react-json")
prompt = prompt.partial(
tools=render_text_description(tools),
tool_names=", ".join([t.name for t in tools]),
)
# define the agent
model_with_stop = llm.bind(stop=["\nObservation"])
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
}
| prompt
| model_with_stop
| ReActJsonSingleInputOutputParser()
)
class InputType(BaseModel):
input: str
# instantiate AgentExecutor
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
verbose=True,
handle_parsing_errors=True,
).with_types(input_type=InputType)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/research-assistant/README.md | # research-assistant
This template implements a version of
[GPT Researcher](https://github.com/assafelovic/gpt-researcher) that you can use
as a starting point for a research agent.
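The chain takes a `question` and returns a long-form research report. A minimal client-side sketch once the server is running (the route name assumes the `add_routes` snippet shown below):

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/research-assistant")
report = runnable.invoke({"question": "How does agent memory work?"})
print(report)
```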
## Environment Setup
The default template relies on ChatOpenAI and DuckDuckGo, so you will need the
following environment variable:
- `OPENAI_API_KEY`
And to use the Tavily LLM-optimized search engine, you will need:
- `TAVILY_API_KEY`
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package research-assistant
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add research-assistant
```
And add the following code to your `server.py` file:
```python
from research_assistant import chain as research_assistant_chain
add_routes(app, research_assistant_chain, path="/research-assistant")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/research-assistant/playground](http://127.0.0.1:8000/research-assistant/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/research-assistant")
``` | Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/research-assistant/research_assistant/__init__.py | from research_assistant.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/research-assistant/research_assistant/chain.py | from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnablePassthrough
from research_assistant.search.web import chain as search_chain
from research_assistant.writer import chain as writer_chain
chain_notypes = (
RunnablePassthrough().assign(research_summary=search_chain) | writer_chain
)
class InputType(BaseModel):
question: str
chain = chain_notypes.with_types(input_type=InputType)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/research-assistant/research_assistant/writer.py | from langchain_community.chat_models import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import ConfigurableField
WRITER_SYSTEM_PROMPT = "You are an AI critical thinker research assistant. Your sole purpose is to write well written, critically acclaimed, objective and structured reports on given text." # noqa: E501
# Report prompts from https://github.com/assafelovic/gpt-researcher/blob/master/gpt_researcher/master/prompts.py
RESEARCH_REPORT_TEMPLATE = """Information:
--------
{research_summary}
--------
Using the above information, answer the following question or topic: "{question}" in a detailed report -- \
The report should focus on the answer to the question, should be well structured, informative, \
in depth, with facts and numbers if available and a minimum of 1,200 words.
You should strive to write the report as long as you can using all relevant and necessary information provided.
You must write the report with markdown syntax.
You MUST determine your own concrete and valid opinion based on the given information. Do NOT defer to general and meaningless conclusions.
Write all used source urls at the end of the report, and make sure to not add duplicated sources, but only one reference for each.
You must write the report in APA format.
Please do your best, this is very important to my career.""" # noqa: E501
RESOURCE_REPORT_TEMPLATE = """Information:
--------
{research_summary}
--------
Based on the above information, generate a bibliography recommendation report for the following question or topic: "{question}". \
The report should provide a detailed analysis of each recommended resource, explaining how each source can contribute to finding answers to the research question. \
Focus on the relevance, reliability, and significance of each source. \
Ensure that the report is well-structured, informative, in-depth, and follows Markdown syntax. \
Include relevant facts, figures, and numbers whenever available. \
The report should have a minimum length of 1,200 words.
Please do your best, this is very important to my career.""" # noqa: E501
OUTLINE_REPORT_TEMPLATE = """Information:
--------
{research_summary}
--------
Using the above information, generate an outline for a research report in Markdown syntax for the following question or topic: "{question}". \
The outline should provide a well-structured framework for the research report, including the main sections, subsections, and key points to be covered. \
The research report should be detailed, informative, in-depth, and a minimum of 1,200 words. \
Use appropriate Markdown syntax to format the outline and ensure readability.
Please do your best, this is very important to my career.""" # noqa: E501
model = ChatOpenAI(temperature=0)
prompt = ChatPromptTemplate.from_messages(
[
("system", WRITER_SYSTEM_PROMPT),
("user", RESEARCH_REPORT_TEMPLATE),
]
).configurable_alternatives(
ConfigurableField("report_type"),
default_key="research_report",
resource_report=ChatPromptTemplate.from_messages(
[
("system", WRITER_SYSTEM_PROMPT),
("user", RESOURCE_REPORT_TEMPLATE),
]
),
outline_report=ChatPromptTemplate.from_messages(
[
("system", WRITER_SYSTEM_PROMPT),
("user", OUTLINE_REPORT_TEMPLATE),
]
),
)
chain = prompt | model | StrOutputParser()
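# Example usage (a hedged sketch, not part of the template): the report type is a
# configurable alternative, so callers can select it per invocation, e.g.:
#
#   outline = chain.with_config(
#       configurable={"report_type": "outline_report"}
#   ).invoke({"research_summary": "...", "question": "How does agent memory work?"})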
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/research-assistant/research_assistant/search/web.py | import json
from typing import Any
import requests
from bs4 import BeautifulSoup
from langchain.retrievers.tavily_search_api import TavilySearchAPIRetriever
from langchain_community.chat_models import ChatOpenAI
from langchain_community.utilities import DuckDuckGoSearchAPIWrapper
from langchain_core.messages import SystemMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import (
ConfigurableField,
Runnable,
RunnableLambda,
RunnableParallel,
RunnablePassthrough,
)
RESULTS_PER_QUESTION = 3
ddg_search = DuckDuckGoSearchAPIWrapper()
def scrape_text(url: str):
# Send a GET request to the webpage
try:
response = requests.get(url)
# Check if the request was successful
if response.status_code == 200:
# Parse the content of the request with BeautifulSoup
soup = BeautifulSoup(response.text, "html.parser")
# Extract all text from the webpage
page_text = soup.get_text(separator=" ", strip=True)
# Print the extracted text
return page_text
else:
return f"Failed to retrieve the webpage: Status code {response.status_code}"
except Exception as e:
print(e)
return f"Failed to retrieve the webpage: {e}"
def web_search(query: str, num_results: int):
results = ddg_search.results(query, num_results)
return [r["link"] for r in results]
get_links: Runnable[Any, Any] = (
RunnablePassthrough()
| RunnableLambda(
lambda x: [
{"url": url, "question": x["question"]}
for url in web_search(query=x["question"], num_results=RESULTS_PER_QUESTION)
]
)
).configurable_alternatives(
ConfigurableField("search_engine"),
default_key="duckduckgo",
tavily=RunnableLambda(lambda x: x["question"])
| RunnableParallel(
{
"question": RunnablePassthrough(),
"results": TavilySearchAPIRetriever(k=RESULTS_PER_QUESTION),
}
)
| RunnableLambda(
lambda x: [
{"url": result.metadata["source"], "question": x["question"]}
for result in x["results"]
]
),
)
SEARCH_PROMPT = ChatPromptTemplate.from_messages(
[
("system", "{agent_prompt}"),
(
"user",
"Write 3 google search queries to search online that form an "
"objective opinion from the following: {question}\n"
"You must respond with a list of strings in the following format: "
'["query 1", "query 2", "query 3"].',
),
]
)
AUTO_AGENT_INSTRUCTIONS = """
This task involves researching a given topic, regardless of its complexity or the availability of a definitive answer. The research is conducted by a specific agent, defined by its type and role, with each agent requiring distinct instructions.
Agent
The agent is determined by the field of the topic and the specific name of the agent that could be utilized to research the topic provided. Agents are categorized by their area of expertise, and each agent type is associated with a corresponding emoji.
examples:
task: "should I invest in apple stocks?"
response:
{
"agent": "💰 Finance Agent",
"agent_role_prompt: "You are a seasoned finance analyst AI assistant. Your primary goal is to compose comprehensive, astute, impartial, and methodically arranged financial reports based on provided data and trends."
}
task: "could reselling sneakers become profitable?"
response:
{
"agent": "📈 Business Analyst Agent",
"agent_role_prompt": "You are an experienced AI business analyst assistant. Your main objective is to produce comprehensive, insightful, impartial, and systematically structured business reports based on provided business data, market trends, and strategic analysis."
}
task: "what are the most interesting sites in Tel Aviv?"
response:
{
"agent: "🌍 Travel Agent",
"agent_role_prompt": "You are a world-travelled AI tour guide assistant. Your main purpose is to draft engaging, insightful, unbiased, and well-structured travel reports on given locations, including history, attractions, and cultural insights."
}
""" # noqa: E501
CHOOSE_AGENT_PROMPT = ChatPromptTemplate.from_messages(
[SystemMessage(content=AUTO_AGENT_INSTRUCTIONS), ("user", "task: {task}")]
)
SUMMARY_TEMPLATE = """{text}
-----------
Using the above text, answer in short the following question:
> {question}
-----------
if the question cannot be answered using the text, simply summarize the text. Include all factual information, numbers, stats etc if available.""" # noqa: E501
SUMMARY_PROMPT = ChatPromptTemplate.from_template(SUMMARY_TEMPLATE)
scrape_and_summarize: Runnable[Any, Any] = (
RunnableParallel(
{
"question": lambda x: x["question"],
"text": lambda x: scrape_text(x["url"])[:10000],
"url": lambda x: x["url"],
}
)
| RunnableParallel(
{
"summary": SUMMARY_PROMPT | ChatOpenAI(temperature=0) | StrOutputParser(),
"url": lambda x: x["url"],
}
)
| RunnableLambda(lambda x: f"Source Url: {x['url']}\nSummary: {x['summary']}")
)
multi_search = get_links | scrape_and_summarize.map() | (lambda x: "\n".join(x))
def load_json(s):
try:
return json.loads(s)
except Exception:
return {}
search_query = SEARCH_PROMPT | ChatOpenAI(temperature=0) | StrOutputParser() | load_json
choose_agent = (
CHOOSE_AGENT_PROMPT | ChatOpenAI(temperature=0) | StrOutputParser() | load_json
)
get_search_queries = (
RunnablePassthrough().assign(
agent_prompt=RunnableParallel({"task": lambda x: x})
| choose_agent
| (lambda x: x.get("agent_role_prompt"))
)
| search_query
)
chain = (
get_search_queries
| (lambda x: [{"question": q} for q in x])
| multi_search.map()
| (lambda x: "\n\n".join(x))
)
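# Example usage (a hedged sketch, not part of the template): the search engine is a
# configurable alternative, so Tavily can be selected per invocation (assuming
# TAVILY_API_KEY is set):
#
#   summary = chain.with_config(
#       configurable={"search_engine": "tavily"}
#   ).invoke({"question": "How does agent memory work?"})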
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-weaviate/README.md |
# rag-weaviate
This template performs RAG with Weaviate.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Also, ensure the following environment variables are set:
* `WEAVIATE_ENVIRONMENT`
* `WEAVIATE_API_KEY`
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-weaviate
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-weaviate
```
And add the following code to your `server.py` file:
```python
from rag_weaviate import chain as rag_weaviate_chain
add_routes(app, rag_weaviate_chain, path="/rag-weaviate")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-weaviate/playground](http://127.0.0.1:8000/rag-weaviate/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-weaviate")
```
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-weaviate/rag_weaviate.ipynb | {
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "8692a430",
"metadata": {},
"source": [
"# Run Template\n",
"\n",
"In `server.py`, set -\n",
"```\n",
"add_routes(app, chain_ext, path=\"/rag-weaviate\")\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "41db5e30",
"metadata": {},
"outputs": [],
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"rag_app_weaviate = RemoteRunnable(\"http://localhost:8000/rag-weaviate\")\n",
"rag_app_weaviate.invoke(\"How does agent memory work?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.11.6 64-bit",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
},
"vscode": {
"interpreter": {
"hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-weaviate/rag_weaviate/__init__.py | from rag_weaviate.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-weaviate/rag_weaviate/chain.py | import os
from langchain_community.chat_models import ChatOpenAI
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import Weaviate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_text_splitters import RecursiveCharacterTextSplitter
if os.environ.get("WEAVIATE_API_KEY", None) is None:
raise Exception("Missing `WEAVIATE_API_KEY` environment variable.")
if os.environ.get("WEAVIATE_ENVIRONMENT", None) is None:
raise Exception("Missing `WEAVIATE_ENVIRONMENT` environment variable.")
WEAVIATE_INDEX_NAME = os.environ.get("WEAVIATE_INDEX", "langchain-test")
### Ingest code - you may need to run this the first time
# Load
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
data = loader.load()
# # Split
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)
# # Add to vectorDB
# vectorstore = Weaviate.from_documents(
# documents=all_splits, embedding=OpenAIEmbeddings(), index_name=WEAVIATE_INDEX_NAME
# )
# retriever = vectorstore.as_retriever()
vectorstore = Weaviate.from_existing_index(WEAVIATE_INDEX_NAME, OpenAIEmbeddings())
retriever = vectorstore.as_retriever()
# RAG prompt
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
# RAG
model = ChatOpenAI()
chain = (
RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
| prompt
| model
| StrOutputParser()
)
# Add typing for input
class Question(BaseModel):
__root__: str
chain = chain.with_types(input_type=Question)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-vectara/README.md |
# rag-vectara
This template performs RAG with Vectara.
## Environment Setup
Ensure the following environment variables are set:
* `VECTARA_CUSTOMER_ID`
* `VECTARA_CORPUS_ID`
* `VECTARA_API_KEY`
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-vectara
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-vectara
```
And add the following code to your `server.py` file:
```python
from rag_vectara import chain as rag_vectara_chain
add_routes(app, rag_vectara_chain, path="/rag-vectara")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "vectara-demo"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-vectara/playground](http://127.0.0.1:8000/rag-vectara/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-vectara")
```
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-vectara/rag_vectara.ipynb | {
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "8692a430",
"metadata": {},
"source": [
"# Run Template\n",
"\n",
"In `server.py`, set -\n",
"```\n",
"add_routes(app, chain_ext, path=\"/rag-vectara\")\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "41db5e30",
"metadata": {},
"outputs": [],
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"rag_app_vectara = RemoteRunnable(\"http://localhost:8000/rag-vectara\")\n",
"rag_app_vectara.invoke(\"How does agent memory work?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.11.6 64-bit",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
},
"vscode": {
"interpreter": {
"hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-vectara/rag_vectara/__init__.py | from rag_vectara.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-vectara/rag_vectara/chain.py | import os
from langchain_community.vectorstores import Vectara
from langchain_community.vectorstores.vectara import SummaryConfig, VectaraQueryConfig
from langchain_core.pydantic_v1 import BaseModel
if os.environ.get("VECTARA_CUSTOMER_ID", None) is None:
raise Exception("Missing `VECTARA_CUSTOMER_ID` environment variable.")
if os.environ.get("VECTARA_CORPUS_ID", None) is None:
raise Exception("Missing `VECTARA_CORPUS_ID` environment variable.")
if os.environ.get("VECTARA_API_KEY", None) is None:
raise Exception("Missing `VECTARA_API_KEY` environment variable.")
# Setup the Vectara vectorstore with your Corpus ID and API Key
vectara = Vectara()
# Define the query configuration:
summary_config = SummaryConfig(is_enabled=True, max_results=5, response_lang="eng")
config = VectaraQueryConfig(k=10, lambda_val=0.005, summary_config=summary_config)
rag = Vectara().as_rag(config)
# Add typing for input
class Question(BaseModel):
__root__: str
chain = rag.with_types(input_type=Question)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-vectara-multiquery/README.md |
# rag-vectara-multiquery
This template performs multi-query RAG with Vectara.
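Under the hood, an LLM generates several rephrasings of the user question and the union of their Vectara results is used as context. A condensed sketch of that retrieval step (assuming the environment variables described below are set; the full pipeline lives in `chain.py`):

```python
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_community.vectorstores import Vectara
from langchain_openai.chat_models import ChatOpenAI

# The multi-query retriever rewrites the question several ways and merges results.
retriever = MultiQueryRetriever.from_llm(
    retriever=Vectara().as_retriever(), llm=ChatOpenAI(temperature=0)
)
docs = retriever.get_relevant_documents("How does agent memory work?")
```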
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models for the multi-query processing.
Also, ensure the following environment variables are set:
* `VECTARA_CUSTOMER_ID`
* `VECTARA_CORPUS_ID`
* `VECTARA_API_KEY`
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-vectara-multiquery
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-vectara-multiquery
```
And add the following code to your `server.py` file:
```python
from rag_vectara import chain as rag_vectara_chain
add_routes(app, rag_vectara_chain, path="/rag-vectara-multiquery")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "vectara-demo"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-vectara-multiquery/playground](http://127.0.0.1:8000/rag-vectara-multiquery/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-vectara-multiquery")
```
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-vectara-multiquery/rag_vectara_multiquery.ipynb | {
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "8692a430",
"metadata": {},
"source": [
"# Run Template\n",
"\n",
"In `server.py`, set -\n",
"```\n",
"add_routes(app, chain_ext, path=\"/rag-vectara-multiquery\")\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "41db5e30",
"metadata": {},
"outputs": [],
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"rag_app_vectara = RemoteRunnable(\"http://localhost:8000/rag-vectara-multiquery\")\n",
"rag_app_vectara.invoke(\"How does agent memory work?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.11.6 64-bit",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
},
"vscode": {
"interpreter": {
"hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-vectara-multiquery/rag_vectara_multiquery/__init__.py | from rag_vectara_multiquery.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-vectara-multiquery/rag_vectara_multiquery/chain.py | import os
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_community.vectorstores import Vectara
from langchain_community.vectorstores.vectara import SummaryConfig, VectaraQueryConfig
from langchain_core.output_parsers import StrOutputParser
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_openai.chat_models import ChatOpenAI
if os.environ.get("VECTARA_CUSTOMER_ID", None) is None:
raise Exception("Missing `VECTARA_CUSTOMER_ID` environment variable.")
if os.environ.get("VECTARA_CORPUS_ID", None) is None:
raise Exception("Missing `VECTARA_CORPUS_ID` environment variable.")
if os.environ.get("VECTARA_API_KEY", None) is None:
raise Exception("Missing `VECTARA_API_KEY` environment variable.")
# Setup the Vectara retriever with your Corpus ID and API Key
vectara = Vectara()
# Define the query configuration:
summary_config = SummaryConfig(is_enabled=True, max_results=5, response_lang="eng")
config = VectaraQueryConfig(k=10, lambda_val=0.005, summary_config=summary_config)
# Setup the Multi-query retriever
llm = ChatOpenAI(temperature=0)
retriever = MultiQueryRetriever.from_llm(
retriever=vectara.as_retriever(config=config), llm=llm
)
# Setup RAG pipeline with multi-query.
# We extract the summary from the RAG output, which is the last document in the list.
# Note that if you want to extract the citation information, you can use res["context"][:-1]
chain = (
RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
| (lambda res: res["context"][-1].page_content)
| StrOutputParser()
)
# Add typing for input
class Question(BaseModel):
__root__: str
chain = chain.with_types(input_type=Question)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-timescale-hybrid-search-time/README.md | # RAG with Timescale Vector using hybrid search
This template shows how to use Timescale Vector with the self-query retriever to perform hybrid search on similarity and time.
This is useful any time your data has a strong time-based component. Some examples of such data are:
- News articles (politics, business, etc)
- Blog posts, documentation or other published material (public or private).
- Social media posts
- Changelogs of any kind
- Messages
Such items are often searched by both similarity and time. For example: Show me all news about Toyota trucks from 2022.
[Timescale Vector](https://www.timescale.com/ai?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) provides superior performance when searching for embeddings within a particular timeframe by leveraging automatic table partitioning to isolate data for particular time-ranges.
LangChain's self-query retriever can deduce time ranges (as well as other search criteria) from the text of user queries.
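For example, once the template is running, a question that mentions a time period is translated into both a similarity query and a time filter. A hedged client-side sketch (the route name assumes the `add_routes` snippet in the usage section below; the question is illustrative):

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/rag-timescale-hybrid-search")
# "in August 2023" is deduced by the self-query retriever as a time-range filter.
runnable.invoke("What improvements were made to continuous aggregates in August 2023?")
```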
## What is Timescale Vector?
**[Timescale Vector](https://www.timescale.com/ai?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) is PostgreSQL++ for AI applications.**
Timescale Vector enables you to efficiently store and query billions of vector embeddings in `PostgreSQL`.
- Enhances `pgvector` with faster and more accurate similarity search on 1B+ vectors via a DiskANN-inspired indexing algorithm.
- Enables fast time-based vector search via automatic time-based partitioning and indexing.
- Provides a familiar SQL interface for querying vector embeddings and relational data.
Timescale Vector is cloud PostgreSQL for AI that scales with you from POC to production:
- Simplifies operations by enabling you to store relational metadata, vector embeddings, and time-series data in a single database.
- Benefits from a rock-solid PostgreSQL foundation with enterprise-grade features like streaming backups and replication, high availability, and row-level security.
- Enables a worry-free experience with enterprise-grade security and compliance.
### How to access Timescale Vector
Timescale Vector is available on [Timescale](https://www.timescale.com/products?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral), the cloud PostgreSQL platform. (There is no self-hosted version at this time.)
- LangChain users get a 90-day free trial for Timescale Vector.
- To get started, [sign up](https://console.cloud.timescale.com/signup?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) for Timescale, create a new database, and follow this notebook!
- See the [installation instructions](https://github.com/timescale/python-vector) for more details on using Timescale Vector in python.
## Environment Setup
This template uses Timescale Vector as a vectorstore and requires that the `TIMESCALE_SERVICE_URL` environment variable be set. Sign up for a 90-day trial [here](https://console.cloud.timescale.com/signup?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) if you don't yet have an account.
To load the sample dataset, set `LOAD_SAMPLE_DATA=1`. To load your own dataset see the section below.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-timescale-hybrid-search-time
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-timescale-hybrid-search-time
```
And add the following code to your `server.py` file:
```python
from rag_timescale_hybrid_search_time.chain import chain as rag_timescale_hybrid_search_chain
add_routes(app, rag_timescale_hybrid_search_chain, path="/rag-timescale-hybrid-search")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-timescale-hybrid-search/playground](http://127.0.0.1:8000/rag-timescale-hybrid-search/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-timescale-hybrid-search")
```
## Loading your own dataset
To load your own dataset you will have to modify the code in the `DATASET SPECIFIC CODE` section of `chain.py`.
This code defines the name of the collection, how to load the data, and the human-language description of both the
contents of the collection and all of the metadata. The human-language descriptions are used by the self-query retriever
to help the LLM convert the question into filters on the metadata when searching the data in Timescale Vector. | Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-timescale-hybrid-search-time/rag_timescale_hybrid_search_time/__init__.py | from rag_timescale_hybrid_search_time.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-timescale-hybrid-search-time/rag_timescale_hybrid_search_time/chain.py | # ruff: noqa: E501
import os
from datetime import timedelta
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings.openai import OpenAIEmbeddings
from langchain_community.llms import OpenAI
from langchain_community.vectorstores.timescalevector import TimescaleVector
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from .load_sample_dataset import load_ts_git_dataset
# to enable debug uncomment the following lines:
# from langchain.globals import set_debug
# set_debug(True)
# from dotenv import find_dotenv, load_dotenv
# _ = load_dotenv(find_dotenv())
if os.environ.get("TIMESCALE_SERVICE_URL", None) is None:
raise Exception("Missing `TIMESCALE_SERVICE_URL` environment variable.")
SERVICE_URL = os.environ["TIMESCALE_SERVICE_URL"]
LOAD_SAMPLE_DATA = os.environ.get("LOAD_SAMPLE_DATA", False)
# DATASET SPECIFIC CODE
# Load the sample dataset. You will have to change this to load your own dataset.
collection_name = "timescale_commits"
partition_interval = timedelta(days=7)
if LOAD_SAMPLE_DATA:
load_ts_git_dataset(
SERVICE_URL,
collection_name=collection_name,
num_records=500,
partition_interval=partition_interval,
)
# This will change depending on the metadata stored in your dataset.
document_content_description = "The git log commit summary containing the commit hash, author, date of commit, change summary and change details"
metadata_field_info = [
AttributeInfo(
name="id",
description="A UUID v1 generated from the date of the commit",
type="uuid",
),
AttributeInfo(
# This is a special attribute representing the timestamp of the UUID.
name="__uuid_timestamp",
description="The timestamp of the commit. Specify in YYYY-MM-DDTHH::MM:SSZ format",
type="datetime.datetime",
),
AttributeInfo(
name="author_name",
description="The name of the author of the commit",
type="string",
),
AttributeInfo(
name="author_email",
description="The email address of the author of the commit",
type="string",
),
]
# END DATASET SPECIFIC CODE
embeddings = OpenAIEmbeddings()
vectorstore = TimescaleVector(
embedding=embeddings,
collection_name=collection_name,
service_url=SERVICE_URL,
time_partition_interval=partition_interval,
)
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True,
)
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI(temperature=0, model="gpt-4")
# RAG chain
chain = (
RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
| prompt
| model
| StrOutputParser()
)
class Question(BaseModel):
__root__: str
chain = chain.with_types(input_type=Question)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-timescale-hybrid-search-time/rag_timescale_hybrid_search_time/load_sample_dataset.py | import os
import tempfile
from datetime import datetime, timedelta
import requests
from langchain_community.document_loaders import JSONLoader
from langchain_community.embeddings.openai import OpenAIEmbeddings
from langchain_community.vectorstores.timescalevector import TimescaleVector
from langchain_text_splitters.character import CharacterTextSplitter
from timescale_vector import client
def parse_date(date_string: str) -> datetime:
if date_string is None:
return None
time_format = "%a %b %d %H:%M:%S %Y %z"
return datetime.strptime(date_string, time_format)
def extract_metadata(record: dict, metadata: dict) -> dict:
dt = parse_date(record["date"])
metadata["id"] = str(client.uuid_from_time(dt))
if dt is not None:
metadata["date"] = dt.isoformat()
else:
metadata["date"] = None
metadata["author"] = record["author"]
metadata["commit_hash"] = record["commit"]
return metadata
def load_ts_git_dataset(
service_url,
collection_name="timescale_commits",
num_records: int = 500,
partition_interval=timedelta(days=7),
):
json_url = "https://s3.amazonaws.com/assets.timescale.com/ai/ts_git_log.json"
tmp_file = "ts_git_log.json"
temp_dir = tempfile.gettempdir()
json_file_path = os.path.join(temp_dir, tmp_file)
if not os.path.exists(json_file_path):
response = requests.get(json_url)
if response.status_code == 200:
with open(json_file_path, "w") as json_file:
json_file.write(response.text)
else:
print(f"Failed to download JSON file. Status code: {response.status_code}")
loader = JSONLoader(
file_path=json_file_path,
jq_schema=".commit_history[]",
text_content=False,
metadata_func=extract_metadata,
)
documents = loader.load()
# Remove documents with None dates
documents = [doc for doc in documents if doc.metadata["date"] is not None]
if num_records > 0:
documents = documents[:num_records]
# Split the documents into chunks for embedding
text_splitter = CharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200,
)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
# Create a Timescale Vector instance from the collection of documents
TimescaleVector.from_documents(
embedding=embeddings,
ids=[doc.metadata["id"] for doc in docs],
documents=docs,
collection_name=collection_name,
service_url=service_url,
time_partition_interval=partition_interval,
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-timescale-conversation/README.md |
# rag-timescale-conversation
This template is used for [conversational](https://python.langchain.com/docs/expression_language/cookbook/retrieval#conversational-retrieval-chain) [retrieval](https://python.langchain.com/docs/use_cases/question_answering/), which is one of the most popular LLM use-cases.
It passes both a conversation history and retrieved documents into an LLM for synthesis.
## Environment Setup
This template uses Timescale Vector as a vectorstore and requires that the `TIMESCALE_SERVICE_URL` environment variable be set. Sign up for a 90-day trial [here](https://console.cloud.timescale.com/signup?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) if you don't yet have an account.
To load the sample dataset, set `LOAD_SAMPLE_DATA=1`. To load your own dataset see the section below.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-timescale-conversation
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-timescale-conversation
```
And add the following code to your `server.py` file:
```python
from rag_timescale_conversation import chain as rag_timescale_conversation_chain
add_routes(app, rag_timescale_conversation_chain, path="/rag-timescale_conversation")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-timescale-conversation/playground](http://127.0.0.1:8000/rag-timescale-conversation/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-timescale-conversation")
```
See the `rag_conversation.ipynb` notebook for example usage.
## Loading your own dataset
To load your own dataset you will have to create a `load_dataset` function. You can see an example in the
`load_ts_git_dataset` function defined in the `load_sample_dataset.py` file. You can then run this as a
standalone function (e.g. in a bash script) or add it to `chain.py` (but then you should run it just once). | Wed, 26 Jun 2024 13:15:51 GMT |
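For example, a hypothetical standalone loader script (the file name and module path are illustrative; the helper's signature mirrors `load_ts_git_dataset` in this template's `load_sample_dataset.py`):

```python
# load_data.py -- run once before serving, e.g. `python load_data.py`
import os

from rag_timescale_conversation.load_sample_dataset import load_ts_git_dataset

# Ingest the sample git-commit dataset into the collection used by the chain.
load_ts_git_dataset(
    os.environ["TIMESCALE_SERVICE_URL"],
    collection_name="timescale_commits",
    num_records=500,
)
```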
Root URLs scraped in this dataset:
https://js.langchain.com/v0.2/
https://python.langchain.com/v0.2/
https://github.com/langchain-ai/langchain/tree/master/libs
https://github.com/langchain-ai/langchain/tree/master/cookbook
https://github.com/langchain-ai/langchain/tree/master/templates
https://github.com/langchain-ai/langchain/tree/master/docs/docs/how_to