# Quickstart Guide

This tutorial gives you a quick walkthrough of building an end-to-end language model application with LangChain.

## Installation

To get started, install LangChain with the following command:

```bash
pip install langchain
```

## Environment Setup

Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we will be using OpenAI's APIs, so we first need to install their SDK:

```bash
pip install openai
```

We then need to set the environment variable in the terminal:

```bash
export OPENAI_API_KEY="..."
```

Alternatively, you can do this from inside a Jupyter notebook (or Python script):

```python
import os
os.environ["OPENAI_API_KEY"] = "..."
```

## Building a Language Model Application: LLMs

Now that we have installed LangChain and set up our environment, we can start building our language model application.

LangChain provides many modules that can be used to build language model applications. Modules can be combined to create more complex applications, or used individually for simple applications.

### LLMs: Get predictions from a language model

The most basic building block of LangChain is calling an LLM on some input. Let's walk through a simple example of how to do this. For this purpose, let's pretend we are building a service that generates a company name based on what the company makes.

In order to do this, we first need to import the LLM wrapper:

```python
from langchain.llms import OpenAI
```

We can then initialize the wrapper with any arguments. In this example, we probably want the outputs to be more random, so we'll initialize it with a high temperature.
```python
llm = OpenAI(temperature=0.9)
```

We can now call it on some input!

```python
text = "What would be a good company name for a company that makes colorful socks?"
print(llm(text))
```

```
Feetful of Fun
```

For more details on how to use LLMs within LangChain, see the LLM getting started guide.

### Prompt Templates: Manage prompts for LLMs

Calling an LLM is a great first step, but it's just the beginning. Normally when you use an LLM in an application, you are not sending user input directly to the LLM. Instead, you are probably taking user input, constructing a prompt, and then sending that to the LLM.

For example, in the previous example, the text we passed in was hardcoded to ask for a name for a company that made colorful socks. In this imaginary service, what we would want to do is take only the user input describing what the company does, and then format the prompt with that information.

This is easy to do with LangChain! First let's define the prompt template:

```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
```

Let's now see how this works! We can call the `.format` method to format it:

```python
print(prompt.format(product="colorful socks"))
```

```
What is a good name for a company that makes colorful socks?
```

For more details, check out the getting started guide for prompts.
### Chains: Combine LLMs and prompts in multi-step workflows

Up until now, we've worked with the PromptTemplate and LLM primitives by themselves. But of course, a real application is not just one primitive, but rather a combination of them.

A chain in LangChain is made up of links, which can be either primitives like LLMs or other chains. The most core type of chain is an LLMChain, which consists of a PromptTemplate and an LLM.

Extending the previous example, we can construct an LLMChain that takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM:

```python
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
```

We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM:

```python
from langchain.chains import LLMChain
chain = LLMChain(llm=llm, prompt=prompt)
```

Now we can run that chain, specifying only the product!

```python
chain.run("colorful socks")
# -> '\n\nSocktastic!'
```

There we go! There's the first chain: an LLMChain. This is one of the simpler types of chains, but understanding how it works will set you up well for working with more complex chains. For more details, check out the getting started guide for chains.
### Agents: Dynamically Call Chains Based on User Input

So far the chains we've looked at run in a predetermined order. Agents no longer do: they use an LLM to determine which actions to take and in what order. An action can be either using a tool and observing its output, or returning to the user.

When used correctly, agents can be extremely powerful. In this tutorial, we show you how to easily use agents through the simplest, highest-level API.

In order to load agents, you should understand the following concepts:

- **Tool**: A function that performs a specific duty. This can be things like: Google Search, database lookup, Python REPL, other chains. The interface for a tool is currently a function that is expected to take a string as input and return a string as output.
- **LLM**: The language model powering the agent.
- **Agent**: The agent to use. This should be a string that references a supported agent class. Because this notebook focuses on the simplest, highest-level API, this only covers using the standard supported agents. If you want to implement a custom agent, see the documentation for custom agents (coming soon).

**Agents**: For a list of supported agents and their specifications, see here.

**Tools**: For a list of predefined tools and their specifications, see here.

For this example, you will also need to install the SerpAPI Python package:

```bash
pip install google-search-results
```

And set the appropriate environment variable:

```python
import os
os.environ["SERPAPI_API_KEY"] = "..."
```

Now we can get started!

```python
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI

# First, let's load the language model we're going to use to control the agent.
llm = OpenAI(temperature=0)

# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
tools = load_tools(["serpapi", "llm-math"], llm=llm)
```
```python
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

# Now let's test it out!
agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")
```

```
> Entering new AgentExecutor chain...
 I need to find the temperature first, then use the calculator to raise it to the .023 power.
Action: Search
Action Input: "High temperature in SF yesterday"
Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...
Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.
Action: Calculator
Action Input: 57^.023
Observation: Answer: 1.0974509573251117
Thought: I now know the final answer
Final Answer: The high temperature in SF yesterday in Fahrenheit raised to the .023 power is 1.0974509573251117.

> Finished chain.
```
### Memory: Add State to Chains and Agents

So far, all the chains and agents we've gone through have been stateless. But often, you may want a chain or agent to have some concept of "memory" so that it can remember information about its previous interactions. The clearest and simplest example of this is designing a chatbot: you want it to remember previous messages so it can use that context to have a better conversation. This is a type of "short-term memory". On the more complex side, you could imagine a chain/agent remembering key pieces of information over time; that would be a form of "long-term memory". For more concrete ideas on the latter, see this awesome paper.

LangChain provides several specially created chains just for this purpose. This notebook walks through using one of those chains (the ConversationChain) with two different types of memory.

By default, the ConversationChain has a simple type of memory that remembers all previous inputs/outputs and adds them to the context that is passed. Let's take a look at using this chain (setting verbose=True so we can see the prompt).

```python
from langchain import OpenAI, ConversationChain

llm = OpenAI(temperature=0)
conversation = ConversationChain(llm=llm, verbose=True)

conversation.predict(input="Hi there!")
```

```
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hi there!
AI:

> Finished chain.
' Hello! How are you today?'
```

```python
conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
```

```
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi there!
AI:  Hello! How are you today?
Human: I'm doing well! Just having a conversation with an AI.
AI:

> Finished chain.
" That's great! What would you like to talk about?"
```
## Building a Language Model Application: Chat Models

Similarly, you can use chat models instead of LLMs. Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than exposing a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.

Chat model APIs are fairly new, so we are still figuring out the correct abstractions.

### Get Message Completions from a Chat Model

You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage (ChatMessage takes in an arbitrary role parameter). Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)

chat = ChatOpenAI(temperature=0)
```

You can get completions by passing in a single message:

```python
chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
```

You can also pass in multiple messages for OpenAI's gpt-3.5-turbo and gpt-4 models:

```python
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
```

You can go one step further and generate completions for multiple sets of messages using `generate`. This returns an LLMResult with an additional `message` parameter:
```python
batch_messages = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love programming.")
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love artificial intelligence.")
    ],
]
result = chat.generate(batch_messages)
result
```

```
LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}})
```

You can recover things like token usage from this LLMResult:

```python
result.llm_output['token_usage']
# -> {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}
```
### Chat Prompt Templates

Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's `format_prompt`; this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an LLM or chat model.

For convenience, there is a `from_template` method exposed on the template. If you were to use this template, this is what it would look like:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

chat = ChatOpenAI(temperature=0)

template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
```
### Chains with Chat Models

The LLMChain discussed in the above section can be used with chat models as well:

```python
from langchain.chat_models import ChatOpenAI
from langchain import LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

chat = ChatOpenAI(temperature=0)

template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
# -> "J'aime programmer."
```

### Agents with Chat Models

Agents can also be used with chat models; you can initialize one using AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION as the agent type.

```python
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI

# First, let's load the language model we're going to use to control the agent.
chat = ChatOpenAI(temperature=0)

# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

# Now let's test it out!
agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
```

```
> Entering new AgentExecutor chain...
Thought: I need to use a search engine to find Olivia Wilde's boyfriend and a calculator to raise his age to the 0.23 power.
Action:
{
  "action": "Search",
  "action_input": "Olivia Wilde boyfriend"
}
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought: I need to use a search engine to find Harry Styles' current age.
```
```
Action:
{
  "action": "Search",
  "action_input": "Harry Styles age"
}
Observation: 29 years
Thought: Now I need to calculate 29 raised to the 0.23 power.
Action:
{
  "action": "Calculator",
  "action_input": "29^0.23"
}
Observation: Answer: 2.169459462491557
Thought: I now know the final answer.
Final Answer: 2.169459462491557

> Finished chain.
'2.169459462491557'
```

### Memory: Add State to Chains and Agents

You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.

```python
from langchain.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate
)
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}")
])

llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)
```
```python
conversation.predict(input="Hi there!")
# -> 'Hello! How can I assist you today?'

conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
# -> "That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?"

conversation.predict(input="Tell me about yourself.")
# -> "Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?"
```
# Source code for langchain.python

```python
"""Mock Python REPL."""
import sys
from io import StringIO
from typing import Dict, Optional

from pydantic import BaseModel, Field


class PythonREPL(BaseModel):
    """Simulates a standalone Python REPL."""

    globals: Optional[Dict] = Field(default_factory=dict, alias="_globals")
    locals: Optional[Dict] = Field(default_factory=dict, alias="_locals")

    def run(self, command: str) -> str:
        """Run command with own globals/locals and returns anything printed."""
        old_stdout = sys.stdout
        sys.stdout = mystdout = StringIO()
        try:
            exec(command, self.globals, self.locals)
            sys.stdout = old_stdout
            output = mystdout.getvalue()
        except Exception as e:
            sys.stdout = old_stdout
            output = str(e)
        return output
```
# Source code for langchain.text_splitter

```python
"""Functionality for splitting text."""
from __future__ import annotations

import copy
import logging
from abc import ABC, abstractmethod
from typing import (
    AbstractSet,
    Any,
    Callable,
    Collection,
    Iterable,
    List,
    Literal,
    Optional,
    Union,
)

from langchain.docstore.document import Document

logger = logging.getLogger()


class TextSplitter(ABC):
    """Interface for splitting text into chunks."""

    def __init__(
        self,
        chunk_size: int = 4000,
        chunk_overlap: int = 200,
        length_function: Callable[[str], int] = len,
    ):
        """Create a new TextSplitter."""
        if chunk_overlap > chunk_size:
            raise ValueError(
                f"Got a larger chunk overlap ({chunk_overlap}) than chunk size "
                f"({chunk_size}), should be smaller."
            )
        self._chunk_size = chunk_size
        self._chunk_overlap = chunk_overlap
        self._length_function = length_function

    @abstractmethod
    def split_text(self, text: str) -> List[str]:
        """Split text into multiple components."""

    def create_documents(
        self, texts: List[str], metadatas: Optional[List[dict]] = None
    ) -> List[Document]:
        """Create documents from a list of texts."""
        _metadatas = metadatas or [{}] * len(texts)
        documents = []
        for i, text in enumerate(texts):
            for chunk in self.split_text(text):
                new_doc = Document(
                    page_content=chunk, metadata=copy.deepcopy(_metadatas[i])
                )
                documents.append(new_doc)
        return documents

    def split_documents(self, documents: List[Document]) -> List[Document]:
        """Split documents."""
        texts = [doc.page_content for doc in documents]
        metadatas = [doc.metadata for doc in documents]
        return self.create_documents(texts, metadatas)

    def _join_docs(self, docs: List[str], separator: str) -> Optional[str]:
        text = separator.join(docs)
        text = text.strip()
        if text == "":
            return None
        else:
            return text

    def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]:
        # We now want to combine these smaller pieces into medium size
        # chunks to send to the LLM.
        separator_len = self._length_function(separator)

        docs = []
        current_doc: List[str] = []
        total = 0
        for d in splits:
            _len = self._length_function(d)
            if (
                total + _len + (separator_len if len(current_doc) > 0 else 0)
                > self._chunk_size
            ):
                if total > self._chunk_size:
                    logger.warning(
                        f"Created a chunk of size {total}, "
                        f"which is longer than the specified {self._chunk_size}"
                    )
                if len(current_doc) > 0:
                    doc = self._join_docs(current_doc, separator)
                    if doc is not None:
                        docs.append(doc)
                    # Keep on popping if:
                    # - we have a larger chunk than in the chunk overlap
                    # - or if we still have any chunks and the length is long
                    while total > self._chunk_overlap or (
                        total + _len + (separator_len if len(current_doc) > 0 else 0)
                        > self._chunk_size
                        and total > 0
                    ):
                        total -= self._length_function(current_doc[0]) + (
                            separator_len if len(current_doc) > 1 else 0
                        )
                        current_doc = current_doc[1:]
            current_doc.append(d)
            total += _len + (separator_len if len(current_doc) > 1 else 0)
        doc = self._join_docs(current_doc, separator)
        if doc is not None:
            docs.append(doc)
        return docs

    @classmethod
    def from_huggingface_tokenizer(cls, tokenizer: Any, **kwargs: Any) -> TextSplitter:
        """Text splitter that uses HuggingFace tokenizer to count length."""
        try:
            from transformers import PreTrainedTokenizerBase

            if not isinstance(tokenizer, PreTrainedTokenizerBase):
                raise ValueError(
                    "Tokenizer received was not an instance of PreTrainedTokenizerBase"
                )

            def _huggingface_tokenizer_length(text: str) -> int:
                return len(tokenizer.encode(text))

        except ImportError:
            raise ValueError(
                "Could not import transformers python package. "
                "Please install it with `pip install transformers`."
            )
        return cls(length_function=_huggingface_tokenizer_length, **kwargs)

    @classmethod
    def from_tiktoken_encoder(
        cls,
        encoding_name: str = "gpt2",
        allowed_special: Union[Literal["all"], AbstractSet[str]] = set(),
        disallowed_special: Union[Literal["all"], Collection[str]] = "all",
        **kwargs: Any,
    ) -> TextSplitter:
        """Text splitter that uses tiktoken encoder to count length."""
        try:
            import tiktoken
        except ImportError:
            raise ValueError(
                "Could not import tiktoken python package. "
                "This is needed in order to calculate max_tokens_for_prompt. "
                "Please install it with `pip install tiktoken`."
            )
        # create a GPT-3 encoder instance
        enc = tiktoken.get_encoding(encoding_name)

        def _tiktoken_encoder(text: str, **kwargs: Any) -> int:
            return len(
                enc.encode(
                    text,
                    allowed_special=allowed_special,
                    disallowed_special=disallowed_special,
                    **kwargs,
                )
            )

        return cls(length_function=_tiktoken_encoder, **kwargs)


class CharacterTextSplitter(TextSplitter):
    """Implementation of splitting text that looks at characters."""

    def __init__(self, separator: str = "\n\n", **kwargs: Any):
        """Create a new TextSplitter."""
        super().__init__(**kwargs)
        self._separator = separator

    def split_text(self, text: str) -> List[str]:
        """Split incoming text and return chunks."""
        # First we naively split the large input into a bunch of smaller ones.
        if self._separator:
            splits = text.split(self._separator)
        else:
            splits = list(text)
        return self._merge_splits(splits, self._separator)


class TokenTextSplitter(TextSplitter):
    """Implementation of splitting text that looks at tokens."""

    def __init__(
        self,
        encoding_name: str = "gpt2",
        allowed_special: Union[Literal["all"], AbstractSet[str]] = set(),
        disallowed_special: Union[Literal["all"], Collection[str]] = "all",
        **kwargs: Any,
    ):
        """Create a new TextSplitter."""
        super().__init__(**kwargs)
        try:
            import tiktoken
        except ImportError:
            raise ValueError(
                "Could not import tiktoken python package. "
                "This is needed for TokenTextSplitter. "
                "Please install it with `pip install tiktoken`."
            )
        # create a GPT-3 encoder instance
        self._tokenizer = tiktoken.get_encoding(encoding_name)
        self._allowed_special = allowed_special
        self._disallowed_special = disallowed_special

    def split_text(self, text: str) -> List[str]:
        """Split incoming text and return chunks."""
        splits = []
        input_ids = self._tokenizer.encode(
            text,
            allowed_special=self._allowed_special,
            disallowed_special=self._disallowed_special,
        )
        start_idx = 0
        cur_idx = min(start_idx + self._chunk_size, len(input_ids))
        chunk_ids = input_ids[start_idx:cur_idx]
        while start_idx < len(input_ids):
            splits.append(self._tokenizer.decode(chunk_ids))
            start_idx += self._chunk_size - self._chunk_overlap
            cur_idx = min(start_idx + self._chunk_size, len(input_ids))
            chunk_ids = input_ids[start_idx:cur_idx]
        return splits


class RecursiveCharacterTextSplitter(TextSplitter):
    """Implementation of splitting text that looks at characters.

    Recursively tries to split by different characters to find one that works.
    """

    def __init__(self, separators: Optional[List[str]] = None, **kwargs: Any):
        """Create a new TextSplitter."""
        super().__init__(**kwargs)
        self._separators = separators or ["\n\n", "\n", " ", ""]

    def split_text(self, text: str) -> List[str]:
        """Split incoming text and return chunks."""
        final_chunks = []
        # Get appropriate separator to use
        separator = self._separators[-1]
        for _s in self._separators:
            if _s == "":
                separator = _s
                break
            if _s in text:
                separator = _s
                break
        # Now that we have the separator, split the text
        if separator:
            splits = text.split(separator)
        else:
            splits = list(text)
        # Now go merging things, recursively splitting longer texts.
        _good_splits = []
        for s in splits:
            if self._length_function(s) < self._chunk_size:
                _good_splits.append(s)
            else:
                if _good_splits:
                    merged_text = self._merge_splits(_good_splits, separator)
                    final_chunks.extend(merged_text)
                    _good_splits = []
                other_info = self.split_text(s)
                final_chunks.extend(other_info)
        if _good_splits:
            merged_text = self._merge_splits(_good_splits, separator)
            final_chunks.extend(merged_text)
        return final_chunks


class NLTKTextSplitter(TextSplitter):
    """Implementation of splitting text that looks at sentences using NLTK."""

    def __init__(self, separator: str = "\n\n", **kwargs: Any):
        """Initialize the NLTK splitter."""
        super().__init__(**kwargs)
        try:
            from nltk.tokenize import sent_tokenize

            self._tokenizer = sent_tokenize
        except ImportError:
            raise ImportError(
                "NLTK is not installed, please install it with `pip install nltk`."
            )
        self._separator = separator

    def split_text(self, text: str) -> List[str]:
        """Split incoming text and return chunks."""
        # First we naively split the large input into a bunch of smaller ones.
        splits = self._tokenizer(text)
        return self._merge_splits(splits, self._separator)


class SpacyTextSplitter(TextSplitter):
    """Implementation of splitting text that looks at sentences using Spacy."""

    def __init__(
        self, separator: str = "\n\n", pipeline: str = "en_core_web_sm", **kwargs: Any
    ):
        """Initialize the spacy text splitter."""
        super().__init__(**kwargs)
        try:
            import spacy
        except ImportError:
            raise ImportError(
                "Spacy is not installed, please install it with `pip install spacy`."
            )
        self._tokenizer = spacy.load(pipeline)
        self._separator = separator

    def split_text(self, text: str) -> List[str]:
        """Split incoming text and return chunks."""
        splits = (str(s) for s in self._tokenizer(text).sents)
        return self._merge_splits(splits, self._separator)


class MarkdownTextSplitter(RecursiveCharacterTextSplitter):
    """Attempts to split the text along Markdown-formatted headings."""

    def __init__(self, **kwargs: Any):
        """Initialize a MarkdownTextSplitter."""
        separators = [
            # First, try to split along Markdown headings (starting with level 2)
            "\n## ",
            "\n### ",
            "\n#### ",
            "\n##### ",
            "\n###### ",
            # Note the alternative syntax for headings (below) is not handled here
            # Heading level 2
            # ---------------
            # End of code block
            "```\n\n",
            # Horizontal lines
            "\n\n***\n\n",
            "\n\n---\n\n",
            "\n\n___\n\n",
            # Note that this splitter doesn't handle horizontal lines defined
            # by *three or more* of ***, ---, or ___, but this is not handled
            "\n\n",
            "\n",
            " ",
            "",
        ]
        super().__init__(separators=separators, **kwargs)


class LatexTextSplitter(RecursiveCharacterTextSplitter):
    """Attempts to split the text along Latex-formatted layout elements."""

    def __init__(self, **kwargs: Any):
        """Initialize a LatexTextSplitter."""
        separators = [
            # First, try to split along Latex sections
            "\n\\chapter{",
            "\n\\section{",
            "\n\\subsection{",
            "\n\\subsubsection{",
            # Now split by environments
            "\n\\begin{enumerate}",
            "\n\\begin{itemize}",
            "\n\\begin{description}",
            "\n\\begin{list}",
            "\n\\begin{quote}",
            "\n\\begin{quotation}",
            "\n\\begin{verse}",
            "\n\\begin{verbatim}",
            # Now split by math environments
            "\n\\begin{align}",
            "$$",
            "$",
            # Now split by the normal type of lines
            " ",
            "",
        ]
        super().__init__(separators=separators, **kwargs)


class PythonCodeTextSplitter(RecursiveCharacterTextSplitter):
    """Attempts to split the text along Python syntax."""

    def __init__(self, **kwargs: Any):
        """Initialize a PythonCodeTextSplitter."""
        separators = [
            # First, try to split along class definitions
            "\nclass ",
            "\ndef ",
            "\n\tdef ",
            # Now split by the normal type of lines
            "\n\n",
            "\n",
            " ",
            "",
        ]
        super().__init__(separators=separators, **kwargs)
```
# Source code for langchain.chains.llm_requests

```python
"""Chain that hits a URL and then uses an LLM to parse results."""
from __future__ import annotations

from typing import Dict, List

from pydantic import Extra, Field, root_validator

from langchain.chains import LLMChain
from langchain.chains.base import Chain
from langchain.requests import TextRequestsWrapper

DEFAULT_HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"  # noqa: E501
}


class LLMRequestsChain(Chain):
    """Chain that hits a URL and then uses an LLM to parse results."""

    llm_chain: LLMChain
    requests_wrapper: TextRequestsWrapper = Field(
        default_factory=TextRequestsWrapper, exclude=True
    )
    text_length: int = 8000
    requests_key: str = "requests_result"  #: :meta private:
    input_key: str = "url"  #: :meta private:
    output_key: str = "output"  #: :meta private:

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    @property
    def input_keys(self) -> List[str]:
        """Will be whatever keys the prompt expects.

        :meta private:
        """
        return [self.input_key]

    @property
    def output_keys(self) -> List[str]:
        """Will always return text key.

        :meta private:
        """
        return [self.output_key]

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and python package exists in environment."""
        try:
            from bs4 import BeautifulSoup  # noqa: F401
        except ImportError:
            raise ValueError(
                "Could not import bs4 python package. "
                "Please install it with `pip install bs4`."
            )
        return values

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        from bs4 import BeautifulSoup

        # Other keys are assumed to be needed for LLM prediction
        other_keys = {k: v for k, v in inputs.items() if k != self.input_key}
        url = inputs[self.input_key]
        res = self.requests_wrapper.get(url)
        # extract the text from the html
        soup = BeautifulSoup(res, "html.parser")
        other_keys[self.requests_key] = soup.get_text()[: self.text_length]
        result = self.llm_chain.predict(**other_keys)
        return {self.output_key: result}

    @property
    def _chain_type(self) -> str:
        return "llm_requests_chain"
```
# Source code for langchain.chains.mapreduce

```python
"""Map-reduce chain.

Splits up a document, sends the smaller parts to the LLM with one prompt,
then combines the results with another one.
"""
from __future__ import annotations

from typing import Dict, List

from pydantic import Extra

from langchain.chains.base import Chain
from langchain.chains.combine_documents.base import BaseCombineDocumentsChain
from langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain.chains.llm import LLMChain
from langchain.docstore.document import Document
from langchain.llms.base import BaseLLM
from langchain.prompts.base import BasePromptTemplate
from langchain.text_splitter import TextSplitter


class MapReduceChain(Chain):
    """Map-reduce chain."""

    combine_documents_chain: BaseCombineDocumentsChain
    """Chain to use to combine documents."""
    text_splitter: TextSplitter
    """Text splitter to use."""
    input_key: str = "input_text"  #: :meta private:
    output_key: str = "output_text"  #: :meta private:

    @classmethod
    def from_params(
        cls, llm: BaseLLM, prompt: BasePromptTemplate, text_splitter: TextSplitter
    ) -> MapReduceChain:
        """Construct a map-reduce chain that uses the chain for map and reduce."""
        llm_chain = LLMChain(llm=llm, prompt=prompt)
        reduce_chain = StuffDocumentsChain(llm_chain=llm_chain)
        combine_documents_chain = MapReduceDocumentsChain(
            llm_chain=llm_chain, combine_document_chain=reduce_chain
        )
        return cls(
            combine_documents_chain=combine_documents_chain, text_splitter=text_splitter
        )

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    @property
    def input_keys(self) -> List[str]:
        """Expect input key.

        :meta private:
        """
        return [self.input_key]

    @property
    def output_keys(self) -> List[str]:
        """Return output key.

        :meta private:
        """
        return [self.output_key]

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        # Split the larger text into smaller chunks.
        texts = self.text_splitter.split_text(inputs[self.input_key])
        docs = [Document(page_content=text) for text in texts]
        outputs, _ = self.combine_documents_chain.combine_docs(docs)
        return {self.output_key: outputs}
```
# Source code for langchain.chains.sequential

```python
"""Chain pipeline where the outputs of one step feed directly into next."""
from typing import Dict, List

from pydantic import Extra, root_validator

from langchain.chains.base import Chain
from langchain.input import get_color_mapping


class SequentialChain(Chain):
    """Chain where the outputs of one chain feed directly into next."""

    chains: List[Chain]
    input_variables: List[str]
    output_variables: List[str]  #: :meta private:
    return_all: bool = False

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    @property
    def input_keys(self) -> List[str]:
        """Return expected input keys to the chain.

        :meta private:
        """
        return self.input_variables

    @property
    def output_keys(self) -> List[str]:
        """Return output key.

        :meta private:
        """
        return self.output_variables

    @root_validator(pre=True)
    def validate_chains(cls, values: Dict) -> Dict:
        """Validate that the correct inputs exist for all chains."""
        chains = values["chains"]
        input_variables = values["input_variables"]
        memory_keys = list()
        if "memory" in values and values["memory"] is not None:
            """Validate that prompt input variables are consistent."""
            memory_keys = values["memory"].memory_variables
            if set(input_variables).intersection(set(memory_keys)):
                overlapping_keys = set(input_variables) & set(memory_keys)
                raise ValueError(
                    f"The input key(s) {''.join(overlapping_keys)} are found "
                    f"in the Memory keys ({memory_keys}) - please use input and "
                    f"memory keys that don't overlap."
                )

        known_variables = set(input_variables + memory_keys)

        for chain in chains:
            missing_vars = set(chain.input_keys).difference(known_variables)
            if missing_vars:
                raise ValueError(
                    f"Missing required input keys: {missing_vars}, "
                    f"only had {known_variables}"
                )
            overlapping_keys = known_variables.intersection(chain.output_keys)
            if overlapping_keys:
                raise ValueError(
                    f"Chain returned keys that already exist: {overlapping_keys}"
                )

            known_variables |= set(chain.output_keys)

        if "output_variables" not in values:
            if values.get("return_all", False):
                output_keys = known_variables.difference(input_variables)
            else:
                output_keys = chains[-1].output_keys
            values["output_variables"] = output_keys
        else:
            missing_vars = set(values["output_variables"]).difference(known_variables)
            if missing_vars:
                raise ValueError(
                    f"Expected output variables that were not found: {missing_vars}."
                )

        return values

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        known_values = inputs.copy()
        for i, chain in enumerate(self.chains):
            outputs = chain(known_values, return_only_outputs=True)
            known_values.update(outputs)
        return {k: known_values[k] for k in self.output_variables}


class SimpleSequentialChain(Chain):
    """Simple chain where the outputs of one step feed directly into next."""

    chains: List[Chain]
    strip_outputs: bool = False
    input_key: str = "input"  #: :meta private:
    output_key: str = "output"  #: :meta private:

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    @property
    def input_keys(self) -> List[str]:
        """Expect input key.

        :meta private:
        """
        return [self.input_key]

    @property
    def output_keys(self) -> List[str]:
        """Return output key.

        :meta private:
        """
        return [self.output_key]

    @root_validator()
    def validate_chains(cls, values: Dict) -> Dict:
        """Validate that chains are all single input/output."""
        for chain in values["chains"]:
            if len(chain.input_keys) != 1:
                raise ValueError(
                    "Chains used in SimplePipeline should all have one input, got "
                    f"{chain} with {len(chain.input_keys)} inputs."
                )
            if len(chain.output_keys) != 1:
                raise ValueError(
                    "Chains used in SimplePipeline should all have one output, got "
                    f"{chain} with {len(chain.output_keys)} outputs."
                )
        return values

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        _input = inputs[self.input_key]
        color_mapping = get_color_mapping([str(i) for i in range(len(self.chains))])
        for i, chain in enumerate(self.chains):
            _input = chain.run(_input)
            if self.strip_outputs:
                _input = _input.strip()
            self.callback_manager.on_text(
                _input, color=color_mapping[str(i)], end="\n", verbose=self.verbose
            )
        return {self.output_key: _input}
```
# Source code for langchain.chains.transform

```python
"""Chain that runs an arbitrary python function."""
from typing import Callable, Dict, List

from langchain.chains.base import Chain


class TransformChain(Chain):
    """Chain that transforms chain output.

    Example:
        .. code-block:: python

            from langchain import TransformChain
            transform_chain = TransformChain(input_variables=["text"],
                output_variables=["entities"], transform=func)
    """

    input_variables: List[str]
    output_variables: List[str]
    transform: Callable[[Dict[str, str]], Dict[str, str]]

    @property
    def input_keys(self) -> List[str]:
        """Expect input keys.

        :meta private:
        """
        return self.input_variables

    @property
    def output_keys(self) -> List[str]:
        """Return output keys.

        :meta private:
        """
        return self.output_variables

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        return self.transform(inputs)
```
# Source code for langchain.chains.loading

```python
"""Functionality for loading chains."""
import json
from pathlib import Path
from typing import Any, Union

import yaml

from langchain.chains.api.base import APIChain
from langchain.chains.base import Chain
from langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain
from langchain.chains.combine_documents.map_rerank import MapRerankDocumentsChain
from langchain.chains.combine_documents.refine import RefineDocumentsChain
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain.chains.hyde.base import HypotheticalDocumentEmbedder
from langchain.chains.llm import LLMChain
from langchain.chains.llm_bash.base import LLMBashChain
from langchain.chains.llm_checker.base import LLMCheckerChain
from langchain.chains.llm_math.base import LLMMathChain
from langchain.chains.llm_requests import LLMRequestsChain
from langchain.chains.pal.base import PALChain
from langchain.chains.qa_with_sources.base import QAWithSourcesChain
from langchain.chains.qa_with_sources.vector_db import VectorDBQAWithSourcesChain
from langchain.chains.retrieval_qa.base import VectorDBQA
from langchain.chains.sql_database.base import SQLDatabaseChain
from langchain.llms.loading import load_llm, load_llm_from_config
from langchain.prompts.loading import load_prompt, load_prompt_from_config
from langchain.utilities.loading import try_load_from_hub

URL_BASE = "https://raw.githubusercontent.com/hwchase17/langchain-hub/master/chains/"


def _load_llm_chain(config: dict, **kwargs: Any) -> LLMChain:
    """Load LLM chain from config dict."""
    if "llm" in config:
        llm_config = config.pop("llm")
        llm = load_llm_from_config(llm_config)
    elif "llm_path" in config:
        llm = load_llm(config.pop("llm_path"))
    else:
        raise ValueError("One of `llm` or `llm_path` must be present.")

    if "prompt" in config:
        prompt_config = config.pop("prompt")
        prompt = load_prompt_from_config(prompt_config)
    elif "prompt_path" in config:
        prompt = load_prompt(config.pop("prompt_path"))
    else:
        raise ValueError("One of `prompt` or `prompt_path` must be present.")

    return LLMChain(llm=llm, prompt=prompt, **config)


def _load_hyde_chain(config: dict, **kwargs: Any) -> HypotheticalDocumentEmbedder:
    """Load hypothetical document embedder chain from config dict."""
    if "llm_chain" in config:
        llm_chain_config = config.pop("llm_chain")
        llm_chain = load_chain_from_config(llm_chain_config)
    elif "llm_chain_path" in config:
        llm_chain = load_chain(config.pop("llm_chain_path"))
    else:
        raise ValueError("One of `llm_chain` or `llm_chain_path` must be present.")
    if "embeddings" in kwargs:
        embeddings = kwargs.pop("embeddings")
    else:
        raise ValueError("`embeddings` must be present.")
    return HypotheticalDocumentEmbedder(
        llm_chain=llm_chain, base_embeddings=embeddings, **config
    )


def _load_stuff_documents_chain(config: dict, **kwargs: Any) -> StuffDocumentsChain:
    if "llm_chain" in config:
        llm_chain_config = config.pop("llm_chain")
        llm_chain = load_chain_from_config(llm_chain_config)
    elif "llm_chain_path" in config:
        llm_chain = load_chain(config.pop("llm_chain_path"))
    else:
        raise ValueError("One of `llm_chain` or `llm_chain_path` must be present.")

    if not isinstance(llm_chain, LLMChain):
        raise ValueError(f"Expected LLMChain, got {llm_chain}")

    if "document_prompt" in config:
        prompt_config = config.pop("document_prompt")
        document_prompt = load_prompt_from_config(prompt_config)
    elif "document_prompt_path" in config:
        document_prompt = load_prompt(config.pop("document_prompt_path"))
    else:
        raise ValueError(
            "One of `document_prompt` or `document_prompt_path` must be present."
        )

    return StuffDocumentsChain(
        llm_chain=llm_chain, document_prompt=document_prompt, **config
    )


def _load_map_reduce_documents_chain(
    config: dict, **kwargs: Any
) -> MapReduceDocumentsChain:
    if "llm_chain" in config:
        llm_chain_config = config.pop("llm_chain")
        llm_chain = load_chain_from_config(llm_chain_config)
    elif "llm_chain_path" in config:
        llm_chain = load_chain(config.pop("llm_chain_path"))
    else:
        raise ValueError("One of `llm_chain` or `llm_chain_path` must be present.")

    if not isinstance(llm_chain, LLMChain):
        raise ValueError(f"Expected LLMChain, got {llm_chain}")

    if "combine_document_chain" in config:
        combine_document_chain_config = config.pop("combine_document_chain")
        combine_document_chain = load_chain_from_config(combine_document_chain_config)
    elif "combine_document_chain_path" in config:
        combine_document_chain = load_chain(config.pop("combine_document_chain_path"))
    else:
        raise ValueError(
            "One of `combine_document_chain` or "
            "`combine_document_chain_path` must be present."
        )
    if "collapse_document_chain" in config:
        collapse_document_chain_config = config.pop("collapse_document_chain")
        if collapse_document_chain_config is None:
            collapse_document_chain = None
        else:
            collapse_document_chain = load_chain_from_config(
                collapse_document_chain_config
            )
    elif "collapse_document_chain_path" in config:
        collapse_document_chain = load_chain(config.pop("collapse_document_chain_path"))
    return MapReduceDocumentsChain(
        llm_chain=llm_chain,
        combine_document_chain=combine_document_chain,
        collapse_document_chain=collapse_document_chain,
        **config,
    )


def _load_llm_bash_chain(config: dict, **kwargs: Any) -> LLMBashChain:
    if "llm" in config:
        llm_config = config.pop("llm")
        llm = load_llm_from_config(llm_config)
    elif "llm_path" in config:
        llm = load_llm(config.pop("llm_path"))
    else:
        raise ValueError("One of `llm` or `llm_path` must be present.")
    if "prompt" in config:
        prompt_config = config.pop("prompt")
        prompt = load_prompt_from_config(prompt_config)
    elif "prompt_path" in config:
        prompt = load_prompt(config.pop("prompt_path"))
    return LLMBashChain(llm=llm, prompt=prompt, **config)


def _load_llm_checker_chain(config: dict, **kwargs: Any) -> LLMCheckerChain:
    if "llm" in config:
        llm_config = config.pop("llm")
        llm = load_llm_from_config(llm_config)
    elif "llm_path" in config:
        llm = load_llm(config.pop("llm_path"))
    else:
        raise ValueError("One of `llm` or `llm_path` must be present.")
    if "create_draft_answer_prompt" in config:
        create_draft_answer_prompt_config = config.pop("create_draft_answer_prompt")
        create_draft_answer_prompt = load_prompt_from_config(
            create_draft_answer_prompt_config
        )
    elif "create_draft_answer_prompt_path" in config:
        create_draft_answer_prompt = load_prompt(
            config.pop("create_draft_answer_prompt_path")
        )
    if "list_assertions_prompt" in config:
        list_assertions_prompt_config = config.pop("list_assertions_prompt")
        list_assertions_prompt = load_prompt_from_config(list_assertions_prompt_config)
    elif "list_assertions_prompt_path" in config:
        list_assertions_prompt = load_prompt(config.pop("list_assertions_prompt_path"))
    if "check_assertions_prompt" in config:
        check_assertions_prompt_config = config.pop("check_assertions_prompt")
        check_assertions_prompt = load_prompt_from_config(
            check_assertions_prompt_config
        )
    elif "check_assertions_prompt_path" in config:
        check_assertions_prompt = load_prompt(
            config.pop("check_assertions_prompt_path")
        )
    if "revised_answer_prompt" in config:
        revised_answer_prompt_config = config.pop("revised_answer_prompt")
        revised_answer_prompt = load_prompt_from_config(revised_answer_prompt_config)
    elif "revised_answer_prompt_path" in config:
        revised_answer_prompt = load_prompt(config.pop("revised_answer_prompt_path"))
    return LLMCheckerChain(
        llm=llm,
        create_draft_answer_prompt=create_draft_answer_prompt,
        list_assertions_prompt=list_assertions_prompt,
        check_assertions_prompt=check_assertions_prompt,
        revised_answer_prompt=revised_answer_prompt,
        **config,
    )


def _load_llm_math_chain(config: dict, **kwargs: Any) -> LLMMathChain:
    if "llm" in config:
        llm_config = config.pop("llm")
        llm = load_llm_from_config(llm_config)
    elif "llm_path" in config:
        llm = load_llm(config.pop("llm_path"))
    else:
        raise ValueError("One of `llm` or `llm_path` must be present.")
    if "prompt" in config:
        prompt_config = config.pop("prompt")
        prompt = load_prompt_from_config(prompt_config)
    elif "prompt_path" in config:
        prompt = load_prompt(config.pop("prompt_path"))
    return LLMMathChain(llm=llm, prompt=prompt, **config)


def _load_map_rerank_documents_chain(
    config: dict, **kwargs: Any
) -> MapRerankDocumentsChain:
    if "llm_chain" in config:
        llm_chain_config = config.pop("llm_chain")
        llm_chain = load_chain_from_config(llm_chain_config)
    elif "llm_chain_path" in config:
        llm_chain = load_chain(config.pop("llm_chain_path"))
    else:
        raise ValueError("One of `llm_chain` or `llm_chain_path` must be present.")
    return MapRerankDocumentsChain(llm_chain=llm_chain, **config)


def _load_pal_chain(config: dict, **kwargs: Any) -> PALChain:
    if "llm" in config:
        llm_config = config.pop("llm")
        llm = load_llm_from_config(llm_config)
    elif "llm_path" in config:
        llm = load_llm(config.pop("llm_path"))
    else:
        raise ValueError("One of `llm` or `llm_path` must be present.")
    if "prompt" in config:
        prompt_config = config.pop("prompt")
        prompt = load_prompt_from_config(prompt_config)
    elif "prompt_path" in config:
        prompt = load_prompt(config.pop("prompt_path"))
    else:
        raise ValueError("One of `prompt` or `prompt_path` must be present.")
    return PALChain(llm=llm, prompt=prompt, **config)


def _load_refine_documents_chain(config: dict, **kwargs: Any) -> RefineDocumentsChain:
    if "initial_llm_chain" in config:
        initial_llm_chain_config = config.pop("initial_llm_chain")
        initial_llm_chain = load_chain_from_config(initial_llm_chain_config)
    elif "initial_llm_chain_path" in config:
        initial_llm_chain = load_chain(config.pop("initial_llm_chain_path"))
    else:
        raise ValueError(
            "One of `initial_llm_chain` or `initial_llm_chain_path` must be present."
        )
    if "refine_llm_chain" in config:
        refine_llm_chain_config = config.pop("refine_llm_chain")
        refine_llm_chain = load_chain_from_config(refine_llm_chain_config)
    elif "refine_llm_chain_path" in config:
        refine_llm_chain = load_chain(config.pop("refine_llm_chain_path"))
    else:
        raise ValueError(
            "One of `refine_llm_chain` or `refine_llm_chain_path` must be present."
        )
    if "document_prompt" in config:
        prompt_config = config.pop("document_prompt")
        document_prompt = load_prompt_from_config(prompt_config)
    elif "document_prompt_path" in config:
        document_prompt = load_prompt(config.pop("document_prompt_path"))
    return RefineDocumentsChain(
        initial_llm_chain=initial_llm_chain,
        refine_llm_chain=refine_llm_chain,
        document_prompt=document_prompt,
        **config,
    )


def _load_qa_with_sources_chain(config: dict, **kwargs: Any) -> QAWithSourcesChain:
    if "combine_documents_chain" in config:
        combine_documents_chain_config = config.pop("combine_documents_chain")
        combine_documents_chain = load_chain_from_config(combine_documents_chain_config)
    elif "combine_documents_chain_path" in config:
        combine_documents_chain = load_chain(config.pop("combine_documents_chain_path"))
    else:
        raise ValueError(
            "One of `combine_documents_chain` or "
            "`combine_documents_chain_path` must be present."
        )
    return QAWithSourcesChain(combine_documents_chain=combine_documents_chain, **config)


def _load_sql_database_chain(config: dict, **kwargs: Any) -> SQLDatabaseChain:
    if "database" in kwargs:
        database = kwargs.pop("database")
    else:
        raise ValueError("`database` must be present.")
    if "llm" in config:
        llm_config = config.pop("llm")
        llm = load_llm_from_config(llm_config)
    elif "llm_path" in config:
        llm = load_llm(config.pop("llm_path"))
    else:
        raise ValueError("One of `llm` or `llm_path` must be present.")
    if "prompt" in config:
        prompt_config = config.pop("prompt")
        prompt = load_prompt_from_config(prompt_config)
    return SQLDatabaseChain(database=database, llm=llm, prompt=prompt, **config)


def _load_vector_db_qa_with_sources_chain(
    config: dict, **kwargs: Any
) -> VectorDBQAWithSourcesChain:
    if "vectorstore" in kwargs:
        vectorstore = kwargs.pop("vectorstore")
    else:
        raise ValueError("`vectorstore` must be present.")
    if "combine_documents_chain" in config:
        combine_documents_chain_config = config.pop("combine_documents_chain")
        combine_documents_chain = load_chain_from_config(combine_documents_chain_config)
    elif "combine_documents_chain_path" in config:
        combine_documents_chain = load_chain(config.pop("combine_documents_chain_path"))
    else:
        raise ValueError(
            "One of `combine_documents_chain` or "
            "`combine_documents_chain_path` must be present."
        )
    return VectorDBQAWithSourcesChain(
        combine_documents_chain=combine_documents_chain,
        vectorstore=vectorstore,
        **config,
    )


def _load_vector_db_qa(config: dict, **kwargs: Any) -> VectorDBQA:
    if "vectorstore" in kwargs:
        vectorstore = kwargs.pop("vectorstore")
    else:
        raise ValueError("`vectorstore` must be present.")
    if "combine_documents_chain" in config:
        combine_documents_chain_config = config.pop("combine_documents_chain")
        combine_documents_chain = load_chain_from_config(combine_documents_chain_config)
    elif "combine_documents_chain_path" in config:
        combine_documents_chain = load_chain(config.pop("combine_documents_chain_path"))
    else:
        raise ValueError(
            "One of `combine_documents_chain` or "
            "`combine_documents_chain_path` must be present."
        )
    return VectorDBQA(
        combine_documents_chain=combine_documents_chain,
        vectorstore=vectorstore,
        **config,
    )


def _load_api_chain(config: dict, **kwargs: Any) -> APIChain:
    if "api_request_chain" in config:
        api_request_chain_config = config.pop("api_request_chain")
        api_request_chain = load_chain_from_config(api_request_chain_config)
    elif "api_request_chain_path" in config:
        api_request_chain = load_chain(config.pop("api_request_chain_path"))
    else:
        raise ValueError(
            "One of `api_request_chain` or `api_request_chain_path` must be present."
        )
    if "api_answer_chain" in config:
        api_answer_chain_config = config.pop("api_answer_chain")
        api_answer_chain = load_chain_from_config(api_answer_chain_config)
    elif "api_answer_chain_path" in config:
        api_answer_chain = load_chain(config.pop("api_answer_chain_path"))
    else:
        raise ValueError(
            "One of `api_answer_chain` or `api_answer_chain_path` must be present."
        )
    if "requests_wrapper" in kwargs:
        requests_wrapper = kwargs.pop("requests_wrapper")
    else:
        raise ValueError("`requests_wrapper` must be present.")
    return APIChain(
        api_request_chain=api_request_chain,
        api_answer_chain=api_answer_chain,
        requests_wrapper=requests_wrapper,
        **config,
    )


def _load_llm_requests_chain(config: dict, **kwargs: Any) -> LLMRequestsChain:
    if "llm_chain" in config:
        llm_chain_config = config.pop("llm_chain")
        llm_chain = load_chain_from_config(llm_chain_config)
    elif "llm_chain_path" in config:
        llm_chain = load_chain(config.pop("llm_chain_path"))
    else:
        raise ValueError("One of `llm_chain` or `llm_chain_path` must be present.")
    if "requests_wrapper" in kwargs:
        requests_wrapper = kwargs.pop("requests_wrapper")
        return LLMRequestsChain(
            llm_chain=llm_chain, requests_wrapper=requests_wrapper, **config
        )
    else:
        return LLMRequestsChain(llm_chain=llm_chain, **config)


type_to_loader_dict = {
    "api_chain": _load_api_chain,
    "hyde_chain": _load_hyde_chain,
    "llm_chain": _load_llm_chain,
    "llm_bash_chain": _load_llm_bash_chain,
    "llm_checker_chain": _load_llm_checker_chain,
    "llm_math_chain": _load_llm_math_chain,
    "llm_requests_chain": _load_llm_requests_chain,
    "pal_chain": _load_pal_chain,
    "qa_with_sources_chain": _load_qa_with_sources_chain,
    "stuff_documents_chain": _load_stuff_documents_chain,
    "map_reduce_documents_chain": _load_map_reduce_documents_chain,
    "map_rerank_documents_chain": _load_map_rerank_documents_chain,
    "refine_documents_chain": _load_refine_documents_chain,
    "sql_database_chain": _load_sql_database_chain,
    "vector_db_qa_with_sources_chain": _load_vector_db_qa_with_sources_chain,
    "vector_db_qa": _load_vector_db_qa,
}


def load_chain_from_config(config: dict, **kwargs: Any) -> Chain:
    """Load chain from Config Dict."""
    if "_type" not in config:
        raise ValueError("Must specify a chain Type in config")
    config_type = config.pop("_type")
    if config_type not in type_to_loader_dict:
```
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html" }
9fb40d9611b5-12
if config_type not in type_to_loader_dict: raise ValueError(f"Loading {config_type} chain not supported") chain_loader = type_to_loader_dict[config_type] return chain_loader(config, **kwargs) [docs]def load_chain(path: Union[str, Path], **kwargs: Any) -> Chain: """Unified method for loading a chain from LangChainHub or local fs.""" if hub_result := try_load_from_hub( path, _load_chain_from_file, "chains", {"json", "yaml"}, **kwargs ): return hub_result else: return _load_chain_from_file(path, **kwargs) def _load_chain_from_file(file: Union[str, Path], **kwargs: Any) -> Chain: """Load chain from file.""" # Convert file to Path object. if isinstance(file, str): file_path = Path(file) else: file_path = file # Load from either json or yaml. if file_path.suffix == ".json": with open(file_path) as f: config = json.load(f) elif file_path.suffix == ".yaml": with open(file_path, "r") as f: config = yaml.safe_load(f) else: raise ValueError("File type must be json or yaml") # Override default 'verbose' and 'memory' for the chain if "verbose" in kwargs: config["verbose"] = kwargs.pop("verbose") if "memory" in kwargs: config["memory"] = kwargs.pop("memory") # Load the chain from the config now. return load_chain_from_config(config, **kwargs)
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/loading.html" }
4a30c78e81f0-0
Source code for langchain.chains.llm """Chain that just formats a prompt and calls an LLM.""" from __future__ import annotations from typing import Any, Dict, List, Optional, Sequence, Tuple, Union from pydantic import Extra from langchain.chains.base import Chain from langchain.input import get_colored_text from langchain.prompts.base import BasePromptTemplate from langchain.prompts.prompt import PromptTemplate from langchain.schema import BaseLanguageModel, LLMResult, PromptValue [docs]class LLMChain(Chain): """Chain to run queries against LLMs. Example: .. code-block:: python from langchain import LLMChain, OpenAI, PromptTemplate prompt_template = "Tell me a {adjective} joke" prompt = PromptTemplate( input_variables=["adjective"], template=prompt_template ) llm = LLMChain(llm=OpenAI(), prompt=prompt) """ prompt: BasePromptTemplate """Prompt object to use.""" llm: BaseLanguageModel output_key: str = "text" #: :meta private: class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True @property def input_keys(self) -> List[str]: """Will be whatever keys the prompt expects. :meta private: """ return self.prompt.input_variables @property def output_keys(self) -> List[str]: """Will always return text key. :meta private: """ return [self.output_key] def _call(self, inputs: Dict[str, Any]) -> Dict[str, str]: return self.apply([inputs])[0]
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html" }
4a30c78e81f0-1
return self.apply([inputs])[0] [docs] def generate(self, input_list: List[Dict[str, Any]]) -> LLMResult: """Generate LLM result from inputs.""" prompts, stop = self.prep_prompts(input_list) return self.llm.generate_prompt(prompts, stop) [docs] async def agenerate(self, input_list: List[Dict[str, Any]]) -> LLMResult: """Generate LLM result from inputs.""" prompts, stop = await self.aprep_prompts(input_list) return await self.llm.agenerate_prompt(prompts, stop) [docs] def prep_prompts( self, input_list: List[Dict[str, Any]] ) -> Tuple[List[PromptValue], Optional[List[str]]]: """Prepare prompts from inputs.""" stop = None if "stop" in input_list[0]: stop = input_list[0]["stop"] prompts = [] for inputs in input_list: selected_inputs = {k: inputs[k] for k in self.prompt.input_variables} prompt = self.prompt.format_prompt(**selected_inputs) _colored_text = get_colored_text(prompt.to_string(), "green") _text = "Prompt after formatting:\n" + _colored_text self.callback_manager.on_text(_text, end="\n", verbose=self.verbose) if "stop" in inputs and inputs["stop"] != stop: raise ValueError( "If `stop` is present in any inputs, should be present in all." ) prompts.append(prompt) return prompts, stop [docs] async def aprep_prompts( self, input_list: List[Dict[str, Any]]
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html" }
4a30c78e81f0-2
self, input_list: List[Dict[str, Any]] ) -> Tuple[List[PromptValue], Optional[List[str]]]: """Prepare prompts from inputs.""" stop = None if "stop" in input_list[0]: stop = input_list[0]["stop"] prompts = [] for inputs in input_list: selected_inputs = {k: inputs[k] for k in self.prompt.input_variables} prompt = self.prompt.format_prompt(**selected_inputs) _colored_text = get_colored_text(prompt.to_string(), "green") _text = "Prompt after formatting:\n" + _colored_text if self.callback_manager.is_async: await self.callback_manager.on_text( _text, end="\n", verbose=self.verbose ) else: self.callback_manager.on_text(_text, end="\n", verbose=self.verbose) if "stop" in inputs and inputs["stop"] != stop: raise ValueError( "If `stop` is present in any inputs, should be present in all." ) prompts.append(prompt) return prompts, stop [docs] def apply(self, input_list: List[Dict[str, Any]]) -> List[Dict[str, str]]: """Utilize the LLM generate method for speed gains.""" response = self.generate(input_list) return self.create_outputs(response) [docs] async def aapply(self, input_list: List[Dict[str, Any]]) -> List[Dict[str, str]]: """Utilize the LLM generate method for speed gains.""" response = await self.agenerate(input_list) return self.create_outputs(response) [docs] def create_outputs(self, response: LLMResult) -> List[Dict[str, str]]:
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html" }
4a30c78e81f0-3
"""Create outputs from response.""" return [ # Get the text of the top generated string. {self.output_key: generation[0].text} for generation in response.generations ] async def _acall(self, inputs: Dict[str, Any]) -> Dict[str, str]: return (await self.aapply([inputs]))[0] [docs] def predict(self, **kwargs: Any) -> str: """Format prompt with kwargs and pass to LLM. Args: **kwargs: Keys to pass to prompt template. Returns: Completion from LLM. Example: .. code-block:: python completion = llm.predict(adjective="funny") """ return self(kwargs)[self.output_key] [docs] async def apredict(self, **kwargs: Any) -> str: """Format prompt with kwargs and pass to LLM. Args: **kwargs: Keys to pass to prompt template. Returns: Completion from LLM. Example: .. code-block:: python completion = llm.predict(adjective="funny") """ return (await self.acall(kwargs))[self.output_key] [docs] def predict_and_parse(self, **kwargs: Any) -> Union[str, List[str], Dict[str, str]]: """Call predict and then parse the results.""" result = self.predict(**kwargs) if self.prompt.output_parser is not None: return self.prompt.output_parser.parse(result) else: return result [docs] async def apredict_and_parse( self, **kwargs: Any ) -> Union[str, List[str], Dict[str, str]]:
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html" }
4a30c78e81f0-4
) -> Union[str, List[str], Dict[str, str]]: """Call apredict and then parse the results.""" result = await self.apredict(**kwargs) if self.prompt.output_parser is not None: return self.prompt.output_parser.parse(result) else: return result [docs] def apply_and_parse( self, input_list: List[Dict[str, Any]] ) -> Sequence[Union[str, List[str], Dict[str, str]]]: """Call apply and then parse the results.""" result = self.apply(input_list) return self._parse_result(result) def _parse_result( self, result: List[Dict[str, str]] ) -> Sequence[Union[str, List[str], Dict[str, str]]]: if self.prompt.output_parser is not None: return [ self.prompt.output_parser.parse(res[self.output_key]) for res in result ] else: return result [docs] async def aapply_and_parse( self, input_list: List[Dict[str, Any]] ) -> Sequence[Union[str, List[str], Dict[str, str]]]: """Call aapply and then parse the results.""" result = await self.aapply(input_list) return self._parse_result(result) @property def _chain_type(self) -> str: return "llm_chain" [docs] @classmethod def from_string(cls, llm: BaseLanguageModel, template: str) -> Chain: """Create LLMChain from LLM and template.""" prompt_template = PromptTemplate.from_template(template) return cls(llm=llm, prompt=prompt_template)
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm.html" }
af1e9faccc4a-0
Source code for langchain.chains.moderation """Pass input through a moderation endpoint.""" from typing import Any, Dict, List, Optional from pydantic import root_validator from langchain.chains.base import Chain from langchain.utils import get_from_dict_or_env [docs]class OpenAIModerationChain(Chain): """Pass input through a moderation endpoint. To use, you should have the ``openai`` python package installed, and the environment variable ``OPENAI_API_KEY`` set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example: .. code-block:: python from langchain.chains import OpenAIModerationChain moderation = OpenAIModerationChain() """ client: Any #: :meta private: model_name: Optional[str] = None """Moderation model name to use.""" error: bool = False """Whether or not to error if bad content was found.""" input_key: str = "input" #: :meta private: output_key: str = "output" #: :meta private: openai_api_key: Optional[str] = None openai_organization: Optional[str] = None @root_validator() def validate_environment(cls, values: Dict) -> Dict: """Validate that api key and python package exists in environment.""" openai_api_key = get_from_dict_or_env( values, "openai_api_key", "OPENAI_API_KEY" ) openai_organization = get_from_dict_or_env( values, "openai_organization", "OPENAI_ORGANIZATION", default="", )
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/moderation.html" }
af1e9faccc4a-1
"OPENAI_ORGANIZATION", default="", ) try: import openai openai.api_key = openai_api_key if openai_organization: openai.organization = openai_organization values["client"] = openai.Moderation except ImportError: raise ValueError( "Could not import openai python package. " "Please it install it with `pip install openai`." ) return values @property def input_keys(self) -> List[str]: """Expect input key. :meta private: """ return [self.input_key] @property def output_keys(self) -> List[str]: """Return output key. :meta private: """ return [self.output_key] def _moderate(self, text: str, results: dict) -> str: if results["flagged"]: error_str = "Text was found that violates OpenAI's content policy." if self.error: raise ValueError(error_str) else: return error_str return text def _call(self, inputs: Dict[str, str]) -> Dict[str, str]: text = inputs[self.input_key] results = self.client.create(text) output = self._moderate(text, results["results"][0]) return {self.output_key: output} By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 08, 2023.
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/moderation.html" }
95a84755fc8a-0
Source code for langchain.chains.llm_bash.base """Chain that interprets a prompt and executes bash code to perform bash operations.""" from typing import Dict, List from pydantic import Extra from langchain.chains.base import Chain from langchain.chains.llm import LLMChain from langchain.chains.llm_bash.prompt import PROMPT from langchain.prompts.base import BasePromptTemplate from langchain.schema import BaseLanguageModel from langchain.utilities.bash import BashProcess [docs]class LLMBashChain(Chain): """Chain that interprets a prompt and executes bash code to perform bash operations. Example: .. code-block:: python from langchain import LLMBashChain, OpenAI llm_bash = LLMBashChain(llm=OpenAI()) """ llm: BaseLanguageModel """LLM wrapper to use.""" input_key: str = "question" #: :meta private: output_key: str = "answer" #: :meta private: prompt: BasePromptTemplate = PROMPT class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True @property def input_keys(self) -> List[str]: """Expect input key. :meta private: """ return [self.input_key] @property def output_keys(self) -> List[str]: """Expect output key. :meta private: """ return [self.output_key] def _call(self, inputs: Dict[str, str]) -> Dict[str, str]: llm_executor = LLMChain(prompt=self.prompt, llm=self.llm) bash_executor = BashProcess()
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_bash/base.html" }
95a84755fc8a-1
bash_executor = BashProcess() self.callback_manager.on_text(inputs[self.input_key], verbose=self.verbose) t = llm_executor.predict(question=inputs[self.input_key]) self.callback_manager.on_text(t, color="green", verbose=self.verbose) t = t.strip() if t.startswith("```bash"): # Split the fenced block into lines, dropping the opening and closing fences command_list = t.split("\n")[1:-1] output = bash_executor.run(command_list) self.callback_manager.on_text("\nAnswer: ", verbose=self.verbose) self.callback_manager.on_text(output, color="yellow", verbose=self.verbose) else: raise ValueError(f"Unknown format from LLM: {t}") return {self.output_key: output} @property def _chain_type(self) -> str: return "llm_bash_chain"
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_bash/base.html" }
ef5929377f72-0
Source code for langchain.chains.llm_checker.base """Chain for question-answering with self-verification.""" from typing import Dict, List from pydantic import Extra from langchain.chains.base import Chain from langchain.chains.llm import LLMChain from langchain.chains.llm_checker.prompt import ( CHECK_ASSERTIONS_PROMPT, CREATE_DRAFT_ANSWER_PROMPT, LIST_ASSERTIONS_PROMPT, REVISED_ANSWER_PROMPT, ) from langchain.chains.sequential import SequentialChain from langchain.llms.base import BaseLLM from langchain.prompts import PromptTemplate [docs]class LLMCheckerChain(Chain): """Chain for question-answering with self-verification. Example: .. code-block:: python from langchain import OpenAI, LLMCheckerChain llm = OpenAI(temperature=0.7) checker_chain = LLMCheckerChain(llm=llm) """ llm: BaseLLM """LLM wrapper to use.""" create_draft_answer_prompt: PromptTemplate = CREATE_DRAFT_ANSWER_PROMPT list_assertions_prompt: PromptTemplate = LIST_ASSERTIONS_PROMPT check_assertions_prompt: PromptTemplate = CHECK_ASSERTIONS_PROMPT revised_answer_prompt: PromptTemplate = REVISED_ANSWER_PROMPT """Prompt to use when questioning the documents.""" input_key: str = "query" #: :meta private: output_key: str = "result" #: :meta private: class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True @property def input_keys(self) -> List[str]: """Return the singular input key.
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_checker/base.html" }
ef5929377f72-1
def input_keys(self) -> List[str]: """Return the singular input key. :meta private: """ return [self.input_key] @property def output_keys(self) -> List[str]: """Return the singular output key. :meta private: """ return [self.output_key] def _call(self, inputs: Dict[str, str]) -> Dict[str, str]: question = inputs[self.input_key] create_draft_answer_chain = LLMChain( llm=self.llm, prompt=self.create_draft_answer_prompt, output_key="statement" ) list_assertions_chain = LLMChain( llm=self.llm, prompt=self.list_assertions_prompt, output_key="assertions" ) check_assertions_chain = LLMChain( llm=self.llm, prompt=self.check_assertions_prompt, output_key="checked_assertions", ) revised_answer_chain = LLMChain( llm=self.llm, prompt=self.revised_answer_prompt, output_key="revised_statement", ) chains = [ create_draft_answer_chain, list_assertions_chain, check_assertions_chain, revised_answer_chain, ] question_to_checked_assertions_chain = SequentialChain( chains=chains, input_variables=["question"], output_variables=["revised_statement"], verbose=True, ) output = question_to_checked_assertions_chain({"question": question}) return {self.output_key: output["revised_statement"]} @property def _chain_type(self) -> str: return "llm_checker_chain"
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_checker/base.html" }
ef5929377f72-2
return "llm_checker_chain" By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 08, 2023.
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_checker/base.html" }
cb7223ad8eb7-0
Source code for langchain.chains.sql_database.base """Chain for interacting with SQL Database.""" from __future__ import annotations from typing import Any, Dict, List from pydantic import Extra, Field from langchain.chains.base import Chain from langchain.chains.llm import LLMChain from langchain.chains.sql_database.prompt import DECIDER_PROMPT, PROMPT from langchain.prompts.base import BasePromptTemplate from langchain.schema import BaseLanguageModel from langchain.sql_database import SQLDatabase [docs]class SQLDatabaseChain(Chain): """Chain for interacting with SQL Database. Example: .. code-block:: python from langchain import SQLDatabaseChain, OpenAI, SQLDatabase db = SQLDatabase(...) db_chain = SQLDatabaseChain(llm=OpenAI(), database=db) """ llm: BaseLanguageModel """LLM wrapper to use.""" database: SQLDatabase = Field(exclude=True) """SQL Database to connect to.""" prompt: BasePromptTemplate = PROMPT """Prompt to use to translate natural language to SQL.""" top_k: int = 5 """Number of results to return from the query""" input_key: str = "query" #: :meta private: output_key: str = "result" #: :meta private: return_intermediate_steps: bool = False """Whether or not to return the intermediate steps along with the final answer.""" return_direct: bool = False """Whether or not to return the result of querying the SQL table directly.""" class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True @property def input_keys(self) -> List[str]:
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html" }
cb7223ad8eb7-1
@property def input_keys(self) -> List[str]: """Return the singular input key. :meta private: """ return [self.input_key] @property def output_keys(self) -> List[str]: """Return the singular output key. :meta private: """ if not self.return_intermediate_steps: return [self.output_key] else: return [self.output_key, "intermediate_steps"] def _call(self, inputs: Dict[str, Any]) -> Dict[str, Any]: llm_chain = LLMChain(llm=self.llm, prompt=self.prompt) input_text = f"{inputs[self.input_key]} \nSQLQuery:" self.callback_manager.on_text(input_text, verbose=self.verbose) # If not present, then defaults to None which is all tables. table_names_to_use = inputs.get("table_names_to_use") table_info = self.database.get_table_info(table_names=table_names_to_use) llm_inputs = { "input": input_text, "top_k": self.top_k, "dialect": self.database.dialect, "table_info": table_info, "stop": ["\nSQLResult:"], } intermediate_steps = [] sql_cmd = llm_chain.predict(**llm_inputs) intermediate_steps.append(sql_cmd) self.callback_manager.on_text(sql_cmd, color="green", verbose=self.verbose) result = self.database.run(sql_cmd) intermediate_steps.append(result) self.callback_manager.on_text("\nSQLResult: ", verbose=self.verbose) self.callback_manager.on_text(result, color="yellow", verbose=self.verbose)
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html" }
cb7223ad8eb7-2
self.callback_manager.on_text(result, color="yellow", verbose=self.verbose) # If return direct, we just set the final result equal to the sql query if self.return_direct: final_result = result else: self.callback_manager.on_text("\nAnswer:", verbose=self.verbose) input_text += f"{sql_cmd}\nSQLResult: {result}\nAnswer:" llm_inputs["input"] = input_text final_result = llm_chain.predict(**llm_inputs) self.callback_manager.on_text( final_result, color="green", verbose=self.verbose ) chain_result: Dict[str, Any] = {self.output_key: final_result} if self.return_intermediate_steps: chain_result["intermediate_steps"] = intermediate_steps return chain_result @property def _chain_type(self) -> str: return "sql_database_chain" [docs]class SQLDatabaseSequentialChain(Chain): """Chain for querying SQL database that is a sequential chain. The chain is as follows: 1. Based on the query, determine which tables to use. 2. Based on those tables, call the normal SQL database chain. This is useful in cases where the number of tables in the database is large. """ return_intermediate_steps: bool = False [docs] @classmethod def from_llm( cls, llm: BaseLanguageModel, database: SQLDatabase, query_prompt: BasePromptTemplate = PROMPT, decider_prompt: BasePromptTemplate = DECIDER_PROMPT, **kwargs: Any, ) -> SQLDatabaseSequentialChain: """Load the necessary chains.""" sql_chain = SQLDatabaseChain(
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html" }
cb7223ad8eb7-3
"""Load the necessary chains.""" sql_chain = SQLDatabaseChain( llm=llm, database=database, prompt=query_prompt, **kwargs ) decider_chain = LLMChain( llm=llm, prompt=decider_prompt, output_key="table_names" ) return cls(sql_chain=sql_chain, decider_chain=decider_chain, **kwargs) decider_chain: LLMChain sql_chain: SQLDatabaseChain input_key: str = "query" #: :meta private: output_key: str = "result" #: :meta private: @property def input_keys(self) -> List[str]: """Return the singular input key. :meta private: """ return [self.input_key] @property def output_keys(self) -> List[str]: """Return the singular output key. :meta private: """ if not self.return_intermediate_steps: return [self.output_key] else: return [self.output_key, "intermediate_steps"] def _call(self, inputs: Dict[str, str]) -> Dict[str, str]: _table_names = self.sql_chain.database.get_usable_table_names() table_names = ", ".join(_table_names) llm_inputs = { "query": inputs[self.input_key], "table_names": table_names, } table_names_to_use = self.decider_chain.predict_and_parse(**llm_inputs) self.callback_manager.on_text( "Table names to use:", end="\n", verbose=self.verbose ) self.callback_manager.on_text( str(table_names_to_use), color="yellow", verbose=self.verbose )
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html" }
cb7223ad8eb7-4
str(table_names_to_use), color="yellow", verbose=self.verbose ) new_inputs = { self.sql_chain.input_key: inputs[self.input_key], "table_names_to_use": table_names_to_use, } return self.sql_chain(new_inputs, return_only_outputs=True) @property def _chain_type(self) -> str: return "sql_database_sequential_chain" By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 08, 2023.
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/sql_database/base.html" }
128832f32c23-0
Source code for langchain.chains.pal.base """Implements Program-Aided Language Models. As in https://arxiv.org/pdf/2211.10435.pdf. """ from __future__ import annotations from typing import Any, Dict, List, Optional from pydantic import Extra from langchain.chains.base import Chain from langchain.chains.llm import LLMChain from langchain.chains.pal.colored_object_prompt import COLORED_OBJECT_PROMPT from langchain.chains.pal.math_prompt import MATH_PROMPT from langchain.prompts.base import BasePromptTemplate from langchain.python import PythonREPL from langchain.schema import BaseLanguageModel [docs]class PALChain(Chain): """Implements Program-Aided Language Models.""" llm: BaseLanguageModel prompt: BasePromptTemplate stop: str = "\n\n" get_answer_expr: str = "print(solution())" python_globals: Optional[Dict[str, Any]] = None python_locals: Optional[Dict[str, Any]] = None output_key: str = "result" #: :meta private: return_intermediate_steps: bool = False class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True @property def input_keys(self) -> List[str]: """Return the singular input key. :meta private: """ return self.prompt.input_variables @property def output_keys(self) -> List[str]: """Return the singular output key. :meta private: """ if not self.return_intermediate_steps: return [self.output_key] else: return [self.output_key, "intermediate_steps"]
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/pal/base.html" }
128832f32c23-1
else: return [self.output_key, "intermediate_steps"] def _call(self, inputs: Dict[str, str]) -> Dict[str, str]: llm_chain = LLMChain(llm=self.llm, prompt=self.prompt) code = llm_chain.predict(stop=[self.stop], **inputs) self.callback_manager.on_text( code, color="green", end="\n", verbose=self.verbose ) repl = PythonREPL(_globals=self.python_globals, _locals=self.python_locals) res = repl.run(code + f"\n{self.get_answer_expr}") output = {self.output_key: res.strip()} if self.return_intermediate_steps: output["intermediate_steps"] = code return output [docs] @classmethod def from_math_prompt(cls, llm: BaseLanguageModel, **kwargs: Any) -> PALChain: """Load PAL from math prompt.""" return cls( llm=llm, prompt=MATH_PROMPT, stop="\n\n", get_answer_expr="print(solution())", **kwargs, ) [docs] @classmethod def from_colored_object_prompt( cls, llm: BaseLanguageModel, **kwargs: Any ) -> PALChain: """Load PAL from colored object prompt.""" return cls( llm=llm, prompt=COLORED_OBJECT_PROMPT, stop="\n\n\n", get_answer_expr="print(answer)", **kwargs, ) @property def _chain_type(self) -> str: return "pal_chain"
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/pal/base.html" }
ea6a213198d6-0
Source code for langchain.chains.api.base """Chain that makes API calls and summarizes the responses to answer a question.""" from __future__ import annotations from typing import Any, Dict, List, Optional from pydantic import Field, root_validator from langchain.chains.api.prompt import API_RESPONSE_PROMPT, API_URL_PROMPT from langchain.chains.base import Chain from langchain.chains.llm import LLMChain from langchain.prompts import BasePromptTemplate from langchain.requests import TextRequestsWrapper from langchain.schema import BaseLanguageModel [docs]class APIChain(Chain): """Chain that makes API calls and summarizes the responses to answer a question.""" api_request_chain: LLMChain api_answer_chain: LLMChain requests_wrapper: TextRequestsWrapper = Field(exclude=True) api_docs: str question_key: str = "question" #: :meta private: output_key: str = "output" #: :meta private: @property def input_keys(self) -> List[str]: """Expect input key. :meta private: """ return [self.question_key] @property def output_keys(self) -> List[str]: """Expect output key. :meta private: """ return [self.output_key] @root_validator(pre=True) def validate_api_request_prompt(cls, values: Dict) -> Dict: """Check that api request prompt expects the right variables.""" input_vars = values["api_request_chain"].prompt.input_variables expected_vars = {"question", "api_docs"} if set(input_vars) != expected_vars: raise ValueError( f"Input variables should be {expected_vars}, got {input_vars}" ) return values
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/api/base.html" }
ea6a213198d6-1
) return values @root_validator(pre=True) def validate_api_answer_prompt(cls, values: Dict) -> Dict: """Check that api answer prompt expects the right variables.""" input_vars = values["api_answer_chain"].prompt.input_variables expected_vars = {"question", "api_docs", "api_url", "api_response"} if set(input_vars) != expected_vars: raise ValueError( f"Input variables should be {expected_vars}, got {input_vars}" ) return values def _call(self, inputs: Dict[str, str]) -> Dict[str, str]: question = inputs[self.question_key] api_url = self.api_request_chain.predict( question=question, api_docs=self.api_docs ) self.callback_manager.on_text( api_url, color="green", end="\n", verbose=self.verbose ) api_response = self.requests_wrapper.get(api_url) self.callback_manager.on_text( api_response, color="yellow", end="\n", verbose=self.verbose ) answer = self.api_answer_chain.predict( question=question, api_docs=self.api_docs, api_url=api_url, api_response=api_response, ) return {self.output_key: answer} [docs] @classmethod def from_llm_and_api_docs( cls, llm: BaseLanguageModel, api_docs: str, headers: Optional[dict] = None, api_url_prompt: BasePromptTemplate = API_URL_PROMPT, api_response_prompt: BasePromptTemplate = API_RESPONSE_PROMPT, **kwargs: Any, ) -> APIChain: """Load chain from just an LLM and the api docs."""
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/api/base.html" }
ea6a213198d6-2
"""Load chain from just an LLM and the api docs.""" get_request_chain = LLMChain(llm=llm, prompt=api_url_prompt) requests_wrapper = TextRequestsWrapper(headers=headers) get_answer_chain = LLMChain(llm=llm, prompt=api_response_prompt) return cls( api_request_chain=get_request_chain, api_answer_chain=get_answer_chain, requests_wrapper=requests_wrapper, api_docs=api_docs, **kwargs, ) @property def _chain_type(self) -> str: return "api_chain" By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 08, 2023.
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/api/base.html" }
88fac1bbf963-0
Source code for langchain.chains.api.openapi.chain """Chain that makes API calls and summarizes the responses to answer a question.""" from __future__ import annotations import json from typing import Dict, List, NamedTuple, Optional, cast from pydantic import BaseModel, Field from requests import Response from langchain.chains.api.openapi.requests_chain import APIRequesterChain from langchain.chains.api.openapi.response_chain import APIResponderChain from langchain.chains.base import Chain from langchain.chains.llm import LLMChain from langchain.llms.base import BaseLLM from langchain.requests import Requests from langchain.tools.openapi.utils.api_models import APIOperation class _ParamMapping(NamedTuple): """Mapping from parameter name to parameter value.""" query_params: List[str] body_params: List[str] path_params: List[str] [docs]class OpenAPIEndpointChain(Chain, BaseModel): """Chain interacts with an OpenAPI endpoint using natural language.""" api_request_chain: LLMChain api_response_chain: LLMChain api_operation: APIOperation requests: Requests = Field(exclude=True, default_factory=Requests) param_mapping: _ParamMapping = Field(alias="param_mapping") return_intermediate_steps: bool = False instructions_key: str = "instructions" #: :meta private: output_key: str = "output" #: :meta private: @property def input_keys(self) -> List[str]: """Expect input key. :meta private: """ return [self.instructions_key] @property def output_keys(self) -> List[str]: """Expect output key. :meta private: """ if not self.return_intermediate_steps:
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html" }
88fac1bbf963-1
:meta private: """ if not self.return_intermediate_steps: return [self.output_key] else: return [self.output_key, "intermediate_steps"] def _construct_path(self, args: Dict[str, str]) -> str: """Construct the path from the deserialized input.""" path = self.api_operation.base_url + self.api_operation.path for param in self.param_mapping.path_params: path = path.replace(f"{{{param}}}", args.pop(param, "")) return path def _extract_query_params(self, args: Dict[str, str]) -> Dict[str, str]: """Extract the query params from the deserialized input.""" query_params = {} for param in self.param_mapping.query_params: if param in args: query_params[param] = args.pop(param) return query_params def _extract_body_params(self, args: Dict[str, str]) -> Optional[Dict[str, str]]: """Extract the request body params from the deserialized input.""" body_params = None if self.param_mapping.body_params: body_params = {} for param in self.param_mapping.body_params: if param in args: body_params[param] = args.pop(param) return body_params [docs] def deserialize_json_input(self, serialized_args: str) -> dict: """Use the serialized typescript dictionary. Resolve the path, query params dict, and optional requestBody dict. """ args: dict = json.loads(serialized_args) path = self._construct_path(args) body_params = self._extract_body_params(args) query_params = self._extract_query_params(args) return { "url": path,
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html" }
88fac1bbf963-2
return { "url": path, "data": body_params, "params": query_params, } def _get_output(self, output: str, intermediate_steps: dict) -> dict: """Return the output from the API call.""" if self.return_intermediate_steps: return { self.output_key: output, "intermediate_steps": intermediate_steps, } else: return {self.output_key: output} def _call(self, inputs: Dict[str, str]) -> Dict[str, str]: intermediate_steps = {} instructions = inputs[self.instructions_key] _api_arguments = self.api_request_chain.predict_and_parse( instructions=instructions ) api_arguments = cast(str, _api_arguments) intermediate_steps["request_args"] = api_arguments self.callback_manager.on_text( api_arguments, color="green", end="\n", verbose=self.verbose ) if api_arguments.startswith("ERROR"): return self._get_output(api_arguments, intermediate_steps) elif api_arguments.startswith("MESSAGE:"): return self._get_output( api_arguments[len("MESSAGE:") :], intermediate_steps ) try: request_args = self.deserialize_json_input(api_arguments) method = getattr(self.requests, self.api_operation.method.value) api_response: Response = method(**request_args) if api_response.status_code != 200: method_str = str(self.api_operation.method.value) response_text = ( f"{api_response.status_code}: {api_response.reason}" + f"\nFor {method_str.upper()} {request_args['url']}\n" + f"Called with args: {request_args['params']}" ) else:
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html" }
88fac1bbf963-3
) else: response_text = api_response.text except Exception as e: response_text = f"Error with message {str(e)}" intermediate_steps["response_text"] = response_text self.callback_manager.on_text( response_text, color="blue", end="\n", verbose=self.verbose ) _answer = self.api_response_chain.predict_and_parse( response=response_text, instructions=instructions, ) answer = cast(str, _answer) self.callback_manager.on_text( answer, color="yellow", end="\n", verbose=self.verbose ) return self._get_output(answer, intermediate_steps) [docs] @classmethod def from_url_and_method( cls, spec_url: str, path: str, method: str, llm: BaseLLM, requests: Optional[Requests] = None, return_intermediate_steps: bool = False, # TODO: Handle async ) -> "OpenAPIEndpointChain": """Create an OpenAPIEndpoint from a spec at the specified url.""" operation = APIOperation.from_openapi_url(spec_url, path, method) return cls.from_api_operation( operation, requests=requests, llm=llm, return_intermediate_steps=return_intermediate_steps, ) [docs] @classmethod def from_api_operation( cls, operation: APIOperation, llm: BaseLLM, requests: Optional[Requests] = None, verbose: bool = False, return_intermediate_steps: bool = False, # TODO: Handle async ) -> "OpenAPIEndpointChain":
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html" }
88fac1bbf963-4
# TODO: Handle async ) -> "OpenAPIEndpointChain": """Create an OpenAPIEndpointChain from an operation and a spec.""" param_mapping = _ParamMapping( query_params=operation.query_params, body_params=operation.body_params, path_params=operation.path_params, ) requests_chain = APIRequesterChain.from_llm_and_typescript( llm, typescript_definition=operation.to_typescript(), verbose=verbose ) response_chain = APIResponderChain.from_llm(llm, verbose=verbose) _requests = requests or Requests() return cls( api_request_chain=requests_chain, api_response_chain=response_chain, api_operation=operation, requests=_requests, param_mapping=param_mapping, verbose=verbose, return_intermediate_steps=return_intermediate_steps, )
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/api/openapi/chain.html" }
5b83f569d67a-0
Source code for langchain.chains.constitutional_ai.base """Chain for applying constitutional principles to the outputs of another chain.""" from typing import Any, Dict, List, Optional from langchain.chains.base import Chain from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple from langchain.chains.constitutional_ai.principles import PRINCIPLES from langchain.chains.constitutional_ai.prompts import CRITIQUE_PROMPT, REVISION_PROMPT from langchain.chains.llm import LLMChain from langchain.prompts.base import BasePromptTemplate from langchain.schema import BaseLanguageModel [docs]class ConstitutionalChain(Chain): """Chain for applying constitutional principles. Example: .. code-block:: python from langchain.llms import OpenAI from langchain.chains import LLMChain, ConstitutionalChain from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple from langchain.prompts import PromptTemplate qa_prompt = PromptTemplate( template="Q: {question} A:", input_variables=["question"], ) qa_chain = LLMChain(llm=OpenAI(), prompt=qa_prompt) constitutional_chain = ConstitutionalChain.from_llm( llm=OpenAI(), chain=qa_chain, constitutional_principles=[ ConstitutionalPrinciple( critique_request="Tell if this answer is good.", revision_request="Give a better answer.", ) ], ) constitutional_chain.run(question="What is the meaning of life?") """ chain: LLMChain constitutional_principles: List[ConstitutionalPrinciple] critique_chain: LLMChain revision_chain: LLMChain [docs] @classmethod def get_principles( cls, names: Optional[List[str]] = None ) -> List[ConstitutionalPrinciple]: if names is None:
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/constitutional_ai/base.html" }
5b83f569d67a-1
) -> List[ConstitutionalPrinciple]: if names is None: return list(PRINCIPLES.values()) else: return [PRINCIPLES[name] for name in names] [docs] @classmethod def from_llm( cls, llm: BaseLanguageModel, chain: LLMChain, critique_prompt: BasePromptTemplate = CRITIQUE_PROMPT, revision_prompt: BasePromptTemplate = REVISION_PROMPT, **kwargs: Any, ) -> "ConstitutionalChain": """Create a chain from an LLM.""" critique_chain = LLMChain(llm=llm, prompt=critique_prompt) revision_chain = LLMChain(llm=llm, prompt=revision_prompt) return cls( chain=chain, critique_chain=critique_chain, revision_chain=revision_chain, **kwargs, ) @property def input_keys(self) -> List[str]: """Defines the input keys.""" return self.chain.input_keys @property def output_keys(self) -> List[str]: """Defines the output keys.""" return ["output"] def _call(self, inputs: Dict[str, str]) -> Dict[str, str]: response = self.chain.run(**inputs) input_prompt = self.chain.prompt.format(**inputs) self.callback_manager.on_text( text="Initial response: " + response + "\n\n", verbose=self.verbose, color="yellow", ) for constitutional_principle in self.constitutional_principles: # Do critique raw_critique = self.critique_chain.run( input_prompt=input_prompt, output_from_model=response,
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/constitutional_ai/base.html" }
5b83f569d67a-2
input_prompt=input_prompt, output_from_model=response, critique_request=constitutional_principle.critique_request, ) critique = self._parse_critique( output_string=raw_critique, ).strip() # Do revision revision = self.revision_chain.run( input_prompt=input_prompt, output_from_model=response, critique_request=constitutional_principle.critique_request, critique=critique, revision_request=constitutional_principle.revision_request, ).strip() response = revision self.callback_manager.on_text( text=f"Applying {constitutional_principle.name}..." + "\n\n", verbose=self.verbose, color="green", ) self.callback_manager.on_text( text="Critique: " + critique + "\n\n", verbose=self.verbose, color="blue", ) self.callback_manager.on_text( text="Updated response: " + revision + "\n\n", verbose=self.verbose, color="yellow", ) return {"output": response} @staticmethod def _parse_critique(output_string: str) -> str: if "Revision request:" not in output_string: return output_string output_string = output_string.split("Revision request:")[0] if "\n\n" in output_string: output_string = output_string.split("\n\n")[0] return output_string
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/constitutional_ai/base.html" }
e13c07afcf37-0
Source code for langchain.chains.qa_generation.base from __future__ import annotations import json from typing import Any, Dict, List, Optional from pydantic import Field from langchain.chains.base import Chain from langchain.chains.llm import LLMChain from langchain.chains.qa_generation.prompt import PROMPT_SELECTOR from langchain.prompts.base import BasePromptTemplate from langchain.schema import BaseLanguageModel from langchain.text_splitter import RecursiveCharacterTextSplitter, TextSplitter [docs]class QAGenerationChain(Chain): llm_chain: LLMChain text_splitter: TextSplitter = Field( default=RecursiveCharacterTextSplitter(chunk_overlap=500) ) input_key: str = "text" output_key: str = "questions" k: Optional[int] = None [docs] @classmethod def from_llm( cls, llm: BaseLanguageModel, prompt: Optional[BasePromptTemplate] = None, **kwargs: Any, ) -> QAGenerationChain: _prompt = prompt or PROMPT_SELECTOR.get_prompt(llm) chain = LLMChain(llm=llm, prompt=_prompt) return cls(llm_chain=chain, **kwargs) @property def _chain_type(self) -> str: raise NotImplementedError @property def input_keys(self) -> List[str]: return [self.input_key] @property def output_keys(self) -> List[str]: return [self.output_key] def _call(self, inputs: Dict[str, str]) -> Dict[str, Any]: docs = self.text_splitter.create_documents([inputs[self.input_key]])
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_generation/base.html" }
e13c07afcf37-1
docs = self.text_splitter.create_documents([inputs[self.input_key]]) results = self.llm_chain.generate([{"text": d.page_content} for d in docs]) qa = [json.loads(res[0].text) for res in results.generations] return {self.output_key: qa} async def _acall(self, inputs: Dict[str, str]) -> Dict[str, str]: raise NotImplementedError
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_generation/base.html" }
07eb4b90e32e-0
Source code for langchain.chains.combine_documents.base """Base interface for chains combining documents.""" from abc import ABC, abstractmethod from typing import Any, Dict, List, Optional, Tuple from pydantic import Field from langchain.chains.base import Chain from langchain.docstore.document import Document from langchain.text_splitter import RecursiveCharacterTextSplitter, TextSplitter class BaseCombineDocumentsChain(Chain, ABC): """Base interface for chains combining documents.""" input_key: str = "input_documents" #: :meta private: output_key: str = "output_text" #: :meta private: @property def input_keys(self) -> List[str]: """Expect input key. :meta private: """ return [self.input_key] @property def output_keys(self) -> List[str]: """Return output key. :meta private: """ return [self.output_key] def prompt_length(self, docs: List[Document], **kwargs: Any) -> Optional[int]: """Return the prompt length given the documents passed in. Returns None if the method does not depend on the prompt length. """ return None @abstractmethod def combine_docs(self, docs: List[Document], **kwargs: Any) -> Tuple[str, dict]: """Combine documents into a single string.""" @abstractmethod async def acombine_docs( self, docs: List[Document], **kwargs: Any ) -> Tuple[str, dict]: """Combine documents into a single string asynchronously.""" def _call(self, inputs: Dict[str, Any]) -> Dict[str, str]: docs = inputs[self.input_key] # Other keys are assumed to be needed for LLM prediction
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/base.html" }
07eb4b90e32e-1
# Other keys are assumed to be needed for LLM prediction other_keys = {k: v for k, v in inputs.items() if k != self.input_key} output, extra_return_dict = self.combine_docs(docs, **other_keys) extra_return_dict[self.output_key] = output return extra_return_dict async def _acall(self, inputs: Dict[str, Any]) -> Dict[str, str]: docs = inputs[self.input_key] # Other keys are assumed to be needed for LLM prediction other_keys = {k: v for k, v in inputs.items() if k != self.input_key} output, extra_return_dict = await self.acombine_docs(docs, **other_keys) extra_return_dict[self.output_key] = output return extra_return_dict [docs]class AnalyzeDocumentChain(Chain): """Chain that splits documents, then analyzes it in pieces.""" input_key: str = "input_document" #: :meta private: output_key: str = "output_text" #: :meta private: text_splitter: TextSplitter = Field(default_factory=RecursiveCharacterTextSplitter) combine_docs_chain: BaseCombineDocumentsChain @property def input_keys(self) -> List[str]: """Expect input key. :meta private: """ return [self.input_key] @property def output_keys(self) -> List[str]: """Return output key. :meta private: """ return [self.output_key] def _call(self, inputs: Dict[str, Any]) -> Dict[str, str]: document = inputs[self.input_key] docs = self.text_splitter.create_documents([document]) # Other keys are assumed to be needed for LLM prediction
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/base.html" }
07eb4b90e32e-2
# Other keys are assumed to be needed for LLM prediction other_keys = {k: v for k, v in inputs.items() if k != self.input_key} other_keys[self.combine_docs_chain.input_key] = docs return self.combine_docs_chain(other_keys, return_only_outputs=True)
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/combine_documents/base.html" }
cabca3f3108c-0
Source code for langchain.chains.conversational_retrieval.base """Chain for chatting with a vector database.""" from __future__ import annotations import warnings from abc import abstractmethod from pathlib import Path from typing import Any, Callable, Dict, List, Optional, Tuple, Union from pydantic import Extra, Field, root_validator from langchain.chains.base import Chain from langchain.chains.combine_documents.base import BaseCombineDocumentsChain from langchain.chains.combine_documents.stuff import StuffDocumentsChain from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT from langchain.chains.llm import LLMChain from langchain.chains.question_answering import load_qa_chain from langchain.prompts.base import BasePromptTemplate from langchain.schema import BaseLanguageModel, BaseRetriever, Document from langchain.vectorstores.base import VectorStore def _get_chat_history(chat_history: List[Tuple[str, str]]) -> str: buffer = "" for human_s, ai_s in chat_history: human = "Human: " + human_s ai = "Assistant: " + ai_s buffer += "\n" + "\n".join([human, ai]) return buffer class BaseConversationalRetrievalChain(Chain): """Chain for chatting with an index.""" combine_docs_chain: BaseCombineDocumentsChain question_generator: LLMChain output_key: str = "answer" return_source_documents: bool = False get_chat_history: Optional[Callable[[Tuple[str, str]], str]] = None """Return the source documents.""" class Config: """Configuration for this pydantic object.""" extra = Extra.forbid arbitrary_types_allowed = True
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html" }
cabca3f3108c-1
extra = Extra.forbid arbitrary_types_allowed = True allow_population_by_field_name = True @property def input_keys(self) -> List[str]: """Input keys.""" return ["question", "chat_history"] @property def output_keys(self) -> List[str]: """Return the output keys. :meta private: """ _output_keys = [self.output_key] if self.return_source_documents: _output_keys = _output_keys + ["source_documents"] return _output_keys @abstractmethod def _get_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]: """Get docs.""" def _call(self, inputs: Dict[str, Any]) -> Dict[str, Any]: question = inputs["question"] get_chat_history = self.get_chat_history or _get_chat_history chat_history_str = get_chat_history(inputs["chat_history"]) if chat_history_str: new_question = self.question_generator.run( question=question, chat_history=chat_history_str ) else: new_question = question docs = self._get_docs(new_question, inputs) new_inputs = inputs.copy() new_inputs["question"] = new_question new_inputs["chat_history"] = chat_history_str answer, _ = self.combine_docs_chain.combine_docs(docs, **new_inputs) if self.return_source_documents: return {self.output_key: answer, "source_documents": docs} else: return {self.output_key: answer} @abstractmethod async def _aget_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]: """Get docs."""
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html" }
cabca3f3108c-2
"""Get docs.""" async def _acall(self, inputs: Dict[str, Any]) -> Dict[str, Any]: question = inputs["question"] get_chat_history = self.get_chat_history or _get_chat_history chat_history_str = get_chat_history(inputs["chat_history"]) if chat_history_str: new_question = await self.question_generator.arun( question=question, chat_history=chat_history_str ) else: new_question = question docs = await self._aget_docs(new_question, inputs) new_inputs = inputs.copy() new_inputs["question"] = new_question new_inputs["chat_history"] = chat_history_str answer, _ = await self.combine_docs_chain.acombine_docs(docs, **new_inputs) if self.return_source_documents: return {self.output_key: answer, "source_documents": docs} else: return {self.output_key: answer} def save(self, file_path: Union[Path, str]) -> None: if self.get_chat_history: raise ValueError("Chain not savable when `get_chat_history` is not None.") super().save(file_path) [docs]class ConversationalRetrievalChain(BaseConversationalRetrievalChain): """Chain for chatting with an index.""" retriever: BaseRetriever """Index to connect to.""" max_tokens_limit: Optional[int] = None """If set, restricts the docs to return from store based on tokens, enforced only for StuffDocumentChain""" def _reduce_tokens_below_limit(self, docs: List[Document]) -> List[Document]: num_docs = len(docs) if self.max_tokens_limit and isinstance( self.combine_docs_chain, StuffDocumentsChain ):
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html" }
cabca3f3108c-3
self.combine_docs_chain, StuffDocumentsChain ): tokens = [ self.combine_docs_chain.llm_chain.llm.get_num_tokens(doc.page_content) for doc in docs ] token_count = sum(tokens[:num_docs]) while token_count > self.max_tokens_limit: num_docs -= 1 token_count -= tokens[num_docs] return docs[:num_docs] def _get_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]: docs = self.retriever.get_relevant_documents(question) return self._reduce_tokens_below_limit(docs) async def _aget_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]: docs = await self.retriever.aget_relevant_documents(question) return self._reduce_tokens_below_limit(docs) [docs] @classmethod def from_llm( cls, llm: BaseLanguageModel, retriever: BaseRetriever, condense_question_prompt: BasePromptTemplate = CONDENSE_QUESTION_PROMPT, qa_prompt: Optional[BasePromptTemplate] = None, chain_type: str = "stuff", **kwargs: Any, ) -> BaseConversationalRetrievalChain: """Load chain from LLM.""" doc_chain = load_qa_chain( llm, chain_type=chain_type, prompt=qa_prompt, ) condense_question_chain = LLMChain(llm=llm, prompt=condense_question_prompt) return cls( retriever=retriever, combine_docs_chain=doc_chain, question_generator=condense_question_chain, **kwargs, )
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html" }
cabca3f3108c-4
class ChatVectorDBChain(BaseConversationalRetrievalChain):
    """Chain for chatting with a vector database."""

    vectorstore: VectorStore = Field(alias="vectorstore")
    top_k_docs_for_context: int = 4
    search_kwargs: dict = Field(default_factory=dict)

    @property
    def _chain_type(self) -> str:
        return "chat-vector-db"

    @root_validator()
    def raise_deprecation(cls, values: Dict) -> Dict:
        warnings.warn(
            "`ChatVectorDBChain` is deprecated - "
            "please use `from langchain.chains import ConversationalRetrievalChain`"
        )
        return values

    def _get_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]:
        vectordbkwargs = inputs.get("vectordbkwargs", {})
        full_kwargs = {**self.search_kwargs, **vectordbkwargs}
        return self.vectorstore.similarity_search(
            question, k=self.top_k_docs_for_context, **full_kwargs
        )

    async def _aget_docs(self, question: str, inputs: Dict[str, Any]) -> List[Document]:
        raise NotImplementedError("ChatVectorDBChain does not support async")

    @classmethod
    def from_llm(
        cls,
        llm: BaseLanguageModel,
        vectorstore: VectorStore,
        condense_question_prompt: BasePromptTemplate = CONDENSE_QUESTION_PROMPT,
        qa_prompt: Optional[BasePromptTemplate] = None,
        chain_type: str = "stuff",
        **kwargs: Any,
    ) -> BaseConversationalRetrievalChain:
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html" }
cabca3f3108c-5
        """Load chain from LLM."""
        doc_chain = load_qa_chain(
            llm,
            chain_type=chain_type,
            prompt=qa_prompt,
        )
        condense_question_chain = LLMChain(llm=llm, prompt=condense_question_prompt)
        return cls(
            vectorstore=vectorstore,
            combine_docs_chain=doc_chain,
            question_generator=condense_question_chain,
            **kwargs,
        )
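Given the deprecation warning raised by the validator above, the retriever-based chain is the preferred spelling. A rough equivalence sketch (the vectorstore is assumed to exist already; mapping top_k_docs_for_context onto the retriever's search_kwargs is an assumption based on the two interfaces):

# Deprecated:
#   qa = ChatVectorDBChain.from_llm(llm, vectorstore, top_k_docs_for_context=4)
# Preferred -- wrap the same store as a retriever:
qa = ConversationalRetrievalChain.from_llm(
    llm, retriever=vectorstore.as_retriever(search_kwargs={"k": 4})
)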
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/conversational_retrieval/base.html" }
2cd7624bd958-0
Source code for langchain.chains.qa_with_sources.retrieval

"""Question-answering with sources over an index."""
from typing import Any, Dict, List

from pydantic import Field

from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain.chains.qa_with_sources.base import BaseQAWithSourcesChain
from langchain.docstore.document import Document
from langchain.schema import BaseRetriever


class RetrievalQAWithSourcesChain(BaseQAWithSourcesChain):
    """Question-answering with sources over an index."""

    retriever: BaseRetriever = Field(exclude=True)
    """Index to connect to."""
    reduce_k_below_max_tokens: bool = False
    """Reduce the number of results to return from store based on tokens limit."""
    max_tokens_limit: int = 3375
    """Restrict the docs to return from store based on tokens,
    enforced only for StuffDocumentsChain and only if reduce_k_below_max_tokens
    is set to true."""

    def _reduce_tokens_below_limit(self, docs: List[Document]) -> List[Document]:
        num_docs = len(docs)

        if self.reduce_k_below_max_tokens and isinstance(
            self.combine_documents_chain, StuffDocumentsChain
        ):
            tokens = [
                self.combine_documents_chain.llm_chain.llm.get_num_tokens(
                    doc.page_content
                )
                for doc in docs
            ]
            token_count = sum(tokens[:num_docs])
            while token_count > self.max_tokens_limit:
                num_docs -= 1
                token_count -= tokens[num_docs]

        return docs[:num_docs]

    def _get_docs(self, inputs: Dict[str, Any]) -> List[Document]:
        question = inputs[self.question_key]
        docs = self.retriever.get_relevant_documents(question)
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/retrieval.html" }
2cd7624bd958-1
        return self._reduce_tokens_below_limit(docs)

    async def _aget_docs(self, inputs: Dict[str, Any]) -> List[Document]:
        question = inputs[self.question_key]
        docs = await self.retriever.aget_relevant_documents(question)
        return self._reduce_tokens_below_limit(docs)
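A short usage sketch (the retriever is assumed to wrap an existing index, e.g. vectorstore.as_retriever(); setting reduce_k_below_max_tokens activates the token-trimming logic shown above):

from langchain.chains import RetrievalQAWithSourcesChain
from langchain.llms import OpenAI

chain = RetrievalQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0),
    chain_type="stuff",
    retriever=retriever,
    reduce_k_below_max_tokens=True,
)
result = chain({"question": "What does the corpus say about X?"})
print(result["answer"])
print(result["sources"])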
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/retrieval.html" }
c8db6f639d38-0
Source code for langchain.chains.qa_with_sources.vector_db

"""Question-answering with sources over a vector database."""
import warnings
from typing import Any, Dict, List

from pydantic import Field, root_validator

from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain.chains.qa_with_sources.base import BaseQAWithSourcesChain
from langchain.docstore.document import Document
from langchain.vectorstores.base import VectorStore


class VectorDBQAWithSourcesChain(BaseQAWithSourcesChain):
    """Question-answering with sources over a vector database."""

    vectorstore: VectorStore = Field(exclude=True)
    """Vector Database to connect to."""
    k: int = 4
    """Number of results to return from store."""
    reduce_k_below_max_tokens: bool = False
    """Reduce the number of results to return from store based on tokens limit."""
    max_tokens_limit: int = 3375
    """Restrict the docs to return from store based on tokens,
    enforced only for StuffDocumentsChain and only if reduce_k_below_max_tokens
    is set to true."""
    search_kwargs: Dict[str, Any] = Field(default_factory=dict)
    """Extra search args."""

    def _reduce_tokens_below_limit(self, docs: List[Document]) -> List[Document]:
        num_docs = len(docs)

        if self.reduce_k_below_max_tokens and isinstance(
            self.combine_documents_chain, StuffDocumentsChain
        ):
            tokens = [
                self.combine_documents_chain.llm_chain.llm.get_num_tokens(
                    doc.page_content
                )
                for doc in docs
            ]
            token_count = sum(tokens[:num_docs])
            while token_count > self.max_tokens_limit:
                num_docs -= 1
                token_count -= tokens[num_docs]
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/vector_db.html" }
c8db6f639d38-1
        return docs[:num_docs]

    def _get_docs(self, inputs: Dict[str, Any]) -> List[Document]:
        question = inputs[self.question_key]
        docs = self.vectorstore.similarity_search(
            question, k=self.k, **self.search_kwargs
        )
        return self._reduce_tokens_below_limit(docs)

    async def _aget_docs(self, inputs: Dict[str, Any]) -> List[Document]:
        raise NotImplementedError("VectorDBQAWithSourcesChain does not support async")

    @root_validator()
    def raise_deprecation(cls, values: Dict) -> Dict:
        warnings.warn(
            "`VectorDBQAWithSourcesChain` is deprecated - "
            "please use `from langchain.chains import RetrievalQAWithSourcesChain`"
        )
        return values

    @property
    def _chain_type(self) -> str:
        return "vector_db_qa_with_sources_chain"
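As with the chat chains, the validator above marks this class deprecated. A rough migration sketch (moving k and search_kwargs onto the retriever is an assumption based on the two signatures; the vectorstore is assumed to exist):

# Deprecated:
#   chain = VectorDBQAWithSourcesChain.from_chain_type(llm, vectorstore=vectorstore, k=4)
# Preferred:
chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm, retriever=vectorstore.as_retriever(search_kwargs={"k": 4})
)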
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/vector_db.html" }
1177bfad52dc-0
Source code for langchain.chains.qa_with_sources.base

"""Question answering with sources over documents."""
from __future__ import annotations

import re
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional

from pydantic import Extra, root_validator

from langchain.chains.base import Chain
from langchain.chains.combine_documents.base import BaseCombineDocumentsChain
from langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain.chains.llm import LLMChain
from langchain.chains.qa_with_sources.loading import load_qa_with_sources_chain
from langchain.chains.qa_with_sources.map_reduce_prompt import (
    COMBINE_PROMPT,
    EXAMPLE_PROMPT,
    QUESTION_PROMPT,
)
from langchain.docstore.document import Document
from langchain.prompts.base import BasePromptTemplate
from langchain.schema import BaseLanguageModel


class BaseQAWithSourcesChain(Chain, ABC):
    """Question answering with sources over documents."""

    combine_documents_chain: BaseCombineDocumentsChain
    """Chain to use to combine documents."""
    question_key: str = "question"  #: :meta private:
    input_docs_key: str = "docs"  #: :meta private:
    answer_key: str = "answer"  #: :meta private:
    sources_answer_key: str = "sources"  #: :meta private:
    return_source_documents: bool = False
    """Return the source documents."""

    @classmethod
    def from_llm(
        cls,
        llm: BaseLanguageModel,
        document_prompt: BasePromptTemplate = EXAMPLE_PROMPT,
        question_prompt: BasePromptTemplate = QUESTION_PROMPT,
        combine_prompt: BasePromptTemplate = COMBINE_PROMPT,
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html" }
1177bfad52dc-1
        **kwargs: Any,
    ) -> BaseQAWithSourcesChain:
        """Construct the chain from an LLM."""
        llm_question_chain = LLMChain(llm=llm, prompt=question_prompt)
        llm_combine_chain = LLMChain(llm=llm, prompt=combine_prompt)
        combine_results_chain = StuffDocumentsChain(
            llm_chain=llm_combine_chain,
            document_prompt=document_prompt,
            document_variable_name="summaries",
        )
        combine_document_chain = MapReduceDocumentsChain(
            llm_chain=llm_question_chain,
            combine_document_chain=combine_results_chain,
            document_variable_name="context",
        )
        return cls(
            combine_documents_chain=combine_document_chain,
            **kwargs,
        )

    @classmethod
    def from_chain_type(
        cls,
        llm: BaseLanguageModel,
        chain_type: str = "stuff",
        chain_type_kwargs: Optional[dict] = None,
        **kwargs: Any,
    ) -> BaseQAWithSourcesChain:
        """Load chain from chain type."""
        _chain_kwargs = chain_type_kwargs or {}
        combine_document_chain = load_qa_with_sources_chain(
            llm, chain_type=chain_type, **_chain_kwargs
        )
        return cls(combine_documents_chain=combine_document_chain, **kwargs)

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    @property
    def input_keys(self) -> List[str]:
        """Expect input key.

        :meta private:
        """
        return [self.question_key]

    @property
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html" }
1177bfad52dc-2
    def output_keys(self) -> List[str]:
        """Return output key.

        :meta private:
        """
        _output_keys = [self.answer_key, self.sources_answer_key]
        if self.return_source_documents:
            _output_keys = _output_keys + ["source_documents"]
        return _output_keys

    @root_validator(pre=True)
    def validate_naming(cls, values: Dict) -> Dict:
        """Fix backwards compatibility in naming."""
        if "combine_document_chain" in values:
            values["combine_documents_chain"] = values.pop("combine_document_chain")
        return values

    @abstractmethod
    def _get_docs(self, inputs: Dict[str, Any]) -> List[Document]:
        """Get docs to run questioning over."""

    def _call(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        docs = self._get_docs(inputs)
        answer, _ = self.combine_documents_chain.combine_docs(docs, **inputs)
        if re.search(r"SOURCES:\s", answer):
            answer, sources = re.split(r"SOURCES:\s", answer)
        else:
            sources = ""
        result: Dict[str, Any] = {
            self.answer_key: answer,
            self.sources_answer_key: sources,
        }
        if self.return_source_documents:
            result["source_documents"] = docs
        return result

    @abstractmethod
    async def _aget_docs(self, inputs: Dict[str, Any]) -> List[Document]:
        """Get docs to run questioning over."""

    async def _acall(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        docs = await self._aget_docs(inputs)
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html" }
1177bfad52dc-3
        answer, _ = await self.combine_documents_chain.acombine_docs(docs, **inputs)
        if re.search(r"SOURCES:\s", answer):
            answer, sources = re.split(r"SOURCES:\s", answer)
        else:
            sources = ""
        result: Dict[str, Any] = {
            self.answer_key: answer,
            self.sources_answer_key: sources,
        }
        if self.return_source_documents:
            result["source_documents"] = docs
        return result


class QAWithSourcesChain(BaseQAWithSourcesChain):
    """Question answering with sources over documents."""

    input_docs_key: str = "docs"  #: :meta private:

    @property
    def input_keys(self) -> List[str]:
        """Expect input key.

        :meta private:
        """
        return [self.input_docs_key, self.question_key]

    def _get_docs(self, inputs: Dict[str, Any]) -> List[Document]:
        return inputs.pop(self.input_docs_key)

    async def _aget_docs(self, inputs: Dict[str, Any]) -> List[Document]:
        return inputs.pop(self.input_docs_key)

    @property
    def _chain_type(self) -> str:
        return "qa_with_sources_chain"
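A sketch of the document-passing variant above; the `SOURCES:` marker the model is prompted to emit is exactly what the regex in `_call`/`_acall` splits on (the document contents and source name are illustrative):

from langchain.chains import QAWithSourcesChain
from langchain.docstore.document import Document
from langchain.llms import OpenAI

chain = QAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff")
docs = [Document(page_content="The sky is blue.", metadata={"source": "notes.txt"})]

# Input keys are "docs" and "question"; output keys are "answer" and "sources".
result = chain({"docs": docs, "question": "What color is the sky?"})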
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/qa_with_sources/base.html" }
f444626db7da-0
Source code for langchain.chains.llm_math.base

"""Chain that interprets a prompt and executes Python code to do math."""
from typing import Dict, List

from pydantic import Extra

from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.chains.llm_math.prompt import PROMPT
from langchain.prompts.base import BasePromptTemplate
from langchain.python import PythonREPL
from langchain.schema import BaseLanguageModel


class LLMMathChain(Chain):
    """Chain that interprets a prompt and executes Python code to do math.

    Example:
        .. code-block:: python

            from langchain import LLMMathChain, OpenAI
            llm_math = LLMMathChain(llm=OpenAI())
    """

    llm: BaseLanguageModel
    """LLM wrapper to use."""
    prompt: BasePromptTemplate = PROMPT
    """Prompt to use to translate to Python if necessary."""
    input_key: str = "question"  #: :meta private:
    output_key: str = "answer"  #: :meta private:

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    @property
    def input_keys(self) -> List[str]:
        """Expect input key.

        :meta private:
        """
        return [self.input_key]

    @property
    def output_keys(self) -> List[str]:
        """Expect output key.

        :meta private:
        """
        return [self.output_key]

    def _process_llm_result(self, t: str) -> Dict[str, str]:
        python_executor = PythonREPL()
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_math/base.html" }
f444626db7da-1
        self.callback_manager.on_text(t, color="green", verbose=self.verbose)
        t = t.strip()
        if t.startswith("```python"):
            code = t[9:-4]
            output = python_executor.run(code)
            self.callback_manager.on_text("\nAnswer: ", verbose=self.verbose)
            self.callback_manager.on_text(output, color="yellow", verbose=self.verbose)
            answer = "Answer: " + output
        elif t.startswith("Answer:"):
            answer = t
        elif "Answer:" in t:
            answer = "Answer: " + t.split("Answer:")[-1]
        else:
            raise ValueError(f"unknown format from LLM: {t}")
        return {self.output_key: answer}

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        llm_executor = LLMChain(
            prompt=self.prompt, llm=self.llm, callback_manager=self.callback_manager
        )
        self.callback_manager.on_text(inputs[self.input_key], verbose=self.verbose)
        t = llm_executor.predict(question=inputs[self.input_key], stop=["```output"])
        return self._process_llm_result(t)

    async def _acall(self, inputs: Dict[str, str]) -> Dict[str, str]:
        llm_executor = LLMChain(
            prompt=self.prompt, llm=self.llm, callback_manager=self.callback_manager
        )
        self.callback_manager.on_text(inputs[self.input_key], verbose=self.verbose)
        t = await llm_executor.apredict(
            question=inputs[self.input_key], stop=["```output"]
        )
        return self._process_llm_result(t)

    @property
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_math/base.html" }
f444626db7da-2
    def _chain_type(self) -> str:
        return "llm_math_chain"
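A minimal usage sketch (the question is illustrative; the exact numeric output depends on the model run):

from langchain import LLMMathChain, OpenAI

# The LLM is prompted to emit a ```python block, which PythonREPL then executes.
llm_math = LLMMathChain(llm=OpenAI(temperature=0), verbose=True)
llm_math.run("What is 13 raised to the 0.3432 power?")
# -> something like 'Answer: 2.41...'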
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/llm_math/base.html" }
5f851c4d5d1a-0
Source code for langchain.chains.hyde.base

"""Hypothetical Document Embeddings.

https://arxiv.org/abs/2212.10496
"""
from __future__ import annotations

from typing import Dict, List

import numpy as np
from pydantic import Extra

from langchain.chains.base import Chain
from langchain.chains.hyde.prompts import PROMPT_MAP
from langchain.chains.llm import LLMChain
from langchain.embeddings.base import Embeddings
from langchain.llms.base import BaseLLM


class HypotheticalDocumentEmbedder(Chain, Embeddings):
    """Generate a hypothetical document for a query, then embed that document.

    Based on https://arxiv.org/abs/2212.10496
    """

    base_embeddings: Embeddings
    llm_chain: LLMChain

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    @property
    def input_keys(self) -> List[str]:
        """Input keys for Hyde's LLM chain."""
        return self.llm_chain.input_keys

    @property
    def output_keys(self) -> List[str]:
        """Output keys for Hyde's LLM chain."""
        return self.llm_chain.output_keys

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        """Call the base embeddings."""
        return self.base_embeddings.embed_documents(texts)

    def combine_embeddings(self, embeddings: List[List[float]]) -> List[float]:
        """Combine embeddings into a final embedding."""
        return list(np.array(embeddings).mean(axis=0))

    def embed_query(self, text: str) -> List[float]:
        """Generate a hypothetical document and embed it."""
{ "url": "https://python.langchain.com/en/latest/_modules/langchain/chains/hyde/base.html" }