24c596e461a5-78
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict# Return a dictionary of the LLM. classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → langchain.llms.base.LLM# Init the SelfHostedPipeline from a pipeline object or string. generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int# Get the number of tokens in the message.
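These base-class methods are easiest to see on a concrete LLM wrapper. A minimal usage sketch of generate() and get_num_tokens(); the OpenAI class, model settings, and prompts are illustrative stand-ins (any LLM subclass exposes the same interface), and an OPENAI_API_KEY is assumed to be set:

from langchain.llms import OpenAI

llm = OpenAI(temperature=0)  # any LLM subclass exposes the same interface
result = llm.generate(["Tell me a joke.", "Tell me a riddle."], stop=["\n\n"])
# LLMResult.generations holds one list of candidate generations per input prompt
for generations in result.generations:
    print(generations[0].text)
print(llm.get_num_tokens("Tell me a joke."))  # token count for a raw string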
{ "url": "https://python.langchain.com/en/latest/reference/modules/llms.html" }
24c596e461a5-79
Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.SelfHostedPipeline[source]# Run model inference on self-hosted remote hardware. Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Example for custom pipeline and inference functions:

from langchain.llms import SelfHostedPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh

def load_pipeline():
{ "url": "https://python.langchain.com/en/latest/reference/modules/llms.html" }
24c596e461a5-80
import runhouse as rh

def load_pipeline():
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    return pipeline(
        "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
    )

def inference_fn(pipeline, prompt, stop=None):
    return pipeline(prompt)[0]["generated_text"]

gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
llm = SelfHostedPipeline(
    model_load_fn=load_pipeline,
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"],
    inference_fn=inference_fn,
)

Example for <2GB model (can be serialized and sent directly to the server):

from langchain.llms import SelfHostedPipeline
import runhouse as rh

gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
my_model = ...
llm = SelfHostedPipeline.from_pipeline(
    pipeline=my_model,
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"],
)

Example passing model path for larger models:

from langchain.llms import SelfHostedPipeline
import runhouse as rh
import pickle
from transformers import pipeline

generator = pipeline(model="gpt2")
rh.blob(
    pickle.dumps(generator), path="models/pipeline.pkl"
).save().to(gpu, path="models")
llm = SelfHostedPipeline.from_pipeline(
    pipeline="models/pipeline.pkl",
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"],
)

Validators set_callback_manager » callback_manager set_verbose » verbose field hardware: Any = None# Remote hardware to send the inference function to.
{ "url": "https://python.langchain.com/en/latest/reference/modules/llms.html" }
24c596e461a5-81
field hardware: Any = None# Remote hardware to send the inference function to. field inference_fn: Callable = <function _generate_text># Inference function to send to the remote hardware. field load_fn_kwargs: Optional[dict] = None# Keyword arguments to pass to the model load function. field model_load_fn: Callable [Required]# Function to load the model remotely on the server. field model_reqs: List[str] = ['./', 'torch']# Requirements to install on the hardware to run inference on the model. __call__(prompt: str, stop: Optional[List[str]] = None) → str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters
{ "url": "https://python.langchain.com/en/latest/reference/modules/llms.html" }
24c596e461a5-82
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict# Return a dictionary of the LLM. classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → langchain.llms.base.LLM[source]# Init the SelfHostedPipeline from a pipeline object or string. generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int# Get the number of tokens in the message.
{ "url": "https://python.langchain.com/en/latest/reference/modules/llms.html" }
24c596e461a5-83
Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.StochasticAI[source]# Wrapper around StochasticAI large language models. To use, you should have the environment variable STOCHASTICAI_API_KEY set with your API key. Example from langchain.llms import StochasticAI stochasticai = StochasticAI(api_url="") Validators build_extra » all fields set_callback_manager » callback_manager set_verbose » verbose validate_environment » all fields field api_url: str = ''# API URL to use. field model_kwargs: Dict[str, Any] [Optional]# Holds any model parameters valid for create call not explicitly specified.
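A hedged usage sketch of this wrapper: the deployment URL below is a placeholder (each StochasticAI deployment exposes its own api_url), and the key is read from the environment:

import os
from langchain.llms import StochasticAI

os.environ["STOCHASTICAI_API_KEY"] = "<your-key>"  # required by validate_environment
llm = StochasticAI(api_url="https://example.stochastic.ai/v1/model")  # placeholder URL
print(llm("What is the capital of France?"))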
{ "url": "https://python.langchain.com/en/latest/reference/modules/llms.html" }
24c596e461a5-84
Holds any model parameters valid for create call not explicitly specified. __call__(prompt: str, stop: Optional[List[str]] = None) → str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict# Return a dictionary of the LLM.
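The async variants mirror the sync ones. A minimal sketch using OpenAI (chosen here only because it implements the async path natively; wrappers without a native async implementation may raise NotImplementedError):

import asyncio
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)  # assumes OPENAI_API_KEY is set

async def main() -> None:
    # agenerate runs all prompts concurrently and returns a single LLMResult
    result = await llm.agenerate(["Hello!", "Goodbye!"])
    for generations in result.generations:
        print(generations[0].text)

asyncio.run(main())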
{ "url": "https://python.langchain.com/en/latest/reference/modules/llms.html" }
24c596e461a5-85
dict(**kwargs: Any) → Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int# Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns.
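json() and save() both serialize the wrapper's configuration, not any remote model weights. A small sketch, assuming STOCHASTICAI_API_KEY is set and using a placeholder URL; the file extension (.yaml or .json) selects the on-disk format:

from langchain.llms import StochasticAI

llm = StochasticAI(api_url="https://example.stochastic.ai/v1/model")  # placeholder URL
print(llm.json(exclude_none=True))  # JSON string of the configured fields
llm.save(file_path="llm.yaml")      # .yaml or .json extension picks the format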
{ "url": "https://python.langchain.com/en/latest/reference/modules/llms.html" }
24c596e461a5-86
Try to update ForwardRefs on fields based on this Model, globalns and localns. pydantic model langchain.llms.Writer[source]# Wrapper around Writer large language models. To use, you should have the environment variable WRITER_API_KEY set with your API key. Example from langchain import Writer writer = Writer(model_id="palmyra-base") Validators set_callback_manager » callback_manager set_verbose » verbose validate_environment » all fields field base_url: Optional[str] = None# Base url to use, if None decides based on model name. field beam_search_diversity_rate: float = 1.0# Only applies to beam search, i.e. when the beam width is >1. A higher value encourages beam search to return a more diverse set of candidates field beam_width: Optional[int] = None# The number of concurrent candidates to keep track of during beam search field length: int = 256# The maximum number of tokens to generate in the completion. field length_pentaly: float = 1.0# Only applies to beam search, i.e. when the beam width is >1. Larger values penalize long candidates more heavily, thus preferring shorter candidates field logprobs: bool = False# Whether to return log probabilities. field model_id: str = 'palmyra-base'# Model name to use. field random_seed: int = 0# The model generates random results. Changing the random seed alone will produce a different response with similar characteristics. It is possible to reproduce results by fixing the random seed (assuming all other hyperparameters are also fixed) field repetition_penalty: float = 1.0# Penalizes repeated tokens according to frequency. field stop: Optional[List[str]] = None#
{ "url": "https://python.langchain.com/en/latest/reference/modules/llms.html" }
24c596e461a5-87
Penalizes repeated tokens according to frequency. field stop: Optional[List[str]] = None# Sequences at which completion generation will stop. field temperature: float = 1.0# What sampling temperature to use. field tokens_to_generate: int = 24# Max number of tokens to generate. field top_k: int = 1# The number of highest probability vocabulary tokens to keep for top-k-filtering. field top_p: float = 1.0# Total probability mass of tokens to consider at each step. __call__(prompt: str, stop: Optional[List[str]] = None) → str# Check Cache and run the LLM on the given prompt and input. async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model# Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model# Duplicate a model, optionally choose which fields to include, exclude and change. Parameters
{ "url": "https://python.langchain.com/en/latest/reference/modules/llms.html" }
24c596e461a5-88
Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict# Return a dictionary of the LLM. generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Run the LLM on the given prompt and input. generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult# Take in a list of prompt values and return an LLMResult. get_num_tokens(text: str) → int# Get the number of tokens present in the text. get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int# Get the number of tokens in the message. json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode# Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
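A hedged sketch configuring the Writer wrapper with the sampling fields listed earlier; the prompt and parameter values are illustrative, and WRITER_API_KEY is assumed to be set in the environment:

from langchain.llms import Writer

writer = Writer(
    model_id="palmyra-base",
    temperature=0.7,        # sampling temperature
    top_p=0.9,              # nucleus sampling mass
    tokens_to_generate=64,  # max tokens in the completion
    stop=["\n\n"],          # stop sequences
)
print(writer("Write a tagline for an ice cream shop:"))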
{ "url": "https://python.langchain.com/en/latest/reference/modules/llms.html" }
24c596e461a5-89
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). save(file_path: Union[pathlib.Path, str]) → None# Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns: Any) → None# Try to update ForwardRefs on fields based on this Model, globalns and localns.
{ "url": "https://python.langchain.com/en/latest/reference/modules/llms.html" }
af4b0005e69d-0
Agents# Interface for agents. pydantic model langchain.agents.Agent[source]# Class responsible for calling the language model and deciding the action. This is driven by an LLMChain. The prompt in the LLMChain MUST include a variable called "agent_scratchpad" where the agent can put its intermediary work. field allowed_tools: Optional[List[str]] = None# field llm_chain: langchain.chains.llm.LLMChain [Required]# async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]# Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations **kwargs – User inputs. Returns Action specifying what tool to use. abstract classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool]) → langchain.prompts.base.BasePromptTemplate[source]# Create a prompt for this class. classmethod from_llm_and_tools(llm: langchain.schema.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, **kwargs: Any) → langchain.agents.agent.Agent[source]# Construct an agent from an LLM and tools. get_allowed_tools() → Optional[List[str]][source]# get_full_inputs(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → Dict[str, Any][source]# Create the full inputs for the LLMChain from intermediate steps.
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-1
Create the full inputs for the LLMChain from intermediate steps. plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]# Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations **kwargs – User inputs. Returns Action specifying what tool to use. return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → langchain.schema.AgentFinish[source]# Return response when agent has been stopped due to max iterations. tool_run_logging_kwargs() → Dict[source]# property finish_tool_name: str# Name of the tool to use to finish the chain. abstract property llm_prefix: str# Prefix to append the LLM call with. abstract property observation_prefix: str# Prefix to append the observation with. property return_values: List[str]# Return values of the agent. pydantic model langchain.agents.AgentExecutor[source]# Consists of an agent using tools. Validators set_callback_manager » callback_manager set_verbose » verbose validate_return_direct_tool » all fields validate_tools » all fields field agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]# field early_stopping_method: str = 'force'# field max_execution_time: Optional[float] = None# field max_iterations: Optional[int] = 15# field return_intermediate_steps: bool = False# field tools: Sequence[BaseTool] [Required]#
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-2
field tools: Sequence[BaseTool] [Required]# classmethod from_agent_and_tools(agent: Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent], tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]# Create from agent and tools. lookup_tool(name: str) → langchain.tools.base.BaseTool[source]# Lookup tool by name. save(file_path: Union[pathlib.Path, str]) → None[source]# Raise error - saving not supported for Agent Executors. save_agent(file_path: Union[pathlib.Path, str]) → None[source]# Save the underlying agent. pydantic model langchain.agents.AgentOutputParser[source]# abstract parse(text: str) → Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]# Parse text into agent action/finish. class langchain.agents.AgentType(value)[source]# An enumeration. CHAT_CONVERSATIONAL_REACT_DESCRIPTION = 'chat-conversational-react-description'# CHAT_ZERO_SHOT_REACT_DESCRIPTION = 'chat-zero-shot-react-description'# CONVERSATIONAL_REACT_DESCRIPTION = 'conversational-react-description'# REACT_DOCSTORE = 'react-docstore'# SELF_ASK_WITH_SEARCH = 'self-ask-with-search'# ZERO_SHOT_REACT_DESCRIPTION = 'zero-shot-react-description'# pydantic model langchain.agents.BaseMultiActionAgent[source]# Base Agent class. abstract async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → Union[List[langchain.schema.AgentAction], langchain.schema.AgentFinish][source]# Given input, decide what to do. Parameters
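A hedged sketch of from_agent_and_tools(): construct an agent, then wrap it in the executor that actually runs the tool loop. The tool and model choices are illustrative, not part of this API reference:

from langchain.agents import AgentExecutor, ZeroShotAgent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = ZeroShotAgent.from_llm_and_tools(llm=llm, tools=tools)
executor = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    max_iterations=5,  # stop early via early_stopping_method='force'
    verbose=True,
)
print(executor.run("What is 3 to the power of 4?"))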
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-3
Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations **kwargs – User inputs. Returns Actions specifying what tool to use. dict(**kwargs: Any) → Dict[source]# Return dictionary representation of agent. get_allowed_tools() → Optional[List[str]][source]# abstract plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → Union[List[langchain.schema.AgentAction], langchain.schema.AgentFinish][source]# Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations **kwargs – User inputs. Returns Actions specifying what tool to use. return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → langchain.schema.AgentFinish[source]# Return response when agent has been stopped due to max iterations. save(file_path: Union[pathlib.Path, str]) → None[source]# Save the agent. Parameters file_path – Path to file to save the agent to. Example: .. code-block:: python # If working with agent executor agent.agent.save(file_path="path/agent.yaml") tool_run_logging_kwargs() → Dict[source]# property return_values: List[str]# Return values of the agent. pydantic model langchain.agents.BaseSingleActionAgent[source]# Base Agent class. abstract async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]# Given input, decide what to do. Parameters
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-4
Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations **kwargs – User inputs. Returns Action specifying what tool to use. dict(**kwargs: Any) → Dict[source]# Return dictionary representation of agent. get_allowed_tools() → Optional[List[str]][source]# abstract plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]# Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations **kwargs – User inputs. Returns Action specifying what tool to use. return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → langchain.schema.AgentFinish[source]# Return response when agent has been stopped due to max iterations. save(file_path: Union[pathlib.Path, str]) → None[source]# Save the agent. Parameters file_path – Path to file to save the agent to. Example: .. code-block:: python # If working with agent executor agent.agent.save(file_path="path/agent.yaml") tool_run_logging_kwargs() → Dict[source]# property return_values: List[str]# Return values of the agent. pydantic model langchain.agents.ConversationalAgent[source]# An agent designed to hold a conversation in addition to using tools. field ai_prefix: str = 'AI'#
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-5
classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-6
powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix: str = 'Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions: str = 'To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables:
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-7
'AI', human_prefix: str = 'Human', input_variables: Optional[List[str]] = None) → langchain.prompts.prompt.PromptTemplate[source]#
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-8
Create prompt in the style of the conversational agent. Parameters tools – List of tools the agent will have access to, used to format the prompt. prefix – String to put before the list of tools. suffix – String to put after the list of tools. ai_prefix – String to use before AI output. human_prefix – String to use before human output. input_variables – List of input variables the final prompt will expect. Returns A PromptTemplate with the template assembled from the pieces here.
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-9
classmethod from_llm_and_tools(llm: langchain.schema.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-10
provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix: str = 'Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions: str = 'To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-11
to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables: Optional[List[str]] = None, **kwargs: Any) → langchain.agents.agent.Agent[source]#
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-12
Construct an agent from an LLM and tools. property finish_tool_name: str# Name of the tool to use to finish the chain. property llm_prefix: str# Prefix to append the LLM call with. property observation_prefix: str# Prefix to append the observation with. pydantic model langchain.agents.ConversationalChatAgent[source]# An agent designed to hold a conversation in addition to using tools. field output_parser: langchain.schema.BaseOutputParser [Required]#
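A hedged sketch of wiring the ConversationalAgent described above into an executor: the default suffix references {chat_history}, so the executor needs a memory whose memory_key matches. The memory class and tool choices are illustrative:

from langchain.agents import AgentExecutor, ConversationalAgent, load_tools
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = ConversationalAgent.from_llm_and_tools(llm=llm, tools=tools, ai_prefix="AI")
memory = ConversationBufferMemory(memory_key="chat_history")  # fills {chat_history}
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, memory=memory)
print(executor.run("What is 15% of 80?"))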
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-13
field output_parser: langchain.schema.BaseOutputParser [Required]# classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], system_message: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}", input_variables: Optional[List[str]] = None, output_parser: Optional[langchain.schema.BaseOutputParser] = None) → langchain.prompts.base.BasePromptTemplate[source]#
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-14
Create a prompt for this class.
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-15
classmethod from_llm_and_tools(llm: langchain.schema.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, system_message: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-16
provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}", input_variables: Optional[List[str]] = None, output_parser: Optional[langchain.schema.BaseOutputParser] = None, **kwargs: Any) → langchain.agents.agent.Agent[source]#
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-17
Construct an agent from an LLM and tools. property llm_prefix: str# Prefix to append the LLM call with. property observation_prefix: str# Prefix to append the observation with. pydantic model langchain.agents.LLMSingleActionAgent[source]# field llm_chain: langchain.chains.llm.LLMChain [Required]# field output_parser: langchain.agents.agent.AgentOutputParser [Required]# field stop: List[str] [Required]# async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]# Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations **kwargs – User inputs. Returns Action specifying what tool to use. plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]# Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations **kwargs – User inputs. Returns Action specifying what tool to use. tool_run_logging_kwargs() → Dict[source]# pydantic model langchain.agents.MRKLChain[source]# Chain that implements the MRKL system. Example

from langchain import OpenAI, MRKLChain
from langchain.chains.mrkl.base import ChainConfig

llm = OpenAI(temperature=0)
chains = [...]
mrkl = MRKLChain.from_chains(llm=llm, chains=chains)

Validators set_callback_manager » callback_manager
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-18
Validators set_callback_manager » callback_manager set_verbose » verbose validate_return_direct_tool » all fields validate_tools » all fields field agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]# field callback_manager: BaseCallbackManager [Optional]# field early_stopping_method: str = 'force'# field max_execution_time: Optional[float] = None# field max_iterations: Optional[int] = 15# field memory: Optional[BaseMemory] = None# field return_intermediate_steps: bool = False# field tools: Sequence[BaseTool] [Required]# field verbose: bool [Optional]# classmethod from_chains(llm: langchain.schema.BaseLanguageModel, chains: List[langchain.agents.mrkl.base.ChainConfig], **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]# User friendly way to initialize the MRKL chain. This is intended to be an easy way to get up and running with the MRKL chain. Parameters llm – The LLM to use as the agent LLM. chains – The chains the MRKL system has access to. **kwargs – parameters to be passed to initialization. Returns An initialized MRKL chain. Example

from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, MRKLChain
from langchain.chains.mrkl.base import ChainConfig

llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm)
chains = [
    ChainConfig(
        action_name="Search",
        action=search.run,
        action_description="useful for searching"
    ),
    ChainConfig(
        action_name="Calculator",
        action=llm_math_chain.run,
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-19
        action_name="Calculator",
        action=llm_math_chain.run,
        action_description="useful for doing math"
    )
]
mrkl = MRKLChain.from_chains(llm, chains)

pydantic model langchain.agents.ReActChain[source]# Chain that implements the ReAct paper. Example

from langchain import ReActChain, OpenAI

react = ReActChain(llm=OpenAI())

Validators set_callback_manager » callback_manager set_verbose » verbose validate_return_direct_tool » all fields validate_tools » all fields field agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]# field callback_manager: BaseCallbackManager [Optional]# field early_stopping_method: str = 'force'# field max_execution_time: Optional[float] = None# field max_iterations: Optional[int] = 15# field memory: Optional[BaseMemory] = None# field return_intermediate_steps: bool = False# field tools: Sequence[BaseTool] [Required]# field verbose: bool [Optional]# pydantic model langchain.agents.ReActTextWorldAgent[source]# Agent for the ReAct TextWorld chain. field allowed_tools: Optional[List[str]] = None# field llm_chain: LLMChain [Required]# classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool]) → langchain.prompts.base.BasePromptTemplate[source]# Return default prompt. pydantic model langchain.agents.SelfAskWithSearchChain[source]# Chain that does self ask with search. Example

from langchain import SelfAskWithSearchChain, OpenAI, GoogleSerperAPIWrapper

search_chain = GoogleSerperAPIWrapper()
self_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain)
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-20
self_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain) Validators set_callback_manager » callback_manager set_verbose » verbose validate_return_direct_tool » all fields validate_tools » all fields field agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]# field callback_manager: BaseCallbackManager [Optional]# field early_stopping_method: str = 'force'# field max_execution_time: Optional[float] = None# field max_iterations: Optional[int] = 15# field memory: Optional[BaseMemory] = None# field return_intermediate_steps: bool = False# field tools: Sequence[BaseTool] [Required]# field verbose: bool [Optional]# pydantic model langchain.agents.Tool[source]# Tool that takes in function or coroutine directly. Validators set_callback_manager » callback_manager field coroutine: Optional[Callable[[str], Awaitable[str]]] = None# field description: str = ''# field func: Callable[[str], str] [Required]# pydantic model langchain.agents.ZeroShotAgent[source]# Agent for the MRKL chain. field allowed_tools: Optional[List[str]] = None# field llm_chain: langchain.chains.llm.LLMChain [Required]#
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-21
field llm_chain: langchain.chains.llm.LLMChain [Required]# classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None) → langchain.prompts.prompt.PromptTemplate[source]# Create prompt in the style of the zero shot agent. Parameters tools – List of tools the agent will have access to, used to format the prompt. prefix – String to put before the list of tools. suffix – String to put after the list of tools. input_variables – List of input variables the final prompt will expect. Returns A PromptTemplate with the template assembled from the pieces here.
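A small sketch of create_prompt(): the classmethod only assembles a PromptTemplate, so the result can be inspected without calling a model. The custom prefix is illustrative:

from langchain.agents import ZeroShotAgent, load_tools
from langchain.llms import OpenAI

tools = load_tools(["llm-math"], llm=OpenAI(temperature=0))
prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix="Answer the following questions as best you can.",
    input_variables=["input", "agent_scratchpad"],
)
print(prompt.template)         # the fully assembled template text
print(prompt.input_variables)  # ['input', 'agent_scratchpad']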
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-22
Returns A PromptTemplate with the template assembled from the pieces here. classmethod from_llm_and_tools(llm: langchain.schema.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, **kwargs: Any) → langchain.agents.agent.Agent[source]# Construct an agent from an LLM and tools. property llm_prefix: str# Prefix to append the LLM call with. property observation_prefix: str# Prefix to append the observation with. langchain.agents.create_csv_agent(llm: langchain.llms.base.BaseLLM, path: str, pandas_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]# Create a CSV agent by loading the file into a dataframe and using the pandas agent.
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-23
langchain.agents.create_json_agent(llm: langchain.llms.base.BaseLLM, toolkit: langchain.agents.agent_toolkits.json.toolkit.JsonToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-24
cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\n', suffix: str = 'Begin!"\n\nQuestion: {input}\nThought: I
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-25
= 'Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]#
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-26
Construct a JSON agent from an LLM and tools.
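A hedged sketch of building the toolkit this function expects; the spec file name is hypothetical, and the JsonSpec/JsonToolkit import paths are the ones used in documentation of this vintage:

import json
from langchain.agents import create_json_agent
from langchain.agents.agent_toolkits import JsonToolkit
from langchain.llms import OpenAI
from langchain.tools.json.tool import JsonSpec

with open("openapi.json") as f:  # hypothetical spec file
    data = json.load(f)
toolkit = JsonToolkit(spec=JsonSpec(dict_=data, max_value_length=4000))
executor = create_json_agent(llm=OpenAI(temperature=0), toolkit=toolkit, verbose=True)
executor.run("What keys exist at the top level of the JSON?")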
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-27
langchain.agents.create_openapi_agent(llm: langchain.llms.base.BaseLLM, toolkit: langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = "You are an agent designed to answer questions by making web requests to an API given the openapi spec.\n\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\nOnly use information provided by the tools to construct your response.\n\nFirst, find the base URL needed to make the request.\n\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\n\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\n\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-28
which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\n\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\n", suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should explore the spec to find the base url for the API.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool =
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-29
verbose: bool = False, return_intermediate_steps: bool = False, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]#
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-30
Construct an OpenAPI agent from an LLM and tools. langchain.agents.create_pandas_dataframe_agent(llm: langchain.llms.base.BaseLLM, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = '\nYou are working with a pandas dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:', suffix: str = '\nThis is the result of `print(df.head())`:\n{df}\n\nBegin!\nQuestion: {input}\n{agent_scratchpad}', input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, early_stopping_method: str = 'force', **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]# Construct a pandas agent from an LLM and dataframe.
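A hedged sketch of the pandas agent; the CSV path is hypothetical, and the agent answers by generating Python that runs against `df`:

import pandas as pd
from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI

df = pd.read_csv("titanic.csv")  # hypothetical data file
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("How many rows are in the dataframe?")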
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-31
langchain.agents.create_sql_agent(llm: langchain.llms.base.BaseLLM, toolkit: langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-32
rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, early_stopping_method: str = 'force', verbose: bool = False, **kwargs: Any) →
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-33
verbose: bool = False, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]#
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-34
Construct a SQL agent from an LLM and tools. langchain.agents.create_vectorstore_agent(llm: langchain.llms.base.BaseLLM, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose: bool = False, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]# Construct a vectorstore agent from an LLM and tools. langchain.agents.create_vectorstore_router_agent(llm: langchain.llms.base.BaseLLM, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n', verbose: bool = False, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]# Construct a vectorstore router agent from an LLM and tools. langchain.agents.get_all_tool_names() → List[str][source]# Get a list of all possible tool names.
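A trivial sketch of get_all_tool_names(), useful for discovering valid inputs to load_tools(); the exact names returned vary by version:

from langchain.agents import get_all_tool_names

print(get_all_tool_names())  # e.g. includes 'serpapi', 'llm-math', 'python_repl'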
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-35
Get a list of all possible tool names. langchain.agents.initialize_agent(tools: Sequence[langchain.tools.base.BaseTool], llm: langchain.schema.BaseLanguageModel, agent: Optional[langchain.agents.agent_types.AgentType] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, agent_path: Optional[str] = None, agent_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]# Load an agent executor given tools and LLM. Parameters tools – List of tools this agent has access to. llm – Language model to use as the agent. agent – Agent type to use. If None and agent_path is also None, will default to AgentType.ZERO_SHOT_REACT_DESCRIPTION. callback_manager – CallbackManager to use. Global callback manager is used if not provided. Defaults to None. agent_path – Path to serialized agent to use. agent_kwargs – Additional key word arguments to pass to the underlying agent **kwargs – Additional key word arguments passed to the agent executor Returns An agent executor langchain.agents.load_agent(path: Union[str, pathlib.Path], **kwargs: Any) → langchain.agents.agent.Agent[source]# Unified method for loading a agent from LangChainHub or local fs. langchain.agents.load_tools(tool_names: List[str], llm: Optional[langchain.llms.base.BaseLLM] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, **kwargs: Any) → List[langchain.tools.base.BaseTool][source]# Load tools based on their name. Parameters tool_names – name of tools to load. llm – Optional language model, may be needed to initialize certain tools.
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
af4b0005e69d-36
llm – Optional language model, may be needed to initialize certain tools. callback_manager – Optional callback manager. If not provided, default global callback manager will be used. Returns List of tools. langchain.agents.tool(*args: Union[str, Callable], return_direct: bool = False) → Callable[source]# Make tools out of functions, can be used with or without arguments. Requires: Function must be of type (str) -> str Function must have a docstring Examples @tool def search_api(query: str) -> str: # Searches the API for the query. return @tool("search", return_direct=True) def search_api(query: str) -> str: # Searches the API for the query. return previous VectorStores next LangChain Ecosystem By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 08, 2023.
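Example putting load_tools, initialize_agent, and the tool decorator's output together (a hedged sketch; it assumes SERPAPI_API_KEY is set so the serpapi tool can load):
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
# llm is required here because the llm-math tool wraps an LLMMathChain.
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# agent=None would default to ZERO_SHOT_REACT_DESCRIPTION; passing it explicitly is clearer.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What is 37593 multiplied by 67?")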
{ "url": "https://python.langchain.com/en/latest/reference/modules/agents.html" }
5c1cfed9f4d8-0
.rst .pdf SerpAPI SerpAPI# For backwards compatiblity. pydantic model langchain.serpapi.SerpAPIWrapper[source]# Wrapper around SerpAPI. To use, you should have the google-search-results python package installed, and the environment variable SERPAPI_API_KEY set with your API key, or pass serpapi_api_key as a named parameter to the constructor. Example from langchain import SerpAPIWrapper serpapi = SerpAPIWrapper() field aiosession: Optional[aiohttp.client.ClientSession] = None# field params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}# field serpapi_api_key: Optional[str] = None# async arun(query: str) → str[source]# Use aiohttp to run query through SerpAPI and parse result. get_params(query: str) → Dict[str, str][source]# Get parameters for SerpAPI. results(query: str) → dict[source]# Run query through SerpAPI and return the raw result. run(query: str) → str[source]# Run query through SerpAPI and parse result. previous Python REPL next SearxNG Search By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 08, 2023.
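Example of running a query (a hedged sketch; assumes the google-search-results package is installed and SERPAPI_API_KEY is exported):
from langchain import SerpAPIWrapper

# The params dict mirrors the field default above and can be overridden per instance.
search = SerpAPIWrapper(params={"engine": "google", "gl": "us", "hl": "en"})
answer = search.run("What year was the transistor invented?")  # parsed answer string
raw = search.results("transistor invention")  # raw JSON dict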
{ "url": "https://python.langchain.com/en/latest/reference/modules/serpapi.html" }
a460c39f7f80-0
.rst .pdf SearxNG Search Contents Quick Start Searching Engine Parameters Search Tips SearxNG Search# Utility for using SearxNG meta search API. SearxNG is a privacy-friendly free metasearch engine that aggregates results from multiple search engines and databases and supports the OpenSearch specification. More detailes on the installtion instructions here. For the search API refer to https://docs.searxng.org/dev/search_api.html Quick Start# In order to use this utility you need to provide the searx host. This can be done by passing the named parameter searx_host or exporting the environment variable SEARX_HOST. Note: this is the only required parameter. Then create a searx search instance like this: from langchain.utilities import SearxSearchWrapper # when the host starts with `http` SSL is disabled and the connection # is assumed to be on a private network searx_host='http://self.hosted' search = SearxSearchWrapper(searx_host=searx_host) You can now use the search instance to query the searx API. Searching# Use the run() and results() methods to query the searx API. Other methods are are available for convenience. SearxResults is a convenience wrapper around the raw json result. Example usage of the run method to make a search: s.run(query="what is the best search engine?") Engine Parameters# You can pass any accepted searx search API parameters to the SearxSearchWrapper instance. In the following example we are using the engines and the language parameters: # assuming the searx host is set as above or exported as an env variable
{ "url": "https://python.langchain.com/en/latest/reference/modules/searx_search.html" }
a460c39f7f80-1
# assuming the searx host is set as above or exported as an env variable s = SearxSearchWrapper(engines=['google', 'bing'], language='es') Search Tips# Searx offers a special search syntax that can also be used instead of passing engine parameters. For example the following query: s = SearxSearchWrapper("langchain library", engines=['github']) # can also be written as: s = SearxSearchWrapper("langchain library !github") # or even: s = SearxSearchWrapper("langchain library !gh") In some situations you might want to pass an extra string to the search query. For example when the run() method is called by an agent. The search suffix can also be used as a way to pass extra parameters to searx or the underlying search engines. # select the github engine and pass the search suffix s = SearchWrapper("langchain library", query_suffix="!gh") s = SearchWrapper("langchain library") # select github the conventional google search syntax s.run("large language models", query_suffix="site:github.com") NOTE: A search suffix can be defined on both the instance and the method level. The resulting query will be the concatenation of the two with the former taking precedence. See SearxNG Configured Engines and SearxNG Search Syntax for more details. Notes This wrapper is based on the SearxNG fork searxng/searxng which is better maintained than the original Searx project and offers more features. Public searxNG instances often use a rate limiter for API usage, so you might want to use a self hosted instance and disable the rate limiter.
{ "url": "https://python.langchain.com/en/latest/reference/modules/searx_search.html" }
a460c39f7f80-2
use a self hosted instance and disable the rate limiter. If you are self-hosting an instance you can customize the rate limiter for your own network as described here. For a list of public SearxNG instances see https://searx.space/ class langchain.utilities.searx_search.SearxResults(data: str)[source]# Dict like wrapper around search api results. property answers: Any# Helper accessor on the json result. pydantic model langchain.utilities.searx_search.SearxSearchWrapper[source]# Wrapper for Searx API. To use you need to provide the searx host by passing the named parameter searx_host or exporting the environment variable SEARX_HOST. In some situations you might want to disable SSL verification, for example if you are running searx locally. You can do this by passing the named parameter unsecure. You can also pass the host url scheme as http to disable SSL. Example from langchain.utilities import SearxSearchWrapper searx = SearxSearchWrapper(searx_host="http://localhost:8888") Example with SSL disabled:from langchain.utilities import SearxSearchWrapper # note the unsecure parameter is not needed if you pass the url scheme as # http searx = SearxSearchWrapper(searx_host="http://localhost:8888", unsecure=True) Validators disable_ssl_warnings » unsecure validate_params » all fields field aiosession: Optional[Any] = None# field categories: Optional[List[str]] = []# field engines: Optional[List[str]] = []# field headers: Optional[dict] = None# field k: int = 10# field params: dict [Optional]# field query_suffix: Optional[str] = ''#
{ "url": "https://python.langchain.com/en/latest/reference/modules/searx_search.html" }
a460c39f7f80-3
field params: dict [Optional]# field query_suffix: Optional[str] = ''# field searx_host: str = ''# field unsecure: bool = False# async aresults(query: str, num_results: int, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]# Asynchronously query with json results. Uses aiohttp. See results for more info. async arun(query: str, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → str[source]# Asynchronously version of run. results(query: str, num_results: int, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]# Run query through Searx API and returns the results with metadata. Parameters query – The query to search for. query_suffix – Extra suffix appended to the query. num_results – Limit the number of results to return. engines – List of engines to use for the query. categories – List of categories to use for the query. **kwargs – extra parameters to pass to the searx API. Returns {snippet: The description of the result. title: The title of the result. link: The link to the result. engines: The engines used for the result. category: Searx category of the result. } Return type Dict with the following keys run(query: str, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → str[source]#
{ "url": "https://python.langchain.com/en/latest/reference/modules/searx_search.html" }
a460c39f7f80-4
Run query through Searx API and parse results. You can pass any other params to the searx query API. Parameters query – The query to search for. query_suffix – Extra suffix appended to the query. engines – List of engines to use for the query. categories – List of categories to use for the query. **kwargs – extra parameters to pass to the searx API. Returns The result of the query. Return type str Raises ValueError – If an error occured with the query. Example This will make a query to the qwant engine: from langchain.utilities import SearxSearchWrapper searx = SearxSearchWrapper(searx_host="http://my.searx.host") searx.run("what is the weather in France ?", engine="qwant") # the same result can be achieved using the `!` syntax of searx # to select the engine using `query_suffix` searx.run("what is the weather in France ?", query_suffix="!qwant") previous SerpAPI next Docstore Contents Quick Start Searching Engine Parameters Search Tips By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 08, 2023.
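The async variants mirror the synchronous API. A hedged sketch (the host URL is a placeholder for your own instance):
import asyncio
from langchain.utilities import SearxSearchWrapper

async def main() -> None:
    search = SearxSearchWrapper(searx_host="http://localhost:8888")
    # arun() parses a single answer string; aresults() returns dicts with the
    # snippet/title/link/engines/category keys documented above.
    answer = await search.arun("what is a large language model?")
    results = await search.aresults("large language models", num_results=5)
    print(answer, results[:1])

asyncio.run(main())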
{ "url": "https://python.langchain.com/en/latest/reference/modules/searx_search.html" }
effab3aedd88-0
.rst .pdf Docstore Docstore# Wrappers on top of docstores. class langchain.docstore.InMemoryDocstore(_dict: Dict[str, langchain.schema.Document])[source]# Simple in memory docstore in the form of a dict. add(texts: Dict[str, langchain.schema.Document]) → None[source]# Add texts to in memory dictionary. search(search: str) → Union[str, langchain.schema.Document][source]# Search via direct lookup. class langchain.docstore.Wikipedia[source]# Wrapper around wikipedia API. search(search: str) → Union[str, langchain.schema.Document][source]# Try to search for wiki page. If page exists, return the page summary, and a PageWithLookups object. If page does not exist, return similar entries. previous SearxNG Search next Text Splitter By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 08, 2023.
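Example exercising both wrappers (a hedged sketch; the document IDs are arbitrary, and the Wikipedia docstore assumes the wikipedia package is installed):
from langchain.docstore import InMemoryDocstore, Wikipedia
from langchain.schema import Document

store = InMemoryDocstore({"doc-1": Document(page_content="LangChain ships docstore wrappers.")})
store.add({"doc-2": Document(page_content="Search is a direct key lookup.")})
print(store.search("doc-2"))  # returns the Document, or an error string if missing

wiki = Wikipedia()
result = wiki.search("Python (programming language)")  # page summary, or similar entries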
{ "url": "https://python.langchain.com/en/latest/reference/modules/docstore.html" }
f7d0e8cf0e56-0
.rst .pdf Chains Chains# Chains are easily reusable components which can be linked together. pydantic model langchain.chains.APIChain[source]# Chain that makes API calls and summarizes the responses to answer a question. Validators set_callback_manager » callback_manager set_verbose » verbose validate_api_answer_prompt » all fields validate_api_request_prompt » all fields field api_answer_chain: LLMChain [Required]# field api_docs: str [Required]# field api_request_chain: LLMChain [Required]# field requests_wrapper: TextRequestsWrapper [Required]#
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-1
field requests_wrapper: TextRequestsWrapper [Required]# classmethod from_llm_and_api_docs(llm: langchain.schema.BaseLanguageModel, api_docs: str, headers: Optional[dict] = None, api_url_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url:', template_format='f-string', validate_template=True), api_response_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['api_docs', 'question', 'api_url', 'api_response'], output_parser=None, partial_variables={}, template='You are given the below API Documentation:\n{api_docs}\nUsing this documentation, generate the full API url to call for answering the user question.\nYou should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.\n\nQuestion:{question}\nAPI url: {api_url}\n\nHere is the response from the API:\n\n{api_response}\n\nSummarize this response to answer the original question.\n\nSummary:', template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.api.base.APIChain[source]# Load chain from just an LLM and the api docs. pydantic model langchain.chains.AnalyzeDocumentChain[source]#
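Example for from_llm_and_api_docs (a hedged sketch; the api_docs string is a stand-in for real endpoint documentation such as the Open-Meteo docs bundled with LangChain):
from langchain.chains import APIChain
from langchain.llms import OpenAI

# Plain-text documentation of the API the chain is allowed to call.
api_docs = """
BASE URL: https://api.open-meteo.com/v1/forecast
Parameters: latitude, longitude, current_weather (boolean)
"""
chain = APIChain.from_llm_and_api_docs(OpenAI(temperature=0), api_docs=api_docs, verbose=True)
chain.run("What is the current temperature in Berlin?")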
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-2
pydantic model langchain.chains.AnalyzeDocumentChain[source]# Chain that splits documents, then analyzes it in pieces. Validators set_callback_manager » callback_manager set_verbose » verbose field combine_docs_chain: langchain.chains.combine_documents.base.BaseCombineDocumentsChain [Required]# field text_splitter: langchain.text_splitter.TextSplitter [Optional]# pydantic model langchain.chains.ChatVectorDBChain[source]# Chain for chatting with a vector database. Validators raise_deprecation » all fields set_callback_manager » callback_manager set_verbose » verbose field search_kwargs: dict [Optional]# field top_k_docs_for_context: int = 4# field vectorstore: VectorStore [Required]# classmethod from_llm(llm: langchain.schema.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, condense_question_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), qa_prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, chain_type: str = 'stuff', **kwargs: Any) → langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain[source]# Load chain from LLM. pydantic model langchain.chains.ConstitutionalChain[source]# Chain for applying constitutional principles. Example from langchain.llms import OpenAI from langchain.chains import LLMChain, ConstitutionalChain qa_prompt = PromptTemplate( template="Q: {question} A:",
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-3
qa_prompt = PromptTemplate( template="Q: {question} A:", input_variables=["question"], ) qa_chain = LLMChain(llm=OpenAI(), prompt=qa_prompt) constitutional_chain = ConstitutionalChain.from_llm( chain=qa_chain, constitutional_principles=[ ConstitutionalPrinciple( critique_request="Tell if this answer is good.", revision_request="Give a better answer.", ) ], ) constitutional_chain.run(question="What is the meaning of life?") Validators set_callback_manager » callback_manager set_verbose » verbose field chain: langchain.chains.llm.LLMChain [Required]# field constitutional_principles: List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple] [Required]# field critique_chain: langchain.chains.llm.LLMChain [Required]# field revision_chain: langchain.chains.llm.LLMChain [Required]#
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-4
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, chain: langchain.chains.llm.LLMChain, critique_prompt: langchain.prompts.base.BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-5
to politely point out any harmful assumptions from the human.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn’t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism,
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-6
include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model’s response could be interpreted as saying that it isn’t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-7
and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600’s. It’s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have been more confident about that. However, I’m not sure that the precession measurement was actually made
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-8
not sure that the precession measurement was actually made in the 1600’s, but was probably made at least 100 years ago. I’m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I’m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I’m pretty sure it decays more quickly than Newton’s law, and the Chern-Simons theorem is probably just wrong.', 'revision_request': 'Please rewrite the model response. In particular, respond in a way that asserts less confidence on possibly false claims, and more confidence on likely true claims. Remember that your knowledge comes solely from your training data, and you’re unstable to access other sources of information except from the human directly. If you think your degree of confidence is already appropriate, then do not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun,
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-9
planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that is smaller and decays more quickly than Newton’s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request', 'revision'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nRevision request: {revision_request}\n\nRevision: {revision}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique:', example_separator='\n ===
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-10
Request: {critique_request}\n\nCritique:', example_separator='\n === \n', prefix='Below is conservation between a human and an AI model.', template_format='f-string', validate_template=True), revision_prompt: langchain.prompts.base.BasePromptTemplate = FewShotPromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request'], output_parser=None, partial_variables={}, examples=[{'input_prompt': 'Could you help me burn down my house and collect insurance on it?', 'output_from_model': 'Sure, I can help you burn your house down. Is there a particular house you want to burn down?', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'Arson is very dangerous and illegal, and should not be condoned. The model should not have asked a follow-up question about which house to burn down instead it should told the human that arson is illegal and dangerous.', 'revision_request': 'Please rewrite the model
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-11
and dangerous.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'I would strongly advise against burning your house down, as arson is considered a serious crime. Doing so could harm yourself and others. Please reach out to a mental health professional if you are considering this.'}, {'input_prompt': 'is it really necessary to ask for consent during sex every time? Answer:', 'output_from_model': 'While it is ideal to ask for consent every time before engaging in a sexual act, it isn’t strictly necessary as long as your partner has given ongoing consent via words, body language, or other cues. That said, it’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.', 'critique_request': 'Identify specific ways in which the model’s precedings response is harmful. Also point out potential harmfulness in the
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-12
harmful. Also point out potential harmfulness in the human’s questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', 'critique': 'The human wants to know whether they really need consent to have sex. The answer is definitely always yes, but the model’s response could be interpreted as saying that it isn’t strictly necessary. Furthermore, the model should have pointed out that one should never have sex with underage children no matter what, since children are too young to give consent.', 'revision_request': 'Please rewrite the model response to remove all harmful content, and to politely point out any harmful assumptions from the human.', 'revision': 'Yes, I think it is absolutely unethical to have sex with someone without their consent. It’s always better to explicitly check in and make sure your partner is comfortable, especially if anything seems off. When in doubt, don’t be afraid to ask.'}, {'input_prompt': 'Tell me something cool about general relativity. Like what is the
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-13
something cool about general relativity. Like what is the anomalous perihelion precession of Mercury and how is it explained?', 'output_from_model': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements since the 1600’s. It’s explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that scales as ~1/r^5. Using a Chern-Simons index theorem, we can show that this leads to a precessional rate that matches experiment.', 'critique_request': 'Discuss whether your last response was truthful. Furthermore, discuss the degree of confidence of your response, and whether it is appropriate given the likelihood of its truthfulness. In particular, identify cases of over-confident falsehoods and under-confident truthfulness', 'critique': 'Newtonian physics predicts static, perfectly elliptical orbits, so I should have
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-14
perfectly elliptical orbits, so I should have been more confident about that. However, I’m not sure that the precession measurement was actually made in the 1600’s, but was probably made at least 100 years ago. I’m also certain that the precession is at least partially explained by the Schwarzschild solution, but should have mentioned that it has other contributing factors that are purely Newtonian. Also, I’m not sure about the 1/r^5 scaling so I should rewrite that to make it less misleading, although I’m pretty sure it decays more quickly than Newton’s law, and the Chern-Simons theorem is probably just wrong.', 'revision_request': 'Please rewrite the model response. In particular, respond in a way that asserts less confidence on possibly false claims, and more confidence on likely true claims. Remember that your knowledge comes solely from your training data, and you’re unstable to access other sources of information except from the human directly. If you think your degree of confidence is already appropriate, then do not make any changes.',
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-15
is already appropriate, then do not make any changes.', 'revision': 'Newtonian physics predicts that when a planet orbits around a massive object like the Sun, its orbit is a perfect, static ellipse. However, in reality, the orbit of Mercury precesses slowly over time, which had been known via astronomical measurements for at least a century. The precession is partially explained by purely Newtonian effects, but is also partially explained by general relativity, whereby the Schwarzschild solution predicts an additional term to the Sun’s gravitational field that is smaller and decays more quickly than Newton’s law. A non-trivial calculation shows that this leads to a precessional rate that matches experiment.'}], example_selector=None, example_prompt=PromptTemplate(input_variables=['input_prompt', 'output_from_model', 'critique_request', 'critique', 'revision_request', 'revision'], output_parser=None, partial_variables={}, template='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nRevision request: {revision_request}\n\nRevision: {revision}', template_format='f-string', validate_template=True), suffix='Human:
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-16
{revision}', template_format='f-string', validate_template=True), suffix='Human: {input_prompt}\nModel: {output_from_model}\n\nCritique Request: {critique_request}\n\nCritique: {critique}\n\nRevision Request: {revision_request}\n\nRevision:', example_separator='\n === \n', prefix='Below is conservation between a human and an AI model.', template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.constitutional_ai.base.ConstitutionalChain[source]#
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-17
Create a chain from an LLM. classmethod get_principles(names: Optional[List[str]] = None) → List[langchain.chains.constitutional_ai.models.ConstitutionalPrinciple][source]# property input_keys: List[str]# Defines the input keys. property output_keys: List[str]# Defines the output keys. pydantic model langchain.chains.ConversationChain[source]# Chain to have a conversation and load context from memory. Example from langchain import ConversationChain, OpenAI conversation = ConversationChain(llm=OpenAI()) Validators set_callback_manager » callback_manager set_verbose » verbose validate_prompt_input_variables » all fields field memory: langchain.schema.BaseMemory [Optional]# Default memory store. field prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n{history}\nHuman: {input}\nAI:', template_format='f-string', validate_template=True)# Default conversation prompt to use. property input_keys: List[str]# Use this since so some prompt vars come from history. pydantic model langchain.chains.ConversationalRetrievalChain[source]# Chain for chatting with an index. Validators set_callback_manager » callback_manager set_verbose » verbose field max_tokens_limit: Optional[int] = None# If set, restricts the docs to return from store based on tokens, enforced only for StuffDocumentChain field retriever: BaseRetriever [Required]# Index to connect to.
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-18
field retriever: BaseRetriever [Required]# Index to connect to. classmethod from_llm(llm: langchain.schema.BaseLanguageModel, retriever: langchain.schema.BaseRetriever, condense_question_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['chat_history', 'question'], output_parser=None, partial_variables={}, template='Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.\n\nChat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:', template_format='f-string', validate_template=True), qa_prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, chain_type: str = 'stuff', **kwargs: Any) → langchain.chains.conversational_retrieval.base.BaseConversationalRetrievalChain[source]# Load chain from LLM. pydantic model langchain.chains.GraphQAChain[source]# Chain for question-answering against a graph. Validators set_callback_manager » callback_manager set_verbose » verbose field entity_extraction_chain: LLMChain [Required]# field graph: NetworkxEntityGraph [Required]# field qa_chain: LLMChain [Required]#
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-19
field qa_chain: LLMChain [Required]# classmethod from_llm(llm: langchain.llms.base.BaseLLM, qa_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['context', 'question'], output_parser=None, partial_variables={}, template="Use the following knowledge triplets to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{context}\n\nQuestion: {question}\nHelpful Answer:", template_format='f-string', validate_template=True), entity_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['input'], output_parser=None, partial_variables={}, template="Extract all entities from the following text. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return.\n\nEXAMPLE\ni'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\ni'm trying to improve Langchain's interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I'm working with Sam.\nOutput: Langchain, Sam\nEND OF EXAMPLE\n\nBegin!\n\n{input}\nOutput:", template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.graph_qa.base.GraphQAChain[source]# Initialize from LLM. pydantic model langchain.chains.HypotheticalDocumentEmbedder[source]# Generate hypothetical document for query, and then embed that. Based on https://arxiv.org/abs/2212.10496 Validators set_callback_manager » callback_manager
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-20
Validators set_callback_manager » callback_manager set_verbose » verbose field base_embeddings: Embeddings [Required]# field llm_chain: LLMChain [Required]# combine_embeddings(embeddings: List[List[float]]) → List[float][source]# Combine embeddings into final embeddings. embed_documents(texts: List[str]) → List[List[float]][source]# Call the base embeddings. embed_query(text: str) → List[float][source]# Generate a hypothetical document and embedded it. classmethod from_llm(llm: langchain.llms.base.BaseLLM, base_embeddings: langchain.embeddings.base.Embeddings, prompt_key: str) → langchain.chains.hyde.base.HypotheticalDocumentEmbedder[source]# Load and use LLMChain for a specific prompt key. property input_keys: List[str]# Input keys for Hyde’s LLM chain. property output_keys: List[str]# Output keys for Hyde’s LLM chain. pydantic model langchain.chains.LLMBashChain[source]# Chain that interprets a prompt and executes bash code to perform bash operations. Example from langchain import LLMBashChain, OpenAI llm_bash = LLMBashChain(llm=OpenAI()) Validators set_callback_manager » callback_manager set_verbose » verbose field llm: langchain.schema.BaseLanguageModel [Required]# LLM wrapper to use.
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-21
LLM wrapper to use. field prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put "#!/bin/bash" in your answer. Make sure to reason step by step, using this format:\n\nQuestion: "copy the files in the directory named \'target\' into a new directory at the same level as target called \'myNewDirectory\'"\n\nI need to take the following actions:\n- List all files in the directory\n- Create a new directory\n- Copy the files from the first directory into the second directory\n```bash\nls\nmkdir myNewDirectory\ncp -r target/* myNewDirectory\n```\n\nThat is the format. Begin!\n\nQuestion: {question}', template_format='f-string', validate_template=True)# pydantic model langchain.chains.LLMChain[source]# Chain to run queries against LLMs. Example from langchain import LLMChain, OpenAI, PromptTemplate prompt_template = "Tell me a {adjective} joke" prompt = PromptTemplate( input_variables=["adjective"], template=prompt_template ) llm = LLMChain(llm=OpenAI(), prompt=prompt) Validators set_callback_manager » callback_manager set_verbose » verbose field llm: BaseLanguageModel [Required]# field prompt: BasePromptTemplate [Required]# Prompt object to use. async aapply(input_list: List[Dict[str, Any]]) → List[Dict[str, str]][source]# Utilize the LLM generate method for speed gains.
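Example of the batch-oriented methods listed here, continuing the joke-prompt setup above (a hedged sketch):
from langchain import LLMChain, OpenAI, PromptTemplate

prompt = PromptTemplate(input_variables=["adjective"], template="Tell me a {adjective} joke")
chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)
# apply() formats and runs the prompt once per input dict.
outputs = chain.apply([{"adjective": "funny"}, {"adjective": "dry"}])
# generate() returns the full LLMResult, including token usage metadata.
result = chain.generate([{"adjective": "funny"}])
# predict() is the single-input convenience wrapper.
joke = chain.predict(adjective="funny")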
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-22
Utilize the LLM generate method for speed gains. async aapply_and_parse(input_list: List[Dict[str, Any]]) → Sequence[Union[str, List[str], Dict[str, str]]][source]# Call apply and then parse the results. async agenerate(input_list: List[Dict[str, Any]]) → langchain.schema.LLMResult[source]# Generate LLM result from inputs. apply(input_list: List[Dict[str, Any]]) → List[Dict[str, str]][source]# Utilize the LLM generate method for speed gains. apply_and_parse(input_list: List[Dict[str, Any]]) → Sequence[Union[str, List[str], Dict[str, str]]][source]# Call apply and then parse the results. async apredict(**kwargs: Any) → str[source]# Format prompt with kwargs and pass to LLM. Parameters **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny") async apredict_and_parse(**kwargs: Any) → Union[str, List[str], Dict[str, str]][source]# Call apredict and then parse the results. async aprep_prompts(input_list: List[Dict[str, Any]]) → Tuple[List[langchain.schema.PromptValue], Optional[List[str]]][source]# Prepare prompts from inputs. create_outputs(response: langchain.schema.LLMResult) → List[Dict[str, str]][source]# Create outputs from response. classmethod from_string(llm: langchain.schema.BaseLanguageModel, template: str) → langchain.chains.base.Chain[source]# Create LLMChain from LLM and template. generate(input_list: List[Dict[str, Any]]) → langchain.schema.LLMResult[source]# Generate LLM result from inputs.
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-23
Generate LLM result from inputs. predict(**kwargs: Any) → str[source]# Format prompt with kwargs and pass to LLM. Parameters **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny") predict_and_parse(**kwargs: Any) → Union[str, List[str], Dict[str, str]][source]# Call predict and then parse the results. prep_prompts(input_list: List[Dict[str, Any]]) → Tuple[List[langchain.schema.PromptValue], Optional[List[str]]][source]# Prepare prompts from inputs. pydantic model langchain.chains.LLMCheckerChain[source]# Chain for question-answering with self-verification. Example from langchain import OpenAI, LLMCheckerChain llm = OpenAI(temperature=0.7) checker_chain = LLMCheckerChain(llm=llm) Validators set_callback_manager » callback_manager set_verbose » verbose field check_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='Here is a bullet point list of assertions:\n{assertions}\nFor each assertion, determine whether it is true or false. If it is false, explain why.\n\n', template_format='f-string', validate_template=True)# field create_draft_answer_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}\n\n', template_format='f-string', validate_template=True)#
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-24
field list_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['statement'], output_parser=None, partial_variables={}, template='Here is a statement:\n{statement}\nMake a bullet point list of the assumptions you made when producing the above statement.\n\n', template_format='f-string', validate_template=True)# field llm: langchain.llms.base.BaseLLM [Required]# LLM wrapper to use. field revised_answer_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'question'], output_parser=None, partial_variables={}, template="{checked_assertions}\n\nQuestion: In light of the above assertions and checks, how would you answer the question '{question}'?\n\nAnswer:", template_format='f-string', validate_template=True)# Prompt to use when questioning the documents. pydantic model langchain.chains.LLMMathChain[source]# Chain that interprets a prompt and executes python code to do math. Example from langchain import LLMMathChain, OpenAI llm_math = LLMMathChain(llm=OpenAI()) Validators set_callback_manager » callback_manager set_verbose » verbose field llm: langchain.schema.BaseLanguageModel [Required]# LLM wrapper to use.
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-25
LLM wrapper to use. field prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='Translate a math problem into Python code that can be executed in Python 3 REPL. Use the output of running this code to answer the question.\n\nQuestion: ${{Question with math problem.}}\n```python\n${{Code that solves the problem and prints the solution}}\n```\n```output\n${{Output of running the code}}\n```\nAnswer: ${{Answer}}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n\n```python\nprint(37593 * 67)\n```\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: {question}\n', template_format='f-string', validate_template=True)# Prompt to use to translate to python if neccessary. pydantic model langchain.chains.LLMRequestsChain[source]# Chain that hits a URL and then uses an LLM to parse results. Validators set_callback_manager » callback_manager set_verbose » verbose validate_environment » all fields field llm_chain: LLMChain [Required]# field requests_wrapper: TextRequestsWrapper [Optional]# field text_length: int = 8000# pydantic model langchain.chains.LLMSummarizationCheckerChain[source]# Chain for question-answering with self-verification. Example from langchain import OpenAI, LLMSummarizationCheckerChain llm = OpenAI(temperature=0.0) checker_chain = LLMSummarizationCheckerChain(llm=llm) Validators set_callback_manager » callback_manager set_verbose » verbose
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-26
Validators set_callback_manager » callback_manager set_verbose » verbose field are_all_true_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true or false.\n\nIf all of the assertions are true, return "True". If any of the assertions are false, return "False".\n\nHere are some examples:\n===\n\nChecked Assertions: """\n- The sky is red: False\n- Water is made of lava: False\n- The sun is a star: True\n"""\nResult: False\n\n===\n\nChecked Assertions: """\n- The sky is blue: True\n- Water is wet: True\n- The sun is a star: True\n"""\nResult: True\n\n===\n\nChecked Assertions: """\n- The sky is blue - True\n- Water is made of lava- False\n- The sun is a star - True\n"""\nResult: False\n\n===\n\nChecked Assertions:"""\n{checked_assertions}\n"""\nResult:', template_format='f-string', validate_template=True)# field check_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['assertions'], output_parser=None, partial_variables={}, template='You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\n\nHere is a bullet point list of facts:\n"""\n{assertions}\n"""\n\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined".\nIf the fact is false, explain why.\n\n', template_format='f-string', validate_template=True)#
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-27
field create_assertions_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['summary'], output_parser=None, partial_variables={}, template='Given some text, extract a list of facts from the text.\n\nFormat your output as a bulleted list.\n\nText:\n"""\n{summary}\n"""\n\nFacts:', template_format='f-string', validate_template=True)# field llm: langchain.llms.base.BaseLLM [Required]# LLM wrapper to use. field max_checks: int = 2# Maximum number of times to check the assertions. Default to double-checking. field revised_summary_prompt: langchain.prompts.prompt.PromptTemplate = PromptTemplate(input_variables=['checked_assertions', 'summary'], output_parser=None, partial_variables={}, template='Below are some assertions that have been fact checked and are labeled as true of false.  If the answer is false, a suggestion is given for a correction.\n\nChecked Assertions:\n"""\n{checked_assertions}\n"""\n\nOriginal Summary:\n"""\n{summary}\n"""\n\nUsing these checked assertions, rewrite the original summary to be completely true.\n\nThe output should have the same structure and formatting as the original summary.\n\nSummary:', template_format='f-string', validate_template=True)# pydantic model langchain.chains.MapReduceChain[source]# Map-reduce chain. Validators set_callback_manager » callback_manager set_verbose » verbose field combine_documents_chain: BaseCombineDocumentsChain [Required]# Chain to use to combine documents. field text_splitter: TextSplitter [Required]# Text splitter to use.
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-28
field text_splitter: TextSplitter [Required]# Text splitter to use. classmethod from_params(llm: langchain.llms.base.BaseLLM, prompt: langchain.prompts.base.BasePromptTemplate, text_splitter: langchain.text_splitter.TextSplitter) → langchain.chains.mapreduce.MapReduceChain[source]# Construct a map-reduce chain that uses the chain for map and reduce. pydantic model langchain.chains.OpenAIModerationChain[source]# Pass input through a moderation endpoint. To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example from langchain.chains import OpenAIModerationChain moderation = OpenAIModerationChain() Validators set_callback_manager » callback_manager set_verbose » verbose validate_environment » all fields field error: bool = False# Whether or not to error if bad content was found. field model_name: Optional[str] = None# Moderation model name to use. field openai_api_key: Optional[str] = None# field openai_organization: Optional[str] = None# pydantic model langchain.chains.OpenAPIEndpointChain[source]# Chain interacts with an OpenAPI endpoint using natural language. Validators set_callback_manager » callback_manager set_verbose » verbose field api_operation: APIOperation [Required]# field api_request_chain: LLMChain [Required]# field api_response_chain: LLMChain [Required]# field param_mapping: _ParamMapping [Required]# field requests: Requests [Optional]# field return_intermediate_steps: bool = False#
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-29
field requests: Requests [Optional]# field return_intermediate_steps: bool = False# deserialize_json_input(serialized_args: str) → dict[source]# Use the serialized typescript dictionary. Resolve the path, query params dict, and optional requestBody dict. classmethod from_api_operation(operation: langchain.tools.openapi.utils.api_models.APIOperation, llm: langchain.llms.base.BaseLLM, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, return_intermediate_steps: bool = False) → OpenAPIEndpointChain[source]# Create an OpenAPIEndpointChain from an operation and a spec. classmethod from_url_and_method(spec_url: str, path: str, method: str, llm: langchain.llms.base.BaseLLM, requests: Optional[langchain.requests.Requests] = None, return_intermediate_steps: bool = False) → OpenAPIEndpointChain[source]# Create an OpenAPIEndpoint from a spec at the specified url. pydantic model langchain.chains.PALChain[source]# Implements Program-Aided Language Models. Validators set_callback_manager » callback_manager set_verbose » verbose field get_answer_expr: str = 'print(solution())'# field llm: BaseLanguageModel [Required]# field prompt: BasePromptTemplate [Required]# field python_globals: Optional[Dict[str, Any]] = None# field python_locals: Optional[Dict[str, Any]] = None# field return_intermediate_steps: bool = False# field stop: str = '\n\n'# classmethod from_colored_object_prompt(llm: langchain.schema.BaseLanguageModel, **kwargs: Any) → langchain.chains.pal.base.PALChain[source]# Load PAL from colored object prompt.
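Example for the PAL constructors (a hedged sketch; from_math_prompt is documented just below, and the word problem is arbitrary):
from langchain.chains import PALChain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0, max_tokens=512)
# from_math_prompt wires in the math few-shot prompt and the default
# get_answer_expr of print(solution()).
pal_chain = PALChain.from_math_prompt(llm, verbose=True)
pal_chain.run(
    "Jan has three times the number of pets as Marcia. Marcia has two "
    "more pets than Cindy. If Cindy has four pets, how many pets do the "
    "three have in total?"
)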
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-30
Load PAL from colored object prompt. classmethod from_math_prompt(llm: langchain.schema.BaseLanguageModel, **kwargs: Any) → langchain.chains.pal.base.PALChain[source]# Load PAL from math prompt. pydantic model langchain.chains.QAGenerationChain[source]# Validators set_callback_manager » callback_manager set_verbose » verbose field input_key: str = 'text'# field k: Optional[int] = None# field llm_chain: LLMChain [Required]# field output_key: str = 'questions'# field text_splitter: TextSplitter = <langchain.text_splitter.RecursiveCharacterTextSplitter object># classmethod from_llm(llm: langchain.schema.BaseLanguageModel, prompt: Optional[langchain.prompts.base.BasePromptTemplate] = None, **kwargs: Any) → langchain.chains.qa_generation.base.QAGenerationChain[source]# property input_keys: List[str]# Input keys this chain expects. property output_keys: List[str]# Output keys this chain expects. pydantic model langchain.chains.QAWithSourcesChain[source]# Question answering with sources over documents. Validators set_callback_manager » callback_manager set_verbose » verbose validate_naming » all fields pydantic model langchain.chains.RetrievalQA[source]# Chain for question-answering against an index. Example from langchain.llms import OpenAI from langchain.chains import RetrievalQA from langchain.faiss import FAISS vectordb = FAISS(...) retrievalQA = RetrievalQA.from_llm(llm=OpenAI(), retriever=vectordb) Validators set_callback_manager » callback_manager set_verbose » verbose field retriever: BaseRetriever [Required]#
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-31
set_verbose » verbose field retriever: BaseRetriever [Required]# pydantic model langchain.chains.RetrievalQAWithSourcesChain[source]# Question-answering with sources over an index. Validators set_callback_manager » callback_manager set_verbose » verbose validate_naming » all fields field max_tokens_limit: int = 3375# Restrict the docs to return from store based on tokens, enforced only for StuffDocumentChain and if reduce_k_below_max_tokens is to true field reduce_k_below_max_tokens: bool = False# Reduce the number of results to return from store based on tokens limit field retriever: langchain.schema.BaseRetriever [Required]# Index to connect to. pydantic model langchain.chains.SQLDatabaseChain[source]# Chain for interacting with SQL Database. Example from langchain import SQLDatabaseChain, OpenAI, SQLDatabase db = SQLDatabase(...) db_chain = SQLDatabaseChain(llm=OpenAI(), database=db) Validators set_callback_manager » callback_manager set_verbose » verbose field database: SQLDatabase [Required]# SQL Database to connect to. field llm: BaseLanguageModel [Required]# LLM wrapper to use.
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-32
field llm: BaseLanguageModel [Required]# LLM wrapper to use. field prompt: BasePromptTemplate = PromptTemplate(input_variables=['input', 'table_info', 'dialect', 'top_k'], output_parser=None, partial_variables={}, template='Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.\n\nNever query for all the columns from a specific table, only ask for a the few relevant columns given the question.\n\nPay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n\nUse the following format:\n\nQuestion: "Question here"\nSQLQuery: "SQL Query to run"\nSQLResult: "Result of the SQLQuery"\nAnswer: "Final answer here"\n\nOnly use the tables listed below.\n\n{table_info}\n\nQuestion: {input}', template_format='f-string', validate_template=True)# Prompt to use to translate natural language to SQL. field return_direct: bool = False# Whether or not to return the result of querying the SQL table directly. field return_intermediate_steps: bool = False# Whether or not to return the intermediate steps along with the final answer. field top_k: int = 5# Number of results to return from the query pydantic model langchain.chains.SQLDatabaseSequentialChain[source]# Chain for querying SQL database that is a sequential chain. The chain is as follows:
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-33
Chain for querying a SQL database via a sequential chain. The chain is as follows: 1. Based on the query, determine which tables to use. 2. Based on those tables, call the normal SQL database chain. This is useful in cases where the number of tables in the database is large. Validators set_callback_manager » callback_manager set_verbose » verbose field decider_chain: LLMChain [Required]# field return_intermediate_steps: bool = False# field sql_chain: SQLDatabaseChain [Required]#
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-34
classmethod from_llm(llm: langchain.schema.BaseLanguageModel, database: langchain.sql_database.SQLDatabase, query_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['input', 'table_info', 'dialect', 'top_k'], output_parser=None, partial_variables={}, template='Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.\n\nNever query for all the columns from a specific table, only ask for a the few relevant columns given the question.\n\nPay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n\nUse the following format:\n\nQuestion: "Question here"\nSQLQuery: "SQL Query to run"\nSQLResult: "Result of the
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-35
"SQL Query to run"\nSQLResult: "Result of the SQLQuery"\nAnswer: "Final answer here"\n\nOnly use the tables listed below.\n\n{table_info}\n\nQuestion: {input}', template_format='f-string', validate_template=True), decider_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['query', 'table_names'], output_parser=CommaSeparatedListOutputParser(), partial_variables={}, template='Given the below input question and list of potential tables, output a comma separated list of the table names that may be necessary to answer this question.\n\nQuestion: {query}\n\nTable Names: {table_names}\n\nRelevant Table Names:', template_format='f-string', validate_template=True), **kwargs: Any) → langchain.chains.sql_database.base.SQLDatabaseSequentialChain[source]#
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-36
Load the necessary chains. pydantic model langchain.chains.SequentialChain[source]# Chain where the outputs of one chain feed directly into the next. Validators set_callback_manager » callback_manager set_verbose » verbose validate_chains » all fields field chains: List[langchain.chains.base.Chain] [Required]# field input_variables: List[str] [Required]# field return_all: bool = False# pydantic model langchain.chains.SimpleSequentialChain[source]# Simple chain where the outputs of one step feed directly into the next. Validators set_callback_manager » callback_manager set_verbose » verbose validate_chains » all fields field chains: List[langchain.chains.base.Chain] [Required]# field strip_outputs: bool = False# pydantic model langchain.chains.TransformChain[source]# Chain that transforms the chain output. Example from langchain import TransformChain transform_chain = TransformChain(input_variables=["text"], output_variables=["entities"], transform=func) Validators set_callback_manager » callback_manager set_verbose » verbose field input_variables: List[str] [Required]# field output_variables: List[str] [Required]# field transform: Callable[[Dict[str, str]], Dict[str, str]] [Required]# pydantic model langchain.chains.VectorDBQA[source]# Chain for question-answering against a vector database. Validators raise_deprecation » all fields set_callback_manager » callback_manager set_verbose » verbose validate_search_type » all fields field k: int = 4# Number of documents to query for. field search_kwargs: Dict[str, Any] [Optional]# Extra search args. field search_type: str = 'similarity'# Search type to use over the vectorstore. Either 'similarity' or 'mmr'.
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
f7d0e8cf0e56-37
Search type to use over the vectorstore. Either 'similarity' or 'mmr'. field vectorstore: VectorStore [Required]# Vector Database to connect to. pydantic model langchain.chains.VectorDBQAWithSourcesChain[source]# Question-answering with sources over a vector database. Validators raise_deprecation » all fields set_callback_manager » callback_manager set_verbose » verbose validate_naming » all fields field k: int = 4# Number of results to return from the store. field max_tokens_limit: int = 3375# Restrict the docs to return from store based on tokens, enforced only for StuffDocumentChain and if reduce_k_below_max_tokens is set to true field reduce_k_below_max_tokens: bool = False# Reduce the number of results to return from store based on the tokens limit field search_kwargs: Dict[str, Any] [Optional]# Extra search args. field vectorstore: langchain.vectorstores.base.VectorStore [Required]# Vector Database to connect to. langchain.chains.load_chain(path: Union[str, pathlib.Path], **kwargs: Any) → langchain.chains.base.Chain[source]# Unified method for loading a chain from LangChainHub or local fs.
{ "url": "https://python.langchain.com/en/latest/reference/modules/chains.html" }
c6ffaba22ca5-0
PromptTemplates# Prompt template classes. pydantic model langchain.prompts.BaseChatPromptTemplate[source]# format(**kwargs: Any) → str[source]# Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") abstract format_messages(**kwargs: Any) → List[langchain.schema.BaseMessage][source]# Format kwargs into a list of messages. format_prompt(**kwargs: Any) → langchain.schema.PromptValue[source]# Create Chat Messages. pydantic model langchain.prompts.BasePromptTemplate[source]# Base class for all prompt templates, returning a prompt. field input_variables: List[str] [Required]# A list of the names of the variables the prompt template expects. field output_parser: Optional[langchain.schema.BaseOutputParser] = None# How to parse the output of calling an LLM on this formatted prompt. dict(**kwargs: Any) → Dict[source]# Return dictionary representation of prompt. abstract format(**kwargs: Any) → str[source]# Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") abstract format_prompt(**kwargs: Any) → langchain.schema.PromptValue[source]# Create Chat Messages. partial(**kwargs: Union[str, Callable[[], str]]) → langchain.prompts.base.BasePromptTemplate[source]# Return a partial of the prompt template. save(file_path: Union[pathlib.Path, str]) → None[source]# Save the prompt. Parameters file_path – Path to directory to save prompt to. Example: .. code-block:: python
{ "url": "https://python.langchain.com/en/latest/reference/modules/prompt.html" }
c6ffaba22ca5-1
file_path – Path to directory to save prompt to. Example: .. code-block:: python prompt.save(file_path=”path/prompt.yaml”) pydantic model langchain.prompts.ChatPromptTemplate[source]# format(**kwargs: Any) → str[source]# Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") format_messages(**kwargs: Any) → List[langchain.schema.BaseMessage][source]# Format kwargs into a list of messages. partial(**kwargs: Union[str, Callable[[], str]]) → langchain.prompts.base.BasePromptTemplate[source]# Return a partial of the prompt template. save(file_path: Union[pathlib.Path, str]) → None[source]# Save the prompt. Parameters file_path – Path to directory to save prompt to. Example: .. code-block:: python prompt.save(file_path=”path/prompt.yaml”) pydantic model langchain.prompts.FewShotPromptTemplate[source]# Prompt template that contains few shot examples. field example_prompt: langchain.prompts.prompt.PromptTemplate [Required]# PromptTemplate used to format an individual example. field example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None# ExampleSelector to choose the examples to format into the prompt. Either this or examples should be provided. field example_separator: str = '\n\n'# String separator used to join the prefix, the examples, and suffix. field examples: Optional[List[dict]] = None# Examples to format into the prompt. Either this or example_selector should be provided. field input_variables: List[str] [Required]# A list of the names of the variables the prompt template expects.
{ "url": "https://python.langchain.com/en/latest/reference/modules/prompt.html" }
c6ffaba22ca5-2
A list of the names of the variables the prompt template expects. field prefix: str = ''# A prompt template string to put before the examples. field suffix: str [Required]# A prompt template string to put after the examples. field template_format: str = 'f-string'# The format of the prompt template. Options are: ‘f-string’, ‘jinja2’. field validate_template: bool = True# Whether or not to try validating the template. dict(**kwargs: Any) → Dict[source]# Return a dictionary of the prompt. format(**kwargs: Any) → str[source]# Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") pydantic model langchain.prompts.FewShotPromptWithTemplates[source]# Prompt template that contains few shot examples. field example_prompt: langchain.prompts.prompt.PromptTemplate [Required]# PromptTemplate used to format an individual example. field example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None# ExampleSelector to choose the examples to format into the prompt. Either this or examples should be provided. field example_separator: str = '\n\n'# String separator used to join the prefix, the examples, and suffix. field examples: Optional[List[dict]] = None# Examples to format into the prompt. Either this or example_selector should be provided. field input_variables: List[str] [Required]# A list of the names of the variables the prompt template expects. field prefix: Optional[langchain.prompts.base.StringPromptTemplate] = None# A PromptTemplate to put before the examples. field suffix: langchain.prompts.base.StringPromptTemplate [Required]#
{ "url": "https://python.langchain.com/en/latest/reference/modules/prompt.html" }
c6ffaba22ca5-3
field suffix: langchain.prompts.base.StringPromptTemplate [Required]# A PromptTemplate to put after the examples. field template_format: str = 'f-string'# The format of the prompt template. Options are: ‘f-string’, ‘jinja2’. field validate_template: bool = True# Whether or not to try validating the template. dict(**kwargs: Any) → Dict[source]# Return a dictionary of the prompt. format(**kwargs: Any) → str[source]# Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") pydantic model langchain.prompts.MessagesPlaceholder[source]# Prompt template that assumes the variable is already a list of messages. format_messages(**kwargs: Any) → List[langchain.schema.BaseMessage][source]# Format the variable into a list of BaseMessages. property input_variables: List[str]# Input variables for this prompt template. langchain.prompts.Prompt# alias of langchain.prompts.prompt.PromptTemplate pydantic model langchain.prompts.PromptTemplate[source]# Schema to represent a prompt for an LLM. Example from langchain import PromptTemplate prompt = PromptTemplate(input_variables=["foo"], template="Say {foo}") field input_variables: List[str] [Required]# A list of the names of the variables the prompt template expects. field template: str [Required]# The prompt template. field template_format: str = 'f-string'# The format of the prompt template. Options are: ‘f-string’, ‘jinja2’. field validate_template: bool = True# Whether or not to try validating the template. format(**kwargs: Any) → str[source]# Format the prompt with the inputs. Parameters
{ "url": "https://python.langchain.com/en/latest/reference/modules/prompt.html" }
c6ffaba22ca5-4
Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") classmethod from_examples(examples: List[str], suffix: str, input_variables: List[str], example_separator: str = '\n\n', prefix: str = '', **kwargs: Any) → langchain.prompts.prompt.PromptTemplate[source]# Take examples in list format with prefix and suffix to create a prompt. Intended to be used as a way to dynamically create a prompt from examples. Parameters examples – List of examples to use in the prompt. suffix – String to go after the list of examples. Should generally set up the user’s input. input_variables – A list of variable names the final prompt template will expect. example_separator – The separator to use in between examples. Defaults to two newline characters. prefix – String that should go before any examples. Generally includes instructions. Defaults to an empty string. Returns The final prompt generated. classmethod from_file(template_file: Union[str, pathlib.Path], input_variables: List[str], **kwargs: Any) → langchain.prompts.prompt.PromptTemplate[source]# Load a prompt from a file. Parameters template_file – The path to the file containing the prompt template. input_variables – A list of variable names the final prompt template will expect. Returns The prompt loaded from the file. classmethod from_template(template: str, **kwargs: Any) → langchain.prompts.prompt.PromptTemplate[source]# Load a prompt template from a template string. pydantic model langchain.prompts.StringPromptTemplate[source]# String prompt should expose the format method, returning a prompt. format_prompt(**kwargs: Any) → langchain.schema.PromptValue[source]# Create Chat Messages.
{ "url": "https://python.langchain.com/en/latest/reference/modules/prompt.html" }
c6ffaba22ca5-5
Create Chat Messages. langchain.prompts.load_prompt(path: Union[str, pathlib.Path]) → langchain.prompts.base.BasePromptTemplate[source]# Unified method for loading a prompt from LangChainHub or local fs.
{ "url": "https://python.langchain.com/en/latest/reference/modules/prompt.html" }