Code Understanding: If you want to understand how to use LLMs to query source code from GitHub, you should read this page.
Interacting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.
Extraction: Extract structured information from text.
Summarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.
Evaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
Reference Docs#
All of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.
Reference Documentation
LangChain Ecosystem#
Guides for how other companies/products can be used with LangChain
LangChain Ecosystem
Additional Resources#
Additional collection of resources we think may be useful as you develop your application!
LangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.
Glossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!
Gallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.
Deployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.
Tracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.
Model Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.
Discord: Join us on our Discord to discuss all things LangChain!
YouTube: A collection of the LangChain tutorials and videos.
Production Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.
LangChain Ecosystem#
Guides for how other companies/products can be used with LangChain
Groups#
LangChain provides integration with many LLMs and systems:
LLM Providers
Chat Model Providers
Text Embedding Model Providers
Document Loader Integrations
Text Splitter Integrations
Vectorstore Providers
Retriever Providers
Tool Providers
Toolkit Integrations
Companies / Products#
AI21 Labs
Aim
AnalyticDB
Apify
AtlasDB
Banana
CerebriumAI
Chroma
ClearML Integration
Cohere
Comet
Databerry
DeepInfra
Deep Lake
ForefrontAI
Google Search Wrapper
Google Serper Wrapper
GooseAI
GPT4All
Graphsignal
Hazy Research
Helicone
Hugging Face
Jina
Llama.cpp
Milvus
Modal
MyScale
NLPCloud
OpenAI
OpenSearch
Petals
PGVector
Pinecone
Prediction Guard
PromptLayer
Qdrant
Replicate
Runhouse
RWKV-4
SearxNG Search API
SerpAPI
StochasticAI
Unstructured
Weights & Biases
Weaviate
Wolfram Alpha Wrapper
Writer
Yeager.ai
Zilliz
Tracing#
By enabling tracing in your LangChain runs, you’ll be able to more effectively visualize, step through, and debug your chains and agents.
First, you should install tracing and set up your environment properly.
You can use either a locally hosted version of this (uses Docker) or a cloud hosted version (in closed alpha).
If you’re interested in using the hosted platform, please fill out the form here.
Locally Hosted Setup
Cloud Hosted Setup
Tracing Walkthrough#
When you first access the UI, you should see a page with your tracing sessions.
An initial one “default” should already be created for you.
A session is just a way to group traces together.
If you click on a session, it will take you to a page with no recorded traces that says “No Runs.”
You can create a new session with the new session form.
If we click on the default session, we can see that to start we have no traces stored.
If we now start running chains and agents with tracing enabled, we will see data show up here.
To do so, we can run this notebook as an example.
After running it, we will see an initial trace show up.
From here we can explore the trace at a high level by clicking on the arrow to show nested runs.
We can keep on clicking further and further down to explore deeper and deeper.
We can also click on the “Explore” button of the top level run to dive even deeper.
Here, we can see the inputs and outputs in full, as well as all the nested traces.
We can keep on exploring each of these nested traces in more detail.
For example, here is the lowest level trace with the exact inputs/outputs to the LLM.
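As a rough, minimal sketch of what "running a chain with tracing enabled" can look like (this is not the linked notebook; it assumes the tracing backend from the setup guides above is running and that an OpenAI API key is already configured in your environment):
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"  # enable tracing for this run

from langchain import LLMChain, OpenAI, PromptTemplate

prompt = PromptTemplate(input_variables=["question"], template="Answer the following question: {question}")
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
chain.run(question="What color is a flamingo?")  # this run should show up as a trace in the "default" session
Each run made this way should then appear in the session you have selected in the UI.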
Changing Sessions#
To initially record traces to a session other than "default", you can set the LANGCHAIN_SESSION environment variable to the name of the session you want to record to:
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
os.environ["LANGCHAIN_SESSION"] = "my_session" # Make sure this session actually exists. You can create a new session in the UI.
To switch sessions mid-script or mid-notebook, do NOT set the LANGCHAIN_SESSION environment variable. Instead: langchain.set_tracing_callback_manager(session_name="my_session")
API References#
All of LangChain’s reference documentation, in one place.
Full documentation on all methods, classes, and APIs in LangChain.
Models
Prompts
Indexes
Memory
Chains
Agents
Utilities
Experimental Modules
Deployments#
So you’ve made a really cool chain - now what? How do you deploy it and make it easily sharable with the world?
This section covers several options for that.
Note that these are meant as quick deployment options for prototypes and demos, and not for production systems.
If you are looking for help with deployment of a production system, please contact us directly.
What follows is a list of template GitHub repositories that are intended to be very easy to fork and modify to use your chain.
This is far from an exhaustive list of options, and we are EXTREMELY open to contributions here.
Streamlit#
This repo serves as a template for how to deploy a LangChain app with Streamlit.
It implements a chatbot interface.
It also contains instructions for how to deploy this app on the Streamlit platform.
Gradio (on Hugging Face)#
This repo serves as a template for how to deploy a LangChain app with Gradio.
It implements a chatbot interface, with a “Bring-Your-Own-Token” approach (nice for not racking up big bills).
It also contains instructions for how to deploy this app on the Hugging Face platform.
This is heavily influenced by James Weaver’s excellent examples.
Beam#
This repo serves as a template for how to deploy a LangChain app with Beam.
It implements a Question Answering app and contains instructions for deploying the app as a serverless REST API.
Vercel#
A minimal example on how to run LangChain on Vercel using Flask.
Digitalocean App Platform#
A minimal example on how to deploy LangChain to DigitalOcean App Platform.
Google Cloud Run#
A minimal example on how to deploy LangChain to Google Cloud Run.
SteamShip#
This repository contains LangChain adapters for Steamship, enabling LangChain developers to rapidly deploy their apps on Steamship.
This includes: production-ready endpoints, horizontal scaling across dependencies, persistent storage of app state, multi-tenancy support, etc.
Langchain-serve#
This repository allows users to serve local chains and agents as RESTful, gRPC, or Websocket APIs thanks to Jina. Deploy your chains & agents with ease and enjoy independent scaling, serverless and autoscaling APIs, as well as a Streamlit playground on Jina AI Cloud.
BentoML#
This repository provides an example of how to deploy a LangChain application with BentoML. BentoML is a framework that enables the containerization of machine learning applications as standard OCI images. BentoML also allows for the automatic generation of OpenAPI and gRPC endpoints. With BentoML, you can integrate models from all popular ML frameworks and deploy them as microservices running on the most optimal hardware and scaling independently.
Databutton#
These templates serve as examples of how to build, deploy, and share LangChain applications using Databutton. You can create user interfaces with Streamlit, automate tasks by scheduling Python code, and store files and data in the built-in store. Examples include Chatbot interface with conversational memory, Personal search engine, and a starter template for LangChain apps. Deploying and sharing is one click.
Model Comparison#
Constructing your language model application will likely involve choosing between many different prompts, models, and even chains. When doing so, you will want to compare these different options on different inputs in an easy, flexible, and intuitive way.
LangChain provides the concept of a ModelLaboratory to test out and try different models.
from langchain import LLMChain, OpenAI, Cohere, HuggingFaceHub, PromptTemplate
from langchain.model_laboratory import ModelLaboratory
llms = [
OpenAI(temperature=0),
Cohere(model="command-xlarge-20221108", max_tokens=20, temperature=0),
HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature":1})
]
model_lab = ModelLaboratory.from_llms(llms)
model_lab.compare("What color is a flamingo?")
Input:
What color is a flamingo?
OpenAI
Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
Flamingos are pink.
Cohere
Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0}
Pink
HuggingFaceHub
Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1}
pink
prompt = PromptTemplate(template="What is the capital of {state}?", input_variables=["state"])
model_lab_with_prompt = ModelLaboratory.from_llms(llms, prompt=prompt)
model_lab_with_prompt.compare("New York")
Input:
New York
OpenAI
Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
The capital of New York is Albany.
Cohere
Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0}
The capital of New York is Albany.
HuggingFaceHub
Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1}
st john s
from langchain import SelfAskWithSearchChain, SerpAPIWrapper
open_ai_llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
self_ask_with_search_openai = SelfAskWithSearchChain(llm=open_ai_llm, search_chain=search, verbose=True)
cohere_llm = Cohere(temperature=0, model="command-xlarge-20221108")
search = SerpAPIWrapper()
self_ask_with_search_cohere = SelfAskWithSearchChain(llm=cohere_llm, search_chain=search, verbose=True)
chains = [self_ask_with_search_openai, self_ask_with_search_cohere]
names = [str(open_ai_llm), str(cohere_llm)]
model_lab = ModelLaboratory(chains, names=names)
model_lab.compare("What is the hometown of the reigning men's U.S. Open champion?")
Input:
What is the hometown of the reigning men's U.S. Open champion?
OpenAI
Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
> Entering new chain...
What is the hometown of the reigning men's U.S. Open champion?
Are follow up questions needed here: Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Carlos Alcaraz.
Follow up: Where is Carlos Alcaraz from?
Intermediate answer: El Palmar, Spain.
So the final answer is: El Palmar, Spain
> Finished chain.
So the final answer is: El Palmar, Spain
Cohere
Params: {'model': 'command-xlarge-20221108', 'max_tokens': 256, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0}
> Entering new chain...
What is the hometown of the reigning men's U.S. Open champion?
Are follow up questions needed here: Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Carlos Alcaraz.
So the final answer is:
Carlos Alcaraz
> Finished chain.
So the final answer is:
Carlos Alcaraz
Glossary#
This is a collection of terminology commonly used when developing LLM applications.
It contains references to external papers or sources where the concept was first introduced,
as well as to places in LangChain where the concept is used.
Chain of Thought Prompting#
A prompting technique used to encourage the model to generate a series of intermediate reasoning steps.
A less formal way to induce this behavior is to include “Let’s think step-by-step” in the prompt.
Resources:
Chain-of-Thought Paper
Step-by-Step Paper
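As a purely illustrative sketch (the template wording below is made up for this glossary entry, not a LangChain built-in), the informal version of this technique can be as simple as appending the cue to a prompt template:
from langchain import PromptTemplate

cot_prompt = PromptTemplate(
    input_variables=["question"],
    template="{question}\n\nLet's think step-by-step.",
)
print(cot_prompt.format(question="If I have 3 apples and buy 2 more, how many do I have?"))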
Action Plan Generation#
A prompting technique that uses a language model to generate actions to take.
The results of these actions can then be fed back into the language model to generate a subsequent action.
Resources:
WebGPT Paper
SayCan Paper
ReAct Prompting#
A prompting technique that combines Chain-of-Thought prompting with action plan generation.
This induces the model to think about what action to take, then take it.
Resources:
Paper
LangChain Example
Self-ask#
A prompting method that builds on top of chain-of-thought prompting.
In this method, the model explicitly asks itself follow-up questions, which are then answered by an external search engine.
Resources:
Paper
LangChain Example
Prompt Chaining#
Combining multiple LLM calls, with the output of one step being the input to the next.
Resources:
PromptChainer Paper
Language Model Cascades
ICE Primer Book
Socratic Models
Memetic Proxy#
Encouraging the LLM to respond in a certain way by framing the discussion in a context that the model knows of and that will result in that type of response. For example, as a conversation between a student and a teacher.
Resources:
Paper
Self Consistency#
A decoding strategy that samples a diverse set of reasoning paths and then selects the most consistent answer.
It is most effective when combined with Chain-of-Thought prompting.
Resources:
Paper
Inception#
Also called “First Person Instruction”.
Encouraging the model to think a certain way by including the start of the model’s response in the prompt.
Resources:
Example
MemPrompt#
MemPrompt maintains a memory of errors and user feedback, and uses them to prevent repetition of mistakes.
Resources:
Paper
LangChain Gallery#
Lots of people have built some pretty awesome stuff with LangChain.
This is a collection of our favorites.
If you see any other demos that you think we should highlight, be sure to let us know!
Open Source#
HowDoI.ai
This is an experiment in building a large-language-model-backed chatbot. It can hold a conversation, remember previous comments/questions,
and answer all types of queries (history, web search, movie data, weather, news, and more).
YouTube Transcription QA with Sources
An end-to-end example of doing question answering on YouTube transcripts, returning the timestamps as sources to legitimize the answer.
QA Slack Bot
This application is a Slack Bot that uses LangChain and OpenAI’s GPT-3 language model to provide domain-specific answers. You provide the documents.
ThoughtSource
A central, open resource and community around data and tools related to chain-of-thought reasoning in large language models.
LLM Strategy
This Python package adds a decorator llm_strategy that connects to an LLM (such as OpenAI’s GPT-3) and uses the LLM to “implement” abstract methods in interface classes. It does this by forwarding requests to the LLM and converting the responses back to Python data using Python’s @dataclasses.
Zero-Shot Corporate Lobbyist
A notebook showing how to use GPT to help with the work of a corporate lobbyist.
Dagster Documentation ChatBot
A jupyter notebook demonstrating how you could create a semantic search engine on documents in one of your Google Folders
Google Folder Semantic Search
Build a GitHub support bot with GPT3, LangChain, and Python.
Talk With Wind
Record sounds of anything (birds, wind, fire, train station) and chat with it.
ChatGPT LangChain
This simple application demonstrates a conversational agent implemented with OpenAI GPT-3.5 and LangChain. When necessary, it leverages tools for complex math, searching the internet, and accessing news and weather.
GPT Math Techniques
A Hugging Face spaces project showing off the benefits of using PAL for math problems.
GPT Political Compass
Measure the political compass of GPT.
Notion Database Question-Answering Bot
Open source GitHub project shows how to use LangChain to create a chatbot that can answer questions about an arbitrary Notion database.
LlamaIndex
LlamaIndex (formerly GPT Index) is a project consisting of a set of data structures that are created using GPT-3 and can be traversed using GPT-3 in order to answer queries.
Grover’s Algorithm
Leveraging Qiskit, OpenAI and LangChain to demonstrate Grover’s algorithm
QNimGPT
A chat UI to play Nim, where a player can select an opponent, either a quantum computer or an AI
ReAct TextWorld
Leveraging the ReActTextWorldAgent to play TextWorld with an LLM!
Fact Checker
This repo is a simple demonstration of using LangChain to do fact-checking with prompt chaining.
DocsGPT
Answer questions about the documentation of any project
Misc. Colab Notebooks#
Wolfram Alpha in Conversational Agent
Give ChatGPT a WolframAlpha neural implant
Tool Updates in Agents
Agent improvements (6th Jan 2023)
Conversational Agent with Tools (Langchain AGI)
Langchain AGI (23rd Dec 2022)
Proprietary#
Daimon
A chat-based AI personal assistant with long-term memory about you.
Summarize any file with AI
Summarize not only long docs, interview audio or video files quickly, but also entire websites and YouTube videos. Share or download your generated summaries to collaborate with others, or revisit them at any time! Bonus: @anysummary on Twitter will also summarize any thread it is tagged in.
AI Assisted SQL Query Generator
An app to write SQL using natural language, and execute against real DB.
Clerkie
Stack Tracing QA Bot to help debug complex stack tracing (especially the ones that go multi-function/file deep).
Sales Email Writer
By Raza Habib, this demo utilizes LangChain + SerpAPI + HumanLoop to write sales emails. Give it a company name and a person, this application will use Google Search (via SerpAPI) to get more information on the company and the person, and then write them a sales message.
Question-Answering on a Web Browser
By Zahid Khawaja, this demo utilizes question answering to answer questions about a given website. A followup added this for YouTube videos, and then another followup added it for Wikipedia.
Mynd
A journaling app for self-care that uses AI to uncover insights and patterns over time.
Indexes#
Indexes refer to ways to structure documents so that LLMs can best interact with them.
LangChain has a number of modules that help you load, structure, store, and retrieve documents.
Docstore
Text Splitter
Document Loaders
Vector Stores
Retrievers
Document Compressors
Document Transformers
Agents#
Reference guide for Agents and associated abstractions.
Agents
Tools
Agent Toolkits
Integrations#
Besides installing this Python package, you will also need to install additional packages and set environment variables depending on which chains you want to use.
Note: the reason these packages are not included in the dependencies by default is that as we imagine scaling this package, we do not want to force dependencies that are not needed.
The following use cases require specific installs and api keys:
OpenAI:
Install requirements with pip install openai
Get an OpenAI api key and either set it as an environment variable (OPENAI_API_KEY) or pass it to the LLM constructor as openai_api_key.
Cohere:
Install requirements with pip install cohere
Get a Cohere api key and either set it as an environment variable (COHERE_API_KEY) or pass it to the LLM constructor as cohere_api_key.
GooseAI:
Install requirements with pip install openai
Get a GooseAI api key and either set it as an environment variable (GOOSEAI_API_KEY) or pass it to the LLM constructor as gooseai_api_key.
Hugging Face Hub
Install requirements with pip install huggingface_hub
Get a Hugging Face Hub api token and either set it as an environment variable (HUGGINGFACEHUB_API_TOKEN) or pass it to the LLM constructor as huggingfacehub_api_token.
Petals:
Install requirements with pip install petals
Get a Hugging Face api key and either set it as an environment variable (HUGGINGFACE_API_KEY) or pass it to the LLM constructor as huggingface_api_key.
CerebriumAI:
Install requirements with pip install cerebrium
Get a Cerebrium api key and either set it as an environment variable (CEREBRIUMAI_API_KEY) or pass it to the LLM constructor as cerebriumai_api_key.
PromptLayer:
Install requirements with pip install promptlayer (be sure to be on version 0.1.62 or higher)
Get an API key from promptlayer.com and set it using promptlayer.api_key=<API KEY>
SerpAPI:
Install requirements with pip install google-search-results
Get a SerpAPI api key and either set it as an environment variable (SERPAPI_API_KEY) or pass it to the LLM constructor as serpapi_api_key.
GoogleSearchAPI:
Install requirements with pip install google-api-python-client
Get a Google api key and either set it as an environment variable (GOOGLE_API_KEY) or pass it to the LLM constructor as google_api_key. You will also need to set the GOOGLE_CSE_ID environment variable to your custom search engine id. You can pass it to the LLM constructor as google_cse_id as well.
WolframAlphaAPI:
Install requirements with pip install wolframalpha
Get a Wolfram Alpha api key and either set it as an environment variable (WOLFRAM_ALPHA_APPID) or pass it to the LLM constructor as wolfram_alpha_appid.
NatBot:
Install requirements with pip install playwright
Wikipedia:
Install requirements with pip install wikipedia
Elasticsearch:
Install requirements with pip install elasticsearch
Set up the Elasticsearch backend. If you want to run it locally, this is a good guide.
FAISS:
Install requirements with pip install faiss for Python 3.7 and pip install faiss-cpu for Python 3.10+.
MyScale
Install requirements with pip install clickhouse-connect. For documentation, please refer to this document.
Manifest:
Install requirements with pip install manifest-ml (Note: this is only available in Python 3.8+ currently).
OpenSearch:
Install requirements with pip install opensearch-py
If you want to set up OpenSearch locally, see here
DeepLake:
Install requirements with pip install deeplake
LlamaCpp:
Install requirements with pip install llama-cpp-python
Download model and convert following llama.cpp instructions
Milvus:
Install requirements with pip install pymilvus
In order to set up a local cluster, take a look here.
Zilliz:
Install requirements with pip install pymilvus
To get up and running, take a look here.
If you are using the NLTKTextSplitter or the SpacyTextSplitter, you will also need to install the appropriate models. For example, if you want to use the SpacyTextSplitter, you will need to install the en_core_web_sm model with python -m spacy download en_core_web_sm. Similarly, if you want to use the NLTKTextSplitter, you will need to install the punkt model with python -m nltk.downloader punkt.
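As a concrete illustration of the pattern described in the list above (install the provider’s package, then supply the key either via an environment variable or as a constructor argument), here is a minimal sketch using OpenAI; the key values are placeholders:
import os
from langchain.llms import OpenAI

# Option 1: set the environment variable
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder
llm = OpenAI()

# Option 2: pass the key directly to the constructor
llm = OpenAI(openai_api_key="sk-...")  # placeholder
The same pattern applies to the other providers listed above, using the environment variable and constructor argument names given in each bullet.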
Installation#
Official Releases#
LangChain is available on PyPI, so it is easily installable with:
pip install langchain
That will install the bare minimum requirements of LangChain.
A lot of the value of LangChain comes when integrating it with various model providers, datastores, etc.
By default, the dependencies needed to do that are NOT installed.
However, there are two other ways to install LangChain that do bring in those dependencies.
To install modules needed for the common LLM providers, run:
pip install langchain[llms]
To install all modules needed for all integrations, run:
pip install langchain[all]
Note that if you are using zsh, you’ll need to quote square brackets when passing them as an argument to a command, for example:
pip install 'langchain[all]'
Installing from source#
If you want to install from source, you can do so by cloning the repo and running:
pip install -e .
Models#
LangChain provides interfaces and integrations for a number of different types of models.
LLMs
Chat Models
Embeddings
Prompts#
The reference guides here all relate to objects for working with Prompts.
PromptTemplates
Example Selector
Output Parsers
Memory#
pydantic model langchain.memory.ChatMessageHistory[source]#
field messages: List[langchain.schema.BaseMessage] = []#
add_ai_message(message: str) → None[source]#
Add an AI message to the store
add_user_message(message: str) → None[source]#
Add a user message to the store
clear() → None[source]#
Remove all messages from the store
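A minimal usage sketch of this class, based only on the methods documented above (the message strings are made-up examples):
from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("Hello! How can I help you?")
print(history.messages)  # list of BaseMessage objects
history.clear()          # remove all messages from the store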
pydantic model langchain.memory.CombinedMemory[source]#
Class for combining multiple memories’ data together.
field memories: List[langchain.schema.BaseMemory] [Required]#
For tracking all the memories that should be accessed.
clear() → None[source]#
Clear context from this session for every memory.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]#
Load all vars from sub-memories.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save context from this session for every memory.
property memory_variables: List[str]#
All the memory variables that this instance provides.
pydantic model langchain.memory.ConversationBufferMemory[source]#
Buffer for storing conversation memory.
field ai_prefix: str = 'AI'#
field human_prefix: str = 'Human'#
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]#
Return history buffer.
property buffer: Any#
String buffer of memory.
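A minimal usage sketch (the example strings are made up; save_context comes from the shared chat-memory base class rather than being listed above):
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "hi"}, {"output": "hello there!"})
print(memory.load_memory_variables({}))
# e.g. {'history': 'Human: hi\nAI: hello there!'}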
pydantic model langchain.memory.ConversationBufferWindowMemory[source]#
Buffer for storing conversation memory.
field ai_prefix: str = 'AI'#
field human_prefix: str = 'Human'#
field k: int = 5#
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]#
Return history buffer.
property buffer: List[langchain.schema.BaseMessage]#
String buffer of memory.
pydantic model langchain.memory.ConversationEntityMemory[source]#
Entity extractor & summarizer to memory.
field ai_prefix: str = 'AI'#
field chat_history_key: str = 'history'#
field entity_cache: List[str] = []#
field entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True)#
field entity_store: langchain.memory.entity.BaseEntityStore [Optional]#
field entity_summarization_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the "Entity" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.\n\nIf there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.\n\nFull conversation history (for context):\n{history}\n\nEntity to summarize:\n{entity}\n\nExisting summary of {entity}:\n{summary}\n\nLast line of conversation:\nHuman: {input}\nUpdated summary:', template_format='f-string', validate_template=True)#
field human_prefix: str = 'Human'#
field k: int = 3#
field llm: langchain.schema.BaseLanguageModel [Required]#
clear() → None[source]#
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]#
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save context from this conversation to buffer.
property buffer: List[langchain.schema.BaseMessage]#
pydantic model langchain.memory.ConversationKGMemory[source]#
Knowledge graph memory for storing conversation memory.
Integrates with external knowledge graph to store and retrieve
information about knowledge triples in the conversation.
field ai_prefix: str = 'AI'#
field entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True)#
field human_prefix: str = 'Human'#
field k: int = 2#
field kg: langchain.graphs.networkx_graph.NetworkxEntityGraph [Optional]#
field knowledge_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\n\nEXAMPLE\nConversation history:\nPerson #1: Did you hear aliens landed in Area 51?\nAI: No, I didn't hear that. What do you know about Area 51?\nPerson #1: It's a secret military base in Nevada.\nAI: What do you know about Nevada?\nLast line of conversation:\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: Hello.\nAI: Hi! How are you?\nPerson #1: I'm good. How are you?\nAI: I'm good too.\nLast line of conversation:\nPerson #1: I'm going to the store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: What do you know about Descartes?\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\nLast line of conversation:\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:", template_format='f-string', validate_template=True)#
field llm: langchain.schema.BaseLanguageModel [Required]#
field summary_message_cls: Type[langchain.schema.BaseMessage] = <class 'langchain.schema.SystemMessage'>#
Number of previous utterances to include in the context.
clear() → None[source]#
Clear memory contents.
get_current_entities(input_string: str) → List[str][source]#
get_knowledge_triplets(input_string: str) → List[langchain.graphs.networkx_graph.KnowledgeTriple][source]#
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]#
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save context from this conversation to buffer.
pydantic model langchain.memory.ConversationStringBufferMemory[source]#
Buffer for storing conversation memory.
field ai_prefix: str = 'AI'#
Prefix to use for AI generated responses.
field buffer: str = ''#
field human_prefix: str = 'Human'#
field input_key: Optional[str] = None#
field output_key: Optional[str] = None#
clear() → None[source]#
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]#
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save context from this conversation to buffer.
property memory_variables: List[str]#
Will always return list of memory variables.
:meta private:
pydantic model langchain.memory.ConversationSummaryBufferMemory[source]#
Buffer with summarizer for storing conversation memory.
field max_token_limit: int = 2000#
field memory_key: str = 'history'#
field moving_summary_buffer: str = ''#
clear() → None[source]#
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]#
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save context from this conversation to buffer.
property buffer: List[langchain.schema.BaseMessage]#
pydantic model langchain.memory.ConversationSummaryMemory[source]#
Conversation summarizer to memory.
field buffer: str = ''#
clear() → None[source]#
Clear memory contents.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]#
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save context from this conversation to buffer.
pydantic model langchain.memory.ConversationTokenBufferMemory[source]#
Buffer for storing conversation memory.
field ai_prefix: str = 'AI'#
field human_prefix: str = 'Human'#
field llm: langchain.schema.BaseLanguageModel [Required]#
field max_token_limit: int = 2000#
field memory_key: str = 'history'#
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any][source]#
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save context from this conversation to buffer. Pruned.
property buffer: List[langchain.schema.BaseMessage]#
String buffer of memory.
class langchain.memory.CosmosDBChatMessageHistory(cosmos_endpoint: str, cosmos_database: str, cosmos_container: str, credential: Any, session_id: str, user_id: str, ttl: Optional[int] = None)[source]#
Chat history backed by Azure CosmosDB.
add_ai_message(message: str) → None[source]#
Add an AI message to the memory.
add_user_message(message: str) → None[source]#
Add a user message to the memory.
clear() → None[source]#
Clear session memory from this memory and cosmos.
load_messages() → None[source]#
Retrieve the messages from Cosmos
messages: List[BaseMessage]#
prepare_cosmos() → None[source]#
Prepare the CosmosDB client.
Use this function or the context manager to make sure your database is ready.
upsert_messages(new_message: Optional[langchain.schema.BaseMessage] = None) → None[source]#
Update the cosmosdb item.
class langchain.memory.DynamoDBChatMessageHistory(table_name: str, session_id: str)[source]#
Chat message history that stores history in AWS DynamoDB.
This class expects that a DynamoDB table with name table_name
and a partition key of SessionId is present.
Parameters
table_name – name of the DynamoDB table
session_id – arbitrary key that is used to store the messages
of a single chat session.
add_ai_message(message: str) → None[source]#
Add an AI message to the store
add_user_message(message: str) → None[source]#
Add a user message to the store
append(message: langchain.schema.BaseMessage) → None[source]#
Append the message to the record in DynamoDB
clear() → None[source]#
Clear session memory from DynamoDB
property messages: List[langchain.schema.BaseMessage]#
Retrieve the messages from DynamoDB
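A hypothetical usage sketch, assuming a DynamoDB table named "SessionTable" (a placeholder name) with a SessionId partition key already exists and AWS credentials are configured:
from langchain.memory import DynamoDBChatMessageHistory

history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="user-123")
history.add_user_message("hi!")
history.add_ai_message("Hello! How can I help you?")
print(history.messages)  # messages retrieved from DynamoDB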
class langchain.memory.InMemoryEntityStore[source]#
Basic in-memory entity store.
clear() → None[source]#
Delete all entities from store.
delete(key: str) → None[source]#
Delete entity value from store.
exists(key: str) → bool[source]#
Check if entity exists in store.
get(key: str, default: Optional[str] = None) → Optional[str][source]#
Get entity value from store.
set(key: str, value: Optional[str]) → None[source]#
Set entity value in store.
store: Dict[str, Optional[str]] = {}#
class langchain.memory.PostgresChatMessageHistory(session_id: str, connection_string: str = 'postgresql://postgres:mypassword@localhost/chat_history', table_name: str = 'message_store')[source]#
add_ai_message(message: str) → None[source]#
Add an AI message to the store
add_user_message(message: str) → None[source]#
Add a user message to the store
append(message: langchain.schema.BaseMessage) → None[source]#
Append the message to the record in PostgreSQL
clear() → None[source]#
Clear session memory from PostgreSQL
property messages: List[langchain.schema.BaseMessage]#
Retrieve the messages from PostgreSQL
pydantic model langchain.memory.ReadOnlySharedMemory[source]#
A memory wrapper that is read-only and cannot be changed.
field memory: langchain.schema.BaseMemory [Required]#
clear() → None[source]#
Nothing to clear, got a memory like a vault.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]#
Load memory variables from memory.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Nothing should be saved or changed
property memory_variables: List[str]#
Return memory variables.
class langchain.memory.RedisChatMessageHistory(session_id: str, url: str = 'redis://localhost:6379/0', key_prefix: str = 'message_store:', ttl: Optional[int] = None)[source]#
add_ai_message(message: str) → None[source]#
Add an AI message to the store
add_user_message(message: str) → None[source]#
Add a user message to the store
append(message: langchain.schema.BaseMessage) → None[source]#
Append the message to the record in Redis
clear() → None[source]#
Clear session memory from Redis
property key: str#
Construct the record key to use
property messages: List[langchain.schema.BaseMessage]#
Retrieve the messages from Redis
class langchain.memory.RedisEntityStore(session_id: str = 'default', url: str = 'redis://localhost:6379/0', key_prefix: str = 'memory_store', ttl: Optional[int] = 86400, recall_ttl: Optional[int] = 259200, *args: Any, **kwargs: Any)[source]#
Redis-backed Entity store. Entities get a TTL of 1 day by default, and
that TTL is extended by 3 days every time the entity is read back.
clear() → None[source]#
Delete all entities from store.
delete(key: str) → None[source]#
Delete entity value from store.
exists(key: str) → bool[source]#
Check if entity exists in store.
property full_key_prefix: str#
get(key: str, default: Optional[str] = None) → Optional[str][source]#
Get entity value from store.
key_prefix: str = 'memory_store'#
recall_ttl: Optional[int] = 259200#
redis_client: Any#
session_id: str = 'default'#
set(key: str, value: Optional[str]) → None[source]#
Set entity value in store.
ttl: Optional[int] = 86400#
pydantic model langchain.memory.SimpleMemory[source]#
Simple memory for storing context or other bits of information that shouldn’t
ever change between prompts.
field memories: Dict[str, Any] = {}#
clear() → None[source]#
Nothing to clear, got a memory like a vault.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]#
Return key-value pairs given the text input to the chain.
If None, return all memories
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Nothing should be saved or changed, my memory is set in stone.
property memory_variables: List[str]#
Input keys this memory class will load dynamically.
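A small illustrative sketch of holding fixed context with SimpleMemory (the keys and values are made-up examples):
from langchain.memory import SimpleMemory

memory = SimpleMemory(memories={"project": "LangChain docs", "audience": "developers"})
print(memory.load_memory_variables({}))
# {'project': 'LangChain docs', 'audience': 'developers'}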
pydantic model langchain.memory.VectorStoreRetrieverMemory[source]#
Class for a VectorStore-backed memory object.
field input_key: Optional[str] = None#
Key name to index the inputs to load_memory_variables.
field memory_key: str = 'history'#
Key name to locate the memories in the result of load_memory_variables.
field retriever: langchain.vectorstores.base.VectorStoreRetriever [Required]#
VectorStoreRetriever object to connect to.
field return_docs: bool = False#
Whether or not to return the result of querying the database directly.
clear() → None[source]#
Nothing to clear.
load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Union[List[langchain.schema.Document], str]][source]#
Return history buffer.
save_context(inputs: Dict[str, Any], outputs: Dict[str, str]) → None[source]#
Save context from this conversation to buffer.
property memory_variables: List[str]#
The list of keys emitted from the load_memory_variables method.
SearxNG Search#
Utility for using the SearxNG meta search API.
SearxNG is a privacy-friendly free metasearch engine that aggregates results from
multiple search engines and databases and
supports the OpenSearch
specification.
More details on the installation instructions can be found here.
For the search API refer to https://docs.searxng.org/dev/search_api.html
Quick Start#
In order to use this utility you need to provide the searx host. This can be done
by passing the named parameter searx_host
or exporting the environment variable SEARX_HOST.
Note: this is the only required parameter.
Then create a searx search instance like this:
from langchain.utilities import SearxSearchWrapper
# when the host starts with `http` SSL is disabled and the connection
# is assumed to be on a private network
searx_host='http://self.hosted'
search = SearxSearchWrapper(searx_host=searx_host)
You can now use the search instance to query the searx API.
Searching#
Use the run() and
results() methods to query the searx API.
Other methods are available for convenience.
SearxResults is a convenience wrapper around the raw json result.
Example usage of the run method to make a search:
search.run(query="what is the best search engine?")
Engine Parameters#
You can pass any accepted searx search API parameters to the
SearxSearchWrapper instance.
In the following example we are using the
engines and the language parameters:
# assuming the searx host is set as above or exported as an env variable
s = SearxSearchWrapper(engines=['google', 'bing'],
language='es')
Search Tips#
Searx offers a special
search syntax
that can also be used instead of passing engine parameters.
For example the following query:
s = SearxSearchWrapper("langchain library", engines=['github'])
# can also be written as:
s = SearxSearchWrapper("langchain library !github")
# or even:
s = SearxSearchWrapper("langchain library !gh")
In some situations you might want to pass an extra string to the search query.
For example when the run() method is called by an agent. The search suffix can
also be used as a way to pass extra parameters to searx or the underlying search
engines.
# select the github engine by setting the search suffix on the instance
s = SearxSearchWrapper(searx_host=searx_host, query_suffix="!gh")
s.run("langchain library")
# or pass the suffix at the method level, here using the conventional google "site:" syntax
s.run("large language models", query_suffix="site:github.com")
NOTE: A search suffix can be defined on both the instance and the method level.
The resulting query will be the concatenation of the two with the former taking
precedence.
See SearxNG Configured Engines and
SearxNG Search Syntax
for more details.
Notes
This wrapper is based on the SearxNG fork searxng/searxng which is
better maintained than the original Searx project and offers more features.
Public SearxNG instances often use a rate limiter for API usage, so you might want to
use a self-hosted instance and disable the rate limiter.
If you are self-hosting an instance you can customize the rate limiter for your
own network as described here.
For a list of public SearxNG instances see https://searx.space/
class langchain.utilities.searx_search.SearxResults(data: str)[source]#
Dict like wrapper around search api results.
property answers: Any#
Helper accessor on the json result.
pydantic model langchain.utilities.searx_search.SearxSearchWrapper[source]#
Wrapper for Searx API.
To use you need to provide the searx host by passing the named parameter
searx_host or exporting the environment variable SEARX_HOST.
In some situations you might want to disable SSL verification, for example
if you are running searx locally. You can do this by passing the named parameter
unsecure. You can also pass the host url scheme as http to disable SSL.
Example
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://localhost:8888")
Example with SSL disabled:
from langchain.utilities import SearxSearchWrapper
# note the unsecure parameter is not needed if you pass the url scheme as
# http
searx = SearxSearchWrapper(searx_host="http://localhost:8888",
unsecure=True)
Validators
disable_ssl_warnings » unsecure
validate_params » all fields
field aiosession: Optional[Any] = None#
field categories: Optional[List[str]] = []#
field engines: Optional[List[str]] = []#
field headers: Optional[dict] = None#
field k: int = 10#
field params: dict [Optional]#
field query_suffix: Optional[str] = ''#
field searx_host: str = ''#
field unsecure: bool = False#
async aresults(query: str, num_results: int, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]#
Asynchronously query with json results.
Uses aiohttp. See results for more info.
async arun(query: str, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → str[source]#
Asynchronous version of run.
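A minimal async sketch of the arun method, assuming a reachable self-hosted SearxNG instance at the placeholder address below; arun mirrors run() but performs the request with aiohttp:
import asyncio
from langchain.utilities import SearxSearchWrapper

# assumption: a SearxNG instance is reachable at this placeholder host
search = SearxSearchWrapper(searx_host="http://localhost:8888")

async def main() -> None:
    # same call shape as run(), awaited
    answer = await search.arun("what is a large language model?")
    print(answer)

asyncio.run(main())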
results(query: str, num_results: int, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]#
Run query through Searx API and returns the results with metadata.
Parameters
query – The query to search for.
query_suffix – Extra suffix appended to the query.
num_results – Limit the number of results to return.
engines – List of engines to use for the query.
categories – List of categories to use for the query.
**kwargs – extra parameters to pass to the searx API.
Returns
{snippet: The description of the result.
title: The title of the result.
link: The link to the result.
engines: The engines used for the result.
category: Searx category of the result.
}
Return type
Dict with the following keys
run(query: str, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → str[source]#
Run query through Searx API and parse results.
You can pass any other params to the searx query API.
Parameters
query – The query to search for.
query_suffix – Extra suffix appended to the query.
engines – List of engines to use for the query.
categories – List of categories to use for the query.
**kwargs – extra parameters to pass to the searx API.
Returns
The result of the query.
Return type
str
Raises
ValueError – If an error occurred with the query.
Example
This will make a query to the qwant engine:
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://my.searx.host")
searx.run("what is the weather in France ?", engine="qwant")
# the same result can be achieved using the `!` syntax of searx
# to select the engine using `query_suffix`
searx.run("what is the weather in France ?", query_suffix="!qwant")
LLMs#
Wrappers on top of large language models APIs.
pydantic model langchain.llms.AI21[source]#
Wrapper around AI21 large language models.
To use, you should have the environment variable AI21_API_KEY
set with your API key.
Example
from langchain.llms import AI21
ai21 = AI21(model="j2-jumbo-instruct")
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field base_url: Optional[str] = None#
Base url to use, if None decides based on model name.
field countPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens according to count.
field frequencyPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens according to frequency.
field logitBias: Optional[Dict[str, float]] = None#
Adjust the probability of specific tokens being generated.
field maxTokens: int = 256#
The maximum number of tokens to generate in the completion.
field minTokens: int = 0#
The minimum number of tokens to generate in the completion.
field model: str = 'j2-jumbo-instruct'#
Model name to use.
field numResults: int = 1#
How many completions to generate for each prompt.
field presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens.
field temperature: float = 0.7#
What sampling temperature to use.
field topP: float = 1.0#
Total probability mass of tokens to consider at each step.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.AlephAlpha[source]#
Wrapper around Aleph Alpha large language models.
To use, you should have the aleph_alpha_client python package installed, and the
environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Parameters are explained more in depth here:
Aleph-Alpha/aleph-alpha-client
Example
from langchain.llms import AlephAlpha
alpeh_alpha = AlephAlpha(aleph_alpha_api_key="my-api-key")
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field aleph_alpha_api_key: Optional[str] = None#
API key for Aleph Alpha API.
field best_of: Optional[int] = None#
returns the one with the “best of” results
(highest log probability per token)
field completion_bias_exclusion_first_token_only: bool = False#
Only consider the first token for the completion_bias_exclusion.
field contextual_control_threshold: Optional[float] = None#
If set to None, attention control parameters only apply to those tokens that have
explicitly been set in the request.
If set to a non-None value, control parameters are also applied to similar tokens.
field control_log_additive: Optional[bool] = True#
True: apply control by adding the log(control_factor) to attention scores.
False: (attention_scores - attention_scores.min(-1)) * control_factor
field echo: bool = False#
Echo the prompt in the completion.
field frequency_penalty: float = 0.0#
Penalizes repeated tokens according to frequency.
field log_probs: Optional[int] = None#
Number of top log probabilities to be returned for each generated token.
field logit_bias: Optional[Dict[int, float]] = None#
The logit bias allows to influence the likelihood of generating tokens.
field maximum_tokens: int = 64#
The maximum number of tokens to be generated.
field minimum_tokens: Optional[int] = 0#
Generate at least this number of tokens.
field model: Optional[str] = 'luminous-base'#
Model name to use.
field n: int = 1#
How many completions to generate for each prompt.
field penalty_bias: Optional[str] = None#
Penalty bias for the completion.
field penalty_exceptions: Optional[List[str]] = None#
List of strings that may be generated without penalty,
regardless of other penalty settings
field penalty_exceptions_include_stop_sequences: Optional[bool] = None#
Should stop_sequences be included in penalty_exceptions.
field presence_penalty: float = 0.0#
Penalizes repeated tokens.
field raw_completion: bool = False#
Force the raw completion of the model to be returned.
field repetition_penalties_include_completion: bool = True#
Flag deciding whether presence penalty or frequency penalty
are updated from the completion.
field repetition_penalties_include_prompt: Optional[bool] = False#
Flag deciding whether presence penalty or frequency penalty are
updated from the prompt.
field stop_sequences: Optional[List[str]] = None#
Stop sequences to use.
field temperature: float = 0.0#
A non-negative float that tunes the degree of randomness in generation.
field tokens: Optional[bool] = False#
return tokens of completion.
field top_k: int = 0#
Number of most likely tokens to consider at each step.
field top_p: float = 0.0#
Total probability mass of tokens to consider at each step.
field use_multiplicative_presence_penalty: Optional[bool] = False#
Flag deciding whether presence penalty is applied
multiplicatively (True) or additively (False).
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Anthropic[source]#
Wrapper around Anthropic’s large language models.
To use, you should have the anthropic python package installed, and the
environment variable ANTHROPIC_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
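A minimal usage sketch, assuming the ANTHROPIC_API_KEY environment variable is set; the model name matches the documented default below and the prompt string is illustrative only:
from langchain.llms import Anthropic
# assumes ANTHROPIC_API_KEY is set in the environment
anthropic_llm = Anthropic(model="claude-v1")
response = anthropic_llm("Summarize what a large language model is in one sentence.")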
Validators
raise_warning » all fields
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field default_request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to Anthropic Completion API. Default is 600 seconds.
field max_tokens_to_sample: int = 256#
Denotes the number of tokens to predict per generation.
field model: str = 'claude-v1'#
Model name to use.
field streaming: bool = False#
Whether to stream the results.
field temperature: Optional[float] = None#
A non-negative float that tunes the degree of randomness in generation.
field top_k: Optional[int] = None#
Number of most likely tokens to consider at each step.
field top_p: Optional[float] = None#
Total probability mass of tokens to consider at each step.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator[source]#
Call Anthropic completion_stream and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt – The prompt to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from Anthropic.
Example
prompt = "Write a poem about a stream."
prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
generator = anthropic.stream(prompt)
for token in generator:
yield token
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.AzureOpenAI[source]#
Wrapper around Azure-specific OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import AzureOpenAI
openai = AzureOpenAI(model_name="text-davinci-003")
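Since Azure deployments are addressed by name, a hedged variant that also sets the deployment_name field documented below; the deployment name is a placeholder:
# "my-deployment" is a placeholder for the name of your Azure OpenAI deployment
openai = AzureOpenAI(deployment_name="my-deployment", model_name="text-davinci-003")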
Validators
build_extra » all fields
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field allowed_special: Union[Literal['all'], AbstractSet[str]] = {}#
Set of special tokens that are allowed.
field batch_size: int = 20#
Batch size to use when passing multiple documents to generate.
field best_of: int = 1#
Generates best_of completions server-side and returns the “best”.
field deployment_name: str = ''#
Deployment name to use.
field disallowed_special: Union[Literal['all'], Collection[str]] = 'all'#
Set of special tokens that are not allowed.
field frequency_penalty: float = 0#
Penalizes repeated tokens according to frequency.
field logit_bias: Optional[Dict[str, float]] [Optional]#
Adjust the probability of specific tokens being generated.
field max_retries: int = 6#
Maximum number of retries to make when generating.
field max_tokens: int = 256#
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the models maximal context size.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'text-davinci-003'#
Model name to use.
field n: int = 1#
How many completions to generate for each prompt.
field presence_penalty: float = 0#
Penalizes repeated tokens.
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to OpenAI completion API. Default is 600 seconds.
field streaming: bool = False#
Whether to stream the results or not.
field temperature: float = 0.7#
What sampling temperature to use.
field top_p: float = 1#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult. | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-16 | Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → langchain.schema.LLMResult#
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM. | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-17 | Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Calculate num tokens with tiktoken package.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]#
Get the sub prompts for llm call. | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
09c7c6e2de63-18 | Get the sub prompts for llm call.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
max_tokens_for_prompt(prompt: str) → int#
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
modelname_to_contextsize(modelname: str) → int#
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]#
Prepare the params for streaming.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator#
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt – The prompts to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Banana[source]#
Wrapper around Banana large language models.
To use, you should have the banana-dev python package installed,
and the environment variable BANANA_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
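A minimal usage sketch, assuming the BANANA_API_KEY environment variable is set; the model key value is a placeholder for your deployed Banana model:
from langchain.llms import Banana
# model_key identifies your deployed Banana model endpoint; the value below is a placeholder
banana = Banana(model_key="your-model-key")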
Validators
build_extra » all fields
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field model_key: str = ''#
model endpoint to use
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.CerebriumAI[source]#
Wrapper around CerebriumAI large language models.
To use, you should have the cerebrium python package installed, and the
environment variable CEREBRIUMAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
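A minimal usage sketch, assuming the CEREBRIUMAI_API_KEY environment variable is set; the endpoint URL is a placeholder you must replace with your deployed model endpoint:
from langchain.llms import CerebriumAI
# endpoint_url should point at your deployed CerebriumAI model endpoint (placeholder left empty here)
cerebrium = CerebriumAI(endpoint_url="")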
Validators
build_extra » all fields
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field endpoint_url: str = ''#
model endpoint to use
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Cohere[source]#
Wrapper around Cohere large language models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
from langchain.llms import Cohere
cohere = Cohere(model="gptd-instruct-tft", cohere_api_key="my-api-key")
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field frequency_penalty: float = 0.0#
Penalizes repeated tokens according to frequency. Between 0 and 1.
field k: int = 0#
Number of most likely tokens to consider at each step.
field max_tokens: int = 256#
Denotes the number of tokens to predict per generation.
field model: Optional[str] = None#
Model name to use.
field p: int = 1#
Total probability mass of tokens to consider at each step.
field presence_penalty: float = 0.0#
Penalizes repeated tokens. Between 0 and 1.
field temperature: float = 0.75#
A non-negative float that tunes the degree of randomness in generation.
field truncate: Optional[str] = None#
Specify how the client handles inputs longer than the maximum token
length: Truncate from START, END or NONE
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.DeepInfra[source]#
Wrapper around DeepInfra deployed models.
To use, you should have the requests python package installed, and the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import DeepInfra
di = DeepInfra(model_id="google/flan-t5-xl",
deepinfra_api_token="my-api-key")
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.ForefrontAI[source]#
Wrapper around ForefrontAI large language models.
To use, you should have the environment variable FOREFRONTAI_API_KEY
set with your API key.
Example
from langchain.llms import ForefrontAI
forefrontai = ForefrontAI(endpoint_url="")
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field base_url: Optional[str] = None#
Base url to use, if None decides based on model name.
field endpoint_url: str = ''#
Model name to use.
field length: int = 256#
The maximum number of tokens to generate in the completion.
field repetition_penalty: int = 1#
Penalizes repeated tokens according to frequency.
field temperature: float = 0.7#
What sampling temperature to use.
field top_k: int = 40#
The number of highest probability vocabulary tokens to
keep for top-k-filtering.
field top_p: float = 1.0#
Total probability mass of tokens to consider at each step.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.GPT4All[source]#
Wrapper around GPT4All language models.
To use, you should have the pyllamacpp python package installed, the
pre-trained model file, and the model’s config information.
Example
from langchain.llms import GPT4All
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)
# Simplest invocation
response = model("Once upon a time, ")
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field echo: Optional[bool] = False#
Whether to echo the prompt.
field embedding: bool = False#
Use embedding mode only.
field f16_kv: bool = False#
Use half-precision for key/value cache.
field logits_all: bool = False#
Return logits for all tokens, not just the last token.
field model: str [Required]#
Path to the pre-trained GPT4All model file.
field n_batch: int = 1#
Batch size for prompt processing.
field n_ctx: int = 512#
Token context window.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
field n_predict: Optional[int] = 256#
The maximum number of tokens to generate.
field n_threads: Optional[int] = 4#
Number of threads to use.
field repeat_last_n: Optional[int] = 64#
Last n tokens to penalize
field repeat_penalty: Optional[float] = 1.3#
The penalty to apply to repeated tokens.
field seed: int = 0#
Seed. If -1, a random seed is used.
field stop: Optional[List[str]] = []#
A list of strings to stop generation when encountered.
field streaming: bool = False#
Whether to stream the results or not.
field temp: Optional[float] = 0.8#
The temperature to use for sampling.
field top_k: Optional[int] = 40#
The top-k value to use for sampling.
field top_p: Optional[float] = 0.95#
The top-p value to use for sampling.
field use_mlock: bool = False#
Force system to keep model in RAM.
field vocab_only: bool = False#
Only load the vocabulary, no weights.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.GooseAI[source]#
Wrapper around GooseAI large language models.
To use, you should have the openai python package installed, and the
environment variable GOOSEAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
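A minimal usage sketch, assuming the GOOSEAI_API_KEY environment variable is set and using the default model name documented below:
from langchain.llms import GooseAI
# assumes GOOSEAI_API_KEY is set in the environment; "gpt-neo-20b" is the documented default
gooseai = GooseAI(model_name="gpt-neo-20b")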
Validators
build_extra » all fields
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field frequency_penalty: float = 0#
Penalizes repeated tokens according to frequency.
field logit_bias: Optional[Dict[str, float]] [Optional]#
Adjust the probability of specific tokens being generated.
field max_tokens: int = 256#
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the models maximal context size.
field min_tokens: int = 1#
The minimum number of tokens to generate in the completion.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'gpt-neo-20b'#
Model name to use
field n: int = 1#
How many completions to generate for each prompt.
field presence_penalty: float = 0#
Penalizes repeated tokens.
field temperature: float = 0.7#
What sampling temperature to use.
field top_p: float = 1#
Total probability mass of tokens to consider at each step.
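Because unrecognized keyword arguments are collected by the build_extra validator into model_kwargs and forwarded to the underlying create call, provider-specific options can be passed at construction time; the values and the repetition_penalty key below are illustrative assumptions:
llm = GooseAI(
    model_name="gpt-neo-20b",
    temperature=0.2,  # documented field above
    max_tokens=128,  # documented field above
    repetition_penalty=1.2,  # assumed provider-specific key, captured in model_kwargs
)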
__call__(prompt: str, stop: Optional[List[str]] = None) → str# | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
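A sketch of asynchronous generation (requires a running event loop and a provider whose wrapper implements async calls; otherwise NotImplementedError is raised):
import asyncio

async def main() -> None:
    # llm is an assumed, already-configured LLM instance
    result = await llm.agenerate(["Tell me a joke."])
    print(result.generations[0][0].text)

asyncio.run(main())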
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message. | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.HuggingFaceEndpoint[source]#
Wrapper around HuggingFaceHub Inference Endpoints. | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import HuggingFaceEndpoint
endpoint_url = (
"https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud"
)
hf = HuggingFaceEndpoint(
endpoint_url=endpoint_url,
huggingfacehub_api_token="my-api-key"
)
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field endpoint_url: str = ''#
Endpoint URL to use.
field model_kwargs: Optional[dict] = None#
Key word arguments to pass to the model.
field task: Optional[str] = None#
Task to call the model with. Should be a task that returns generated_text.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
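A sketch of invoking the endpoint directly once configured (reuses the hf instance from the Example above; the prompt and stop sequence are illustrative):
completion = hf("What is the capital of France?", stop=["\n"])
print(completion)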
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input. | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM. | /content/https://python.langchain.com/en/latest/reference/modules/llms.html |