Unnamed: 0 | link | text
---|---|---|
96 | https://python.langchain.com/docs/use_cases/question_answering/how_to/flare | Retrieve as you generate with FLARE. This notebook is an implementation of Forward-Looking Active REtrieval augmented generation (FLARE). Please see the original repo here. The basic idea is: start answering a question; if you start generating tokens the model is uncertain about, look up relevant documents; use those documents to continue generating; repeat until finished. There is a lot of cool detail in how the lookup of relevant documents is done.
Basically, the tokens that the model is uncertain about are highlighted, and then an LLM is called to generate a question that would lead to that answer. For example, if the generated text is Joe Biden went to Harvard, and the token the model was uncertain about was Harvard, then a good generated question would be where did Joe Biden go to college. This generated question is then used in a retrieval step to fetch relevant documents. In order to set up this chain, we will need three things: an LLM to generate the answer; an LLM to generate hypothetical questions to use in retrieval; and a retriever to use to look up relevant documents. The LLM that we use to generate the answer needs to return logprobs so we can identify uncertain tokens. For that reason, we HIGHLY recommend that you use the OpenAI wrapper (NB: not the ChatOpenAI wrapper, as that does not return logprobs). The LLM we use to generate hypothetical questions to use in retrieval can be anything. In this notebook we will use ChatOpenAI because it is fast and cheap. The retriever can be anything. In this notebook we will use the Serper search engine, because it is cheap. Other important parameters to understand: max_generation_len: the maximum number of tokens to generate before stopping to check if any are uncertain; min_prob: any tokens generated with probability below this will be considered uncertain. Importsimport osos.environ["SERPER_API_KEY"] = ""os.environ["OPENAI_API_KEY"] = ""import reimport numpy as npfrom langchain.schema import BaseRetrieverfrom langchain.callbacks.manager import ( AsyncCallbackManagerForRetrieverRun, CallbackManagerForRetrieverRun,)from langchain.utilities import GoogleSerperAPIWrapperfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.chat_models import ChatOpenAIfrom langchain.llms import OpenAIfrom langchain.schema import Documentfrom typing import Any, ListRetrieverclass SerperSearchRetriever(BaseRetriever): search: GoogleSerperAPIWrapper = None def _get_relevant_documents( self, query: str, *, run_manager: CallbackManagerForRetrieverRun, **kwargs: Any ) -> List[Document]: return [Document(page_content=self.search.run(query))] async def _aget_relevant_documents( self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun, **kwargs: Any, ) -> List[Document]: raise NotImplementedError()retriever = SerperSearchRetriever(search=GoogleSerperAPIWrapper())FLARE Chain# We set this so we can see what exactly is going onimport langchainlangchain.verbose = Truefrom langchain.chains import FlareChainflare = FlareChain.from_llm( ChatOpenAI(temperature=0), retriever=retriever, max_generation_len=164, min_prob=0.3,)query = "explain in great detail the difference between the langchain framework and baby agi"flare.run(query) > Entering new FlareChain chain... Current Response: Prompt after formatting: Respond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED. >>> CONTEXT: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> RESPONSE: > Entering new QuestionGeneratorChain chain... 
Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " decentralized platform for natural language processing" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " uses a blockchain" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. 
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " distributed ledger to" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " process data, allowing for secure and transparent data sharing." is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " set of tools" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. 
Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " help developers create" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " create an AI system" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " NLP applications" is: > Finished chain. Generated Questions: ['What is the Langchain Framework?', 'What technology does the Langchain Framework use to store and process data for secure and transparent data sharing?', 'What technology does the Langchain Framework use to store and process data?', 'What does the Langchain Framework use a blockchain-based distributed ledger for?', 'What does the Langchain Framework provide in addition to a decentralized platform for natural language processing applications?', 'What set of tools and services does the Langchain Framework provide?', 'What is the purpose of Baby AGI?', 'What type of applications is the Langchain Framework designed for?'] > Entering new _OpenAIResponseChain chain... Prompt after formatting: Respond to the user message using any relevant context. 
If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED. >>> CONTEXT: LangChain: Software. LangChain is a software development framework designed to simplify the creation of applications using large language models. LangChain Initial release date: October 2022. LangChain Programming languages: Python and JavaScript. LangChain Developer(s): Harrison Chase. LangChain License: MIT License. LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only ... Type: Software framework. At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. LangChain is a powerful tool that can be used to work with Large Language Models (LLMs). LLMs are very general in nature, which means that while they can ... LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. LangChain is a software development framework designed to simplify the creation of applications using large language models (LLMs). Written in: Python and JavaScript. Initial release: October 2022. LangChain - The A.I-native developer toolkit We started LangChain with the intent to build a modular and flexible framework for developing A.I- ... LangChain explained in 3 minutes - LangChain is a ... Duration: 3:03. Posted: Apr 13, 2023. LangChain is a framework built to help you build LLM-powered applications more easily by providing you with the following:. LangChain is a framework that enables quick and easy development of applications that make use of Large Language Models, for example, GPT-3. LangChain is a powerful open-source framework for developing applications powered by language models. It connects to the AI models you want to ... LangChain is a framework for including AI from large language models inside data pipelines and applications. This tutorial provides an overview of what you ... Missing: secure | Must include:secure. Blockchain is the best way to secure the data of the shared community. Utilizing the capabilities of the blockchain nobody can read or interfere ... This modern technology consists of a chain of blocks that allows to securely store all committed transactions using shared and distributed ... A Blockchain network is used in the healthcare system to preserve and exchange patient data through hospitals, diagnostic laboratories, pharmacy firms, and ... In this article, I will walk you through the process of using the LangChain.js library with Google Cloud Functions, helping you leverage the ... LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. Missing: transparent | Must include:transparent. This technology keeps a distributed ledger on each blockchain node, making it more secure and transparent. The blockchain network can operate smart ... blockchain technology can offer a highly secured health data ledger to ... framework can be employed to store encrypted healthcare data in a ... In a simplified way, Blockchain is a data structure that stores transactions in an ordered way and linked to the previous block, serving as a ... Blockchain technology is a decentralized, distributed ledger that stores the record of ownership of digital assets. Missing: Langchain | Must include:Langchain. 
LangChain is a framework for including AI from large language models inside data pipelines and applications. This tutorial provides an overview of what you ... LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. This documentation covers the steps to integrate Pinecone, a high-performance vector database, with LangChain, a framework for building applications powered ... The ability to connect to any model, ingest any custom database, and build upon a framework that can take action provides numerous use cases for ... With LangChain, developers can use a framework that abstracts the core building blocks of LLM applications. LangChain empowers developers to ... Build a question-answering tool based on financial data with LangChain & Deep Lake's unified & streamable data store. Browse applications built on LangChain technology. Explore PoC and MVP applications created by our community and discover innovative use cases for LangChain ... LangChain is a great framework that can be used for developing applications powered by LLMs. When you intend to enhance your application ... In this blog, we'll introduce you to LangChain and Ray Serve and how to use them to build a search engine using LLM embeddings and a vector ... The LinkChain Framework simplifies embedding creation and storage using Pinecone and Chroma, with code that loads files, splits documents, and creates embedding ... Missing: technology | Must include:technology. Blockchain is one type of a distributed ledger. Distributed ledgers use independent computers (referred to as nodes) to record, share and ... Missing: Langchain | Must include:Langchain. Blockchain is used in distributed storage software where huge data is broken down into chunks. This is available in encrypted data across a ... People sometimes use the terms 'Blockchain' and 'Distributed Ledger' interchangeably. This post aims to analyze the features of each. A distributed ledger ... Missing: Framework | Must include:Framework. Think of a “distributed ledger” that uses cryptography to allow each participant in the transaction to add to the ledger in a secure way without ... In this paper, we provide an overview of the history of trade settlement and discuss this nascent technology that may now transform traditional ... Missing: Langchain | Must include:Langchain. LangChain is a blockchain-based language education platform that aims to revolutionize the way people learn languages. Missing: Framework | Must include:Framework. It uses the distributed ledger technology framework and Smart contract engine for building scalable Business Blockchain applications. The fabric ... It looks at the assets the use case is handling, the different parties conducting transactions, and the smart contract, distributed ... Are you curious to know how Blockchain and Distributed ... Duration: 44:31. Posted: May 4, 2021. A blockchain is a distributed and immutable ledger to transfer ownership, record transactions, track assets, and ensure transparency, security, trust and value ... Missing: Langchain | Must include:Langchain. LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. Missing: decentralized | Must include:decentralized. LangChain, created by Harrison Chase, is a Python library that provides out-of-the-box support to build NLP applications using LLMs. Missing: decentralized | Must include:decentralized. 
LangChain provides a standard interface for chains, enabling developers to create sequences of calls that go beyond a single LLM call. Chains ... Missing: decentralized platform natural. LangChain is a powerful framework that simplifies the process of building advanced language model applications. Missing: platform | Must include:platform. Are your language models ignoring previous instructions ... Duration: 32:23. Posted: Feb 21, 2023. LangChain is a framework that enables quick and easy development of applications ... Prompting is the new way of programming NLP models. Missing: decentralized platform. It then uses natural language processing and machine learning algorithms to search ... Summarization is handled via cohere, QnA is handled via langchain, ... LangChain is a framework for developing applications powered by language models. ... There are several main modules that LangChain provides support for. Missing: decentralized platform. In the healthcare-chain system, blockchain provides an appreciated secure ... The entire process of adding new and previous block data is performed based on ... ChatGPT is a large language model developed by OpenAI, ... tool for a wide range of applications, including natural language processing, ... LangChain is a powerful tool that can be used to work with Large Language ... If an API key has been provided, create an OpenAI language model instance At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. A tutorial of the six core modules of the LangChain Python package covering models, prompts, chains, agents, indexes, and memory with OpenAI ... LangChain's collection of tools refers to a set of tools provided by the LangChain framework for developing applications powered by language models. LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only ... LangChain is an open-source library that provides developers with the tools to build applications powered by large language models (LLMs). LangChain is a framework for including AI from large language models inside data pipelines and applications. This tutorial provides an overview of what you ... Plan-and-Execute Agents · Feature Stores and LLMs · Structured Tools · Auto-Evaluator Opportunities · Callbacks Improvements · Unleashing the power ... Tool: A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. · LLM: The language model ... LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications. Baby AGI has the ability to complete tasks, generate new tasks based on previous results, and prioritize tasks in real-time. This system is exploring and demonstrating to us the potential of large language models, such as GPT and how it can autonomously perform tasks. Apr 17, 2023 At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs. >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> RESPONSE: > Finished chain. > Finished chain. 
' LangChain is a framework for developing applications powered by language models. It provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications. On the other hand, Baby AGI is an AI system that is exploring and demonstrating the potential of large language models, such as GPT, and how it can autonomously perform tasks. Baby AGI has the ability to complete tasks, generate new tasks based on previous results, and prioritize tasks in real-time. 'llm = OpenAI()llm(query) '\n\nThe Langchain framework and Baby AGI are both artificial intelligence (AI) frameworks that are used to create intelligent agents. The Langchain framework is a supervised learning system that is based on the concept of “language chains”. It uses a set of rules to map natural language inputs to specific outputs. It is a general-purpose AI framework and can be used to build applications such as natural language processing (NLP), chatbots, and more.\n\nBaby AGI, on the other hand, is an unsupervised learning system that uses neural networks and reinforcement learning to learn from its environment. It is used to create intelligent agents that can adapt to changing environments. It is a more advanced AI system and can be used to build more complex applications such as game playing, robotic vision, and more.\n\nThe main difference between the two is that the Langchain framework uses supervised learning while Baby AGI uses unsupervised learning. The Langchain framework is a general-purpose AI framework that can be used for various applications, while Baby AGI is a more advanced AI system that can be used to create more complex applications.'flare.run("how are the origin stories of langchain and bitcoin similar or different?") > Entering new FlareChain chain... Current Response: Prompt after formatting: Respond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED. >>> CONTEXT: >>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different? >>> RESPONSE: > Entering new QuestionGeneratorChain chain... Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different? >>> EXISTING PARTIAL RESPONSE: Langchain and Bitcoin have very different origin stories. Bitcoin was created by the mysterious Satoshi Nakamoto in 2008 as a decentralized digital currency. Langchain, on the other hand, was created in 2020 by a team of developers as a platform for creating and managing decentralized language learning applications. FINISHED The question to which the answer is the term/entity/phrase " very different origin" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different? >>> EXISTING PARTIAL RESPONSE: Langchain and Bitcoin have very different origin stories. Bitcoin was created by the mysterious Satoshi Nakamoto in 2008 as a decentralized digital currency. Langchain, on the other hand, was created in 2020 by a team of developers as a platform for creating and managing decentralized language learning applications. 
FINISHED The question to which the answer is the term/entity/phrase " 2020 by a" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different? >>> EXISTING PARTIAL RESPONSE: Langchain and Bitcoin have very different origin stories. Bitcoin was created by the mysterious Satoshi Nakamoto in 2008 as a decentralized digital currency. Langchain, on the other hand, was created in 2020 by a team of developers as a platform for creating and managing decentralized language learning applications. FINISHED The question to which the answer is the term/entity/phrase " developers as a platform for creating and managing decentralized language learning applications." is: > Finished chain. Generated Questions: ['How would you describe the origin stories of Langchain and Bitcoin in terms of their similarities or differences?', 'When was Langchain created and by whom?', 'What was the purpose of creating Langchain?'] > Entering new _OpenAIResponseChain chain... Prompt after formatting: Respond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED. >>> CONTEXT: Bitcoin and Ethereum have many similarities but different long-term visions and limitations. Ethereum changed from proof of work to proof of ... Bitcoin will be around for many years and examining its white paper origins is a great exercise in understanding why. Satoshi Nakamoto's blueprint describes ... Bitcoin is a new currency that was created in 2009 by an unknown person using the alias Satoshi Nakamoto. Transactions are made with no middle men – meaning, no ... Missing: Langchain | Must include:Langchain. By comparison, Bitcoin transaction speeds are tremendously lower. ... learn about its history and its role in the emergence of the Bitcoin ... LangChain is a powerful framework that simplifies the process of ... tasks like document retrieval, clustering, and similarity comparisons. Key terms: Bitcoin System, Blockchain Technology, ... Furthermore, the research paper will discuss and compare the five payment. Blockchain first appeared in Nakamoto's Bitcoin white paper that describes a new decentralized cryptocurrency [1]. Bitcoin takes the blockchain technology ... Missing: stories | Must include:stories. A score of 0 means there were not enough data for this term. Google trends was accessed on 5 November 2018 with searches for bitcoin, euro, gold ... Contracts, transactions, and records of them provide critical structure in our economic system, but they haven't kept up with the world's digital ... Missing: Langchain | Must include:Langchain. Of course, traders try to make a profit on their portfolio in this way.The difference between investing and trading is the regularity with which ... After all these giant leaps forward in the LLM space, OpenAI released ChatGPT — thrusting LLMs into the spotlight. LangChain appeared around the same time. Its creator, Harrison Chase, made the first commit in late October 2022. Leaving a short couple of months of development before getting caught in the LLM wave. At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. 
The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs. >>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different? >>> RESPONSE: > Finished chain. > Finished chain. ' The origin stories of LangChain and Bitcoin are quite different. Bitcoin |
was created in 2009 by an unknown person using the alias Satoshi Nakamoto. LangChain was created in late October 2022 by Harrison Chase. Bitcoin is a decentralized cryptocurrency | while LangChain is a framework built around LLMs. '" | null |
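A minimal, self-contained sketch of the FlareChain setup shown in the row above. It swaps the Serper search retriever for a tiny in-memory FAISS index so the example can run without a SERPER_API_KEY; the toy texts and the use of FAISS are assumptions for illustration only, while the FlareChain.from_llm call mirrors the notebook. It assumes the legacy `langchain.*` import paths of that era, `faiss-cpu` installed, and an OPENAI_API_KEY set (FLARE's response LLM needs OpenAI logprobs, as noted above).

```python
# Hedged sketch: FLARE over a small in-memory FAISS index instead of Serper.
from langchain.chains import FlareChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# A toy corpus standing in for web search results (illustrative content only).
texts = [
    "LangChain is an open-source framework for building applications with LLMs.",
    "BabyAGI is an autonomous agent that creates, prioritizes, and executes tasks.",
]
retriever = FAISS.from_texts(texts, OpenAIEmbeddings()).as_retriever()

flare = FlareChain.from_llm(
    ChatOpenAI(temperature=0),   # same call pattern as the notebook above
    retriever=retriever,
    max_generation_len=164,      # tokens generated before checking uncertainty
    min_prob=0.3,                # tokens below this probability trigger retrieval
)

print(flare.run("What is the difference between LangChain and BabyAGI?"))
```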
97 | https://python.langchain.com/docs/use_cases/question_answering/how_to/hyde | Question AnsweringHow toImprove document indexing with HyDEOn this pageImprove document indexing with HyDEThis notebook goes over how to use Hypothetical Document Embeddings (HyDE), as described in this paper. At a high level, HyDE is an embedding technique that takes queries, generates a hypothetical answer, and then embeds that generated document and uses that as the final example. In order to use HyDE, we therefore need to provide a base embedding model, as well as an LLMChain that can be used to generate those documents. By default, the HyDE class comes with some default prompts to use (see the paper for more details on them), but we can also create our own.from langchain.llms import OpenAIfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.chains import LLMChain, HypotheticalDocumentEmbedderfrom langchain.prompts import PromptTemplatebase_embeddings = OpenAIEmbeddings()llm = OpenAI()# Load with `web_search` promptembeddings = HypotheticalDocumentEmbedder.from_llm(llm, base_embeddings, "web_search")# Now we can use it as any embedding class!result = embeddings.embed_query("Where is the Taj Mahal?")Multiple generationsWe can also generate multiple documents and then combine the embeddings for those. By default, we combine those by taking the average. We can do this by changing the LLM we use to generate documents to return multiple things.multi_llm = OpenAI(n=4, best_of=4)embeddings = HypotheticalDocumentEmbedder.from_llm( multi_llm, base_embeddings, "web_search")result = embeddings.embed_query("Where is the Taj Mahal?")Using our own promptsBesides using preconfigured prompts, we can also easily construct our own prompts and use those in the LLMChain that is generating the documents. This can be useful if we know the domain our queries will be in, as we can condition the prompt to generate text more similar to that.In the example below, let's condition it to generate text about a state of the union address (because we will use that in the next example).prompt_template = """Please answer the user's question about the most recent state of the union addressQuestion: {question}Answer:"""prompt = PromptTemplate(input_variables=["question"], template=prompt_template)llm_chain = LLMChain(llm=llm, prompt=prompt)embeddings = HypotheticalDocumentEmbedder( llm_chain=llm_chain, base_embeddings=base_embeddings)result = embeddings.embed_query( "What did the president say about Ketanji Brown Jackson")Using HyDENow that we have HyDE, we can use it as we would any other embedding class! Here is using it to find similar passages in the state of the union example.from langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Chromawith open("../../state_of_the_union.txt") as f: state_of_the_union = f.read()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_text(state_of_the_union)docsearch = Chroma.from_texts(texts, embeddings)query = "What did the president say about Ketanji Brown Jackson"docs = docsearch.similarity_search(query) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient.print(docs[0].page_content) In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. 
And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. |
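A compact sketch of the HyDE flow from the row above, using the custom-prompt variant and then searching a small index. The notebook uses Chroma; FAISS and the two toy passages here are assumed stand-ins for illustration. It assumes the legacy `langchain.*` import paths and an OPENAI_API_KEY.

```python
# Hedged sketch: HyDE with a domain-conditioned prompt, then similarity search.
from langchain.chains import LLMChain, HypotheticalDocumentEmbedder
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.vectorstores import FAISS

base_embeddings = OpenAIEmbeddings()
llm = OpenAI()

# Condition the hypothetical document on the domain we expect queries from.
prompt = PromptTemplate(
    input_variables=["question"],
    template=(
        "Please answer the user's question about the most recent "
        "state of the union address\nQuestion: {question}\nAnswer:"
    ),
)
embeddings = HypotheticalDocumentEmbedder(
    llm_chain=LLMChain(llm=llm, prompt=prompt),
    base_embeddings=base_embeddings,
)

# Index a couple of toy passages and query with HyDE: the query is expanded
# into a hypothetical answer, and that answer's embedding is compared against
# the indexed passages.
docsearch = FAISS.from_texts(
    ["The president spoke about the economy.", "The president nominated a judge."],
    embeddings,
)
print(docsearch.similarity_search("Who did the president nominate?", k=1))
```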
98 | https://python.langchain.com/docs/use_cases/question_answering/how_to/local_retrieval_qa | Use local LLMs. The popularity of projects like PrivateGPT, llama.cpp, and GPT4All underscore the importance of running LLMs locally.LangChain has integrations with many open source LLMs that can be run locally.See here for setup instructions for these LLMs. For example, here we show how to run GPT4All or LLaMA2 locally (e.g., on your laptop) using local embeddings and a local LLM.Document LoadingFirst, install packages needed for local embeddings and vector storage.pip install gpt4all chromadb langchainhubLoad and split an example document.We'll use a blog post on agents as an example.from langchain.document_loaders import WebBaseLoaderloader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")data = loader.load()from langchain.text_splitter import RecursiveCharacterTextSplittertext_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)all_splits = text_splitter.split_documents(data)Next, the below steps will download the GPT4All embeddings locally (if you don't already have them).from langchain.vectorstores import Chromafrom langchain.embeddings import GPT4AllEmbeddingsvectorstore = Chroma.from_documents(documents=all_splits, embedding=GPT4AllEmbeddings()) Found model file at /Users/rlm/.cache/gpt4all/ggml-all-MiniLM-L6-v2-f16.bin objc[49534]: Class GGMLMetalClass is implemented in both /Users/rlm/miniforge3/envs/llama2/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libreplit-mainline-metal.dylib (0x131614208) and /Users/rlm/miniforge3/envs/llama2/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libllamamodel-mainline-metal.dylib (0x131988208). One of the two will be used. Which one is undefined.Test that similarity search is working with our local embeddings.question = "What are the approaches to Task Decomposition?"docs = vectorstore.similarity_search(question)len(docs) 4docs[0] Document(page_content='Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.', metadata={'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': "LLM Powered Autonomous Agents | Lil'Log"})ModelLLaMA2Note: new versions of llama-cpp-python use GGUF model files (see here).If you have an existing GGML model, see here for instructions for conversion for GGUF. 
And / or, you can download a GGUF converted model (e.g., here).Finally, as noted in detail here, install llama-cpp-pythonpip install llama-cpp-pythonTo enable use of GPU on Apple Silicon, follow the steps here to use the Python binding with Metal support.In particular, ensure that conda is using the correct virtual environment that you created (miniforge3).E.g., for me:conda activate /Users/rlm/miniforge3/envs/llamaWith this confirmed:CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 /Users/rlm/miniforge3/envs/llama/bin/pip install -U llama-cpp-python --no-cache-dirfrom langchain.llms import LlamaCppfrom langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerSetting model parameters as noted in the llama.cpp docs.n_gpu_layers = 1 # Metal set to 1 is enough.n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])# Make sure the model path is correct for your system!llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/models/llama-2-13b-chat.ggufv3.q4_0.bin", n_gpu_layers=n_gpu_layers, n_batch=n_batch, n_ctx=2048, f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls callback_manager=callback_manager, verbose=True,)Note that these indicate that Metal was enabled properly:ggml_metal_init: allocatingggml_metal_init: using MPSllm("Simulate a rap battle between Stephen Colbert and John Oliver") Llama.generate: prefix-match hit by jonathan Here's the hypothetical rap battle: [Stephen Colbert]: Yo, this is Stephen Colbert, known for my comedy show. I'm here to put some sense in your mind, like an enema do-go. Your opponent? A man of laughter and witty quips, John Oliver! Now let's see who gets the most laughs while taking shots at each other [John Oliver]: Yo, this is John Oliver, known for my own comedy show. I'm here to take your mind on an adventure through wit and humor. But first, allow me to you to our contestant: Stephen Colbert! His show has been around since the '90s, but it's time to see who can out-rap whom [Stephen Colbert]: You claim to be a witty man, John Oliver, with your British charm and clever remarks. But my knows that I'm America's funnyman! Who's the one taking you? Nobody! [John Oliver]: Hey Stephen Colbert, don't get too cocky. You may llama_print_timings: load time = 4481.74 ms llama_print_timings: sample time = 183.05 ms / 256 runs ( 0.72 ms per token, 1398.53 tokens per second) llama_print_timings: prompt eval time = 456.05 ms / 13 tokens ( 35.08 ms per token, 28.51 tokens per second) llama_print_timings: eval time = 7375.20 ms / 255 runs ( 28.92 ms per token, 34.58 tokens per second) llama_print_timings: total time = 8388.92 ms "by jonathan \n\nHere's the hypothetical rap battle:\n\n[Stephen Colbert]: Yo, this is Stephen Colbert, known for my comedy show. I'm here to put some sense in your mind, like an enema do-go. Your opponent? A man of laughter and witty quips, John Oliver! Now let's see who gets the most laughs while taking shots at each other\n\n[John Oliver]: Yo, this is John Oliver, known for my own comedy show. I'm here to take your mind on an adventure through wit and humor. But first, allow me to you to our contestant: Stephen Colbert! His show has been around since the '90s, but it's time to see who can out-rap whom\n\n[Stephen Colbert]: You claim to be a witty man, John Oliver, with your British charm and clever remarks. 
But my knows that I'm America's funnyman! Who's the one taking you? Nobody!\n\n[John Oliver]: Hey Stephen Colbert, don't get too cocky. You may"GPT4AllSimilarly, we can use GPT4All.Download the GPT4All model binary.The Model Explorer on the GPT4All website is a great way to choose and download a model.Then, specify the path that you downloaded it to.E.g., for me, the model lives here:/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.binfrom langchain.llms import GPT4Allllm = GPT4All( model="/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin", max_tokens=2048,)LLMChainRun an LLMChain (see here) with either model by passing in the retrieved docs and a simple prompt.It formats the prompt template using the input key values provided and passes the formatted string to GPT4All, LLama-V2, or another specified LLM.In this case, the list of retrieved documents (docs) above are passed into {context}.from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChain# Promptprompt = PromptTemplate.from_template( "Summarize the main themes in these retrieved docs: {docs}")# Chainllm_chain = LLMChain(llm=llm, prompt=prompt)# Runquestion = "What are the approaches to Task Decomposition?"docs = vectorstore.similarity_search(question)result = llm_chain(docs)# Outputresult["text"] Llama.generate: prefix-match hit Based on the retrieved documents, the main themes are: 1. Task decomposition: The ability to break down complex tasks into smaller subtasks, which can be handled by an LLM or other components of the agent system. 2. LLM as the core controller: The use of a large language model (LLM) as the primary controller of an autonomous agent system, complemented by other key components such as a knowledge graph and a planner. 3. Potentiality of LLM: The idea that LLMs have the potential to be used as powerful general problem solvers, not just for generating well-written copies but also for solving complex tasks and achieving human-like intelligence. 4. Challenges in long-term planning: The challenges in planning over a lengthy history and effectively exploring the solution space, which are important limitations of current LLM-based autonomous agent systems. llama_print_timings: load time = 1191.88 ms llama_print_timings: sample time = 134.47 ms / 193 runs ( 0.70 ms per token, 1435.25 tokens per second) llama_print_timings: prompt eval time = 39470.18 ms / 1055 tokens ( 37.41 ms per token, 26.73 tokens per second) llama_print_timings: eval time = 8090.85 ms / 192 runs ( 42.14 ms per token, 23.73 tokens per second) llama_print_timings: total time = 47943.12 ms '\nBased on the retrieved documents, the main themes are:\n1. Task decomposition: The ability to break down complex tasks into smaller subtasks, which can be handled by an LLM or other components of the agent system.\n2. LLM as the core controller: The use of a large language model (LLM) as the primary controller of an autonomous agent system, complemented by other key components such as a knowledge graph and a planner.\n3. Potentiality of LLM: The idea that LLMs have the potential to be used as powerful general problem solvers, not just for generating well-written copies but also for solving complex tasks and achieving human-like intelligence.\n4. 
Challenges in long-term planning: The challenges in planning over a lengthy history and effectively exploring the solution space, which are important limitations of current LLM-based autonomous agent systems.'QA ChainWe can use a QA chain to handle our question above.chain_type="stuff" (see here) means that all the docs will be added (stuffed) into a prompt.We can also use the LangChain Prompt Hub to store and fetch prompts that are model-specific.This will work with your LangSmith API key.Let's try with a default RAG prompt, here.pip install langchainhub# Prompt from langchain import hubrag_prompt = hub.pull("rlm/rag-prompt")from langchain.chains.question_answering import load_qa_chain# Chainchain = load_qa_chain(llm, chain_type="stuff", prompt=rag_prompt)# Runchain({"input_documents": docs, "question": question}, return_only_outputs=True) Llama.generate: prefix-match hit Task can be done by down a task into smaller subtasks, using simple prompting like "Steps for XYZ." or task-specific like "Write a story outline" for writing a novel. llama_print_timings: load time = 11326.20 ms llama_print_timings: sample time = 33.03 ms / 47 runs ( 0.70 ms per token, 1422.86 tokens per second) llama_print_timings: prompt eval time = 1387.31 ms / 242 tokens ( 5.73 ms per token, 174.44 tokens per second) llama_print_timings: eval time = 1321.62 ms / 46 runs ( 28.73 ms per token, 34.81 tokens per second) llama_print_timings: total time = 2801.08 ms {'output_text': '\nTask can be done by down a task into smaller subtasks, using simple prompting like "Steps for XYZ." or task-specific like "Write a story outline" for writing a novel.'}Now, let's try with a prompt specifically for LLaMA, which includes special tokens.# Promptrag_prompt_llama = hub.pull("rlm/rag-prompt-llama")rag_prompt_llama ChatPromptTemplate(input_variables=['question', 'context'], output_parser=None, partial_variables={}, messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['question', 'context'], output_parser=None, partial_variables={}, template="[INST]<<SYS>> You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.<</SYS>> \nQuestion: {question} \nContext: {context} \nAnswer: [/INST]", template_format='f-string', validate_template=True), additional_kwargs={})])# Chainchain = load_qa_chain(llm, chain_type="stuff", prompt=rag_prompt_llama)# Runchain({"input_documents": docs, "question": question}, return_only_outputs=True) Llama.generate: prefix-match hit Sure, I'd be happy to help! Based on the context, here are some to task: 1. LLM with simple prompting: This using a large model (LLM) with simple prompts like "Steps for XYZ" or "What are the subgoals for achieving XYZ?" to decompose tasks into smaller steps. 2. Task-specific: Another is to use task-specific, such as "Write a story outline" for writing a novel, to guide the of tasks. 3. Human inputs:, human inputs can be used to supplement the process, in cases where the task a high degree of creativity or expertise. As fores in long-term and task, one major is that LLMs to adjust plans when faced with errors, making them less robust to humans who learn from trial and error. 
llama_print_timings: load time = 11326.20 ms llama_print_timings: sample time = 144.81 ms / 207 runs ( 0.70 ms per token, 1429.47 tokens per second) llama_print_timings: prompt eval time = 1506.13 ms / 258 tokens ( 5.84 ms per token, 171.30 tokens per second) llama_print_timings: eval time = 6231.92 ms / 206 runs ( 30.25 ms per token, 33.06 tokens per second) llama_print_timings: total time = 8158.41 ms {'output_text': ' Sure, I\'d be happy to help! Based on the context, here are some to task:\n\n1. LLM with simple prompting: This using a large model (LLM) with simple prompts like "Steps for XYZ" or "What are the subgoals for achieving XYZ?" to decompose tasks into smaller steps.\n2. Task-specific: Another is to use task-specific, such as "Write a story outline" for writing a novel, to guide the of tasks.\n3. Human inputs:, human inputs can be used to supplement the process, in cases where the task a high degree of creativity or expertise.\n\nAs fores in long-term and task, one major is that LLMs to adjust plans when faced with errors, making them less robust to humans who learn from trial and error.'}RetrievalQAFor an even simpler flow, use RetrievalQA.This will use a QA default prompt (shown here) and will retrieve from the vectorDB.But, you can still pass in a prompt, as before, if desired.from langchain.chains import RetrievalQAqa_chain = RetrievalQA.from_chain_type( llm, retriever=vectorstore.as_retriever(), chain_type_kwargs={"prompt": rag_prompt_llama},)qa_chain({"query": question}) Llama.generate: prefix-match hit Sure! Based on the context, here's my answer to your: There are several to task,: 1. LLM-based with simple prompting, such as "Steps for XYZ" or "What are the subgoals for achieving XYZ?" 2. Task-specific, like "Write a story outline" for writing a novel. 3. Human inputs to guide the process. These can be used to decompose complex tasks into smaller, more manageable subtasks, which can help improve the and effectiveness of task. However, long-term and task can being due to the need to plan over a lengthy history and explore the space., LLMs may to adjust plans when faced with errors, making them less robust to human learners who can learn from trial and error. llama_print_timings: load time = 11326.20 ms llama_print_timings: sample time = 139.20 ms / 200 runs ( 0.70 ms per token, 1436.76 tokens per second) llama_print_timings: prompt eval time = 1532.26 ms / 258 tokens ( 5.94 ms per token, 168.38 tokens per second) llama_print_timings: eval time = 5977.62 ms / 199 runs ( 30.04 ms per token, 33.29 tokens per second) llama_print_timings: total time = 7916.21 ms {'query': 'What are the approaches to Task Decomposition?', 'result': ' Sure! Based on the context, here\'s my answer to your:\n\nThere are several to task,:\n\n1. LLM-based with simple prompting, such as "Steps for XYZ" or "What are the subgoals for achieving XYZ?"\n2. Task-specific, like "Write a story outline" for writing a novel.\n3. Human inputs to guide the process.\n\nThese can be used to decompose complex tasks into smaller, more manageable subtasks, which can help improve the and effectiveness of task. However, long-term and task can being due to the need to plan over a lengthy history and explore the space., LLMs may to adjust plans when faced with errors, making them less robust to human learners who can learn from trial and error.'}PreviousImprove document indexing with HyDENextDynamically select from multiple retrieversDocument LoadingModelLLaMA2GPT4AllLLMChainQA ChainRetrievalQA |
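To pull the pieces of this section together, here is a minimal end-to-end sketch of a fully local RAG pipeline: GPT4All embeddings for the index and a GPT4All model for generation, combined with RetrievalQA. The model path is a placeholder, not a real file; it assumes you have already downloaded a GPT4All binary, installed `gpt4all` and `chromadb`, and are on the legacy `langchain.*` import paths used above.

```python
# Hedged sketch: local embeddings + local LLM + RetrievalQA, end to end.
from langchain.chains import RetrievalQA
from langchain.document_loaders import WebBaseLoader
from langchain.embeddings import GPT4AllEmbeddings
from langchain.llms import GPT4All
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

# Load and split the same example blog post used above.
data = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/").load()
splits = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0).split_documents(data)

# Local embeddings feeding a local vector store.
vectorstore = Chroma.from_documents(documents=splits, embedding=GPT4AllEmbeddings())

# Local LLM; replace the placeholder path with your downloaded GPT4All model.
llm = GPT4All(model="/path/to/your/gpt4all-model.bin", max_tokens=2048)

# Retrieval-augmented QA chain over the local index (default prompt here;
# pass chain_type_kwargs={"prompt": ...} as above to customize it).
qa_chain = RetrievalQA.from_chain_type(llm, retriever=vectorstore.as_retriever())
print(qa_chain({"query": "What are the approaches to Task Decomposition?"}))
```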
99 | https://python.langchain.com/docs/use_cases/question_answering/how_to/multi_retrieval_qa_router | Question AnsweringHow toDynamically select from multiple retrieversDynamically select from multiple retrieversThis notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects which Retrieval system to use. Specifically we show how to use the MultiRetrievalQAChain to create a question-answering chain that selects the retrieval QA chain which is most relevant for a given question, and then answers the question using it.from langchain.chains.router import MultiRetrievalQAChainfrom langchain.llms import OpenAIfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.document_loaders import TextLoaderfrom langchain.vectorstores import FAISSsou_docs = TextLoader('../../state_of_the_union.txt').load_and_split()sou_retriever = FAISS.from_documents(sou_docs, OpenAIEmbeddings()).as_retriever()pg_docs = TextLoader('../../paul_graham_essay.txt').load_and_split()pg_retriever = FAISS.from_documents(pg_docs, OpenAIEmbeddings()).as_retriever()personal_texts = [ "I love apple pie", "My favorite color is fuchsia", "My dream is to become a professional dancer", "I broke my arm when I was 12", "My parents are from Peru",]personal_retriever = FAISS.from_texts(personal_texts, OpenAIEmbeddings()).as_retriever()retriever_infos = [ { "name": "state of the union", "description": "Good for answering questions about the 2023 State of the Union address", "retriever": sou_retriever }, { "name": "pg essay", "description": "Good for answering questions about Paul Graham's essay on his career", "retriever": pg_retriever }, { "name": "personal", "description": "Good for answering questions about me", "retriever": personal_retriever }]chain = MultiRetrievalQAChain.from_retrievers(OpenAI(), retriever_infos, verbose=True)print(chain.run("What did the president say about the economy?")) > Entering new MultiRetrievalQAChain chain... state of the union: {'query': 'What did the president say about the economy in the 2023 State of the Union address?'} > Finished chain. The president said that the economy was stronger than it had been a year prior, and that the American Rescue Plan helped create record job growth and fuel economic relief for millions of Americans. He also proposed a plan to fight inflation and lower costs for families, including cutting the cost of prescription drugs and energy, providing investments and tax credits for energy efficiency, and increasing access to child care and Pre-K.print(chain.run("What is something Paul Graham regrets about his work?")) > Entering new MultiRetrievalQAChain chain... pg essay: {'query': 'What is something Paul Graham regrets about his work?'} > Finished chain. Paul Graham regrets that he did not take a vacation after selling his company, instead of immediately starting to paint.print(chain.run("What is my background?")) > Entering new MultiRetrievalQAChain chain... personal: {'query': 'What is my background?'} > Finished chain. Your background is Peruvian.print(chain.run("What year was the Internet created in?")) > Entering new MultiRetrievalQAChain chain... None: {'query': 'What year was the Internet created in?'} > Finished chain. The Internet was created in 1969 through a project called ARPANET, which was funded by the United States Department of Defense. 
However, the World Wide Web, which is often confused with the Internet, was created in 1989 by British computer scientist Tim Berners-Lee.PreviousUse local LLMsNextMultiple Retrieval Sources |
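When no route matches (the "None" case above), the chain falls back to a default. A hedged sketch of pointing that fallback at one of your own retrievers; the default_retriever argument is an assumption to verify against your MultiRetrievalQAChain version:

```python
# Sketch: route unmatched questions to an explicit fallback retriever
chain = MultiRetrievalQAChain.from_retrievers(
    OpenAI(),
    retriever_infos,
    default_retriever=personal_retriever,  # assumed parameter name; used when no route fits
    verbose=True,
)
print(chain.run("What year was the Internet created in?"))
```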
100 | https://python.langchain.com/docs/use_cases/question_answering/how_to/multiple_retrieval | Question AnsweringHow toMultiple Retrieval SourcesOn this pageMultiple Retrieval SourcesOftentimes you may want to do retrieval over multiple sources. These can be different vectorstores (where one contains information about topic X and the other contains info about topic Y). They could also be completely different databases altogether!A key part is doing as much of the retrieval in parallel as possible. This will keep the latency as low as possible. Luckily, LangChain Expression Language supports parallelism out of the box.Let's take a look at an example where we do retrieval over a SQL database and a vectorstore.from langchain.chat_models import ChatOpenAISet up SQL queryfrom langchain.utilities import SQLDatabasefrom langchain.chains import create_sql_query_chaindb = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")query_chain = create_sql_query_chain(ChatOpenAI(temperature=0), db)Set up vectorstorefrom langchain.indexes import VectorstoreIndexCreatorfrom langchain.schema.document import Documentindex_creator = VectorstoreIndexCreator()index = index_creator.from_documents([Document(page_content="Foo")])retriever = index.vectorstore.as_retriever()Combinefrom langchain.prompts import ChatPromptTemplatesystem_message = """Use the information from the below two sources to answer any questions.Source 1: a SQL database about employee data<source1>{source1}</source1>Source 2: a text database of random information<source2>{source2}</source2>"""prompt = ChatPromptTemplate.from_messages([("system", system_message), ("human", "{question}")])full_chain = { "source1": {"question": lambda x: x["question"]} | query_chain | db.run, "source2": (lambda x: x['question']) | retriever, "question": lambda x: x['question'],} | prompt | ChatOpenAI()response = full_chain.invoke({"question":"How many Employees are there"})print(response) Number of requested results 4 is greater than number of elements in index 1, updating n_results = 1 content='There are 8 employees.' additional_kwargs={} example=FalsePreviousDynamically select from multiple retrieversNextCite sourcesSet up SQL querySet up vectorstoreCombine |
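The lambdas above can also be written with operator.itemgetter, the usual way to pull a key out of the input in LangChain Expression Language. An equivalent sketch of the same chain:

```python
from operator import itemgetter

# Same parallel retrieval chain, expressed with itemgetter instead of lambdas
full_chain = {
    "source1": {"question": itemgetter("question")} | query_chain | db.run,
    "source2": itemgetter("question") | retriever,
    "question": itemgetter("question"),
} | prompt | ChatOpenAI()

full_chain.invoke({"question": "How many Employees are there"})
```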
101 | https://python.langchain.com/docs/use_cases/question_answering/how_to/qa_citations | Question AnsweringHow toCite sourcesCite sourcesThis notebook shows how to use OpenAI functions to extract citations from text.from langchain.chains import create_citation_fuzzy_match_chainfrom langchain.chat_models import ChatOpenAIquestion = "What did the author do during college?"context = """My name is Jason Liu, and I grew up in Toronto Canada but I was born in China.I went to an arts highschool but in university I studied Computational Mathematics and physics. As part of coop I worked at many companies including Stitchfix, Facebook.I also started the Data Science club at the University of Waterloo and I was the president of the club for 2 years."""llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")chain = create_citation_fuzzy_match_chain(llm)result = chain.run(question=question, context=context)print(result) question='What did the author do during college?' answer=[FactWithEvidence(fact='The author studied Computational Mathematics and physics in university.', substring_quote=['in university I studied Computational Mathematics and physics']), FactWithEvidence(fact='The author started the Data Science club at the University of Waterloo and was the president of the club for 2 years.', substring_quote=['started the Data Science club at the University of Waterloo', 'president of the club for 2 years'])]def highlight(text, span): return ( "..." + text[span[0] - 20 : span[0]] + "*" + "\033[91m" + text[span[0] : span[1]] + "\033[0m" + "*" + text[span[1] : span[1] + 20] + "..." )for fact in result.answer: print("Statement:", fact.fact) for span in fact.get_spans(context): print("Citation:", highlight(context, span)) print() Statement: The author studied Computational Mathematics and physics in university. Citation: ...arts highschool but *in university I studied Computational Mathematics and physics*. As part of coop I... Statement: The author started the Data Science club at the University of Waterloo and was the president of the club for 2 years. Citation: ...x, Facebook. I also *started the Data Science club at the University of Waterloo* and I was the presi... Citation: ...erloo and I was the *president of the club for 2 years*. ... PreviousMultiple Retrieval SourcesNextQA over in-memory documents |
102 | https://python.langchain.com/docs/use_cases/question_answering/how_to/question_answering | Question AnsweringHow toQA over in-memory documentsOn this pageQA over in-memory documentsHere we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our Document chains.Prepare DataFirst we prepare the data. For this example we do similarity search over a vector database, but these documents could be fetched in any manner (the point of this notebook is to highlight what to do AFTER you fetch the documents).from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Chromafrom langchain.docstore.document import Documentfrom langchain.prompts import PromptTemplatefrom langchain.indexes.vectorstore import VectorstoreIndexCreatorwith open("../../state_of_the_union.txt") as f: state_of_the_union = f.read()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_text(state_of_the_union)embeddings = OpenAIEmbeddings()docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))]).as_retriever() Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient.query = "What did the president say about Justice Breyer"docs = docsearch.get_relevant_documents(query)from langchain.chains.question_answering import load_qa_chainfrom langchain.llms import OpenAIQuickstartIf you just want to get started as quickly as possible, this is the recommended way to do it:chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")query = "What did the president say about Justice Breyer"chain.run(input_documents=docs, question=query) ' The president said that Justice Breyer has dedicated his life to serve the country and thanked him for his service.'If you want more control and understanding over what is happening, please see the information below.The stuff ChainThis section shows results of using the stuff Chain to do question answering.chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")query = "What did the president say about Justice Breyer"chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' The president said that Justice Breyer has dedicated his life to serve the country and thanked him for his service.'}Custom PromptsYou can also use your own prompts with this chain. In this example, we will respond in Italian.prompt_template = """Use the following pieces of context to answer the question at the end. 
If you don't know the answer, just say that you don't know, don't try to make up an answer.{context}Question: {question}Answer in Italian:"""PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "question"])chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff", prompt=PROMPT)chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese e ha ricevuto una vasta gamma di supporto.'}The map_reduce ChainThis sections shows results of using the map_reduce Chain to do question answering.chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce")query = "What did the president say about Justice Breyer"chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' The president said that Justice Breyer is an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court, and thanked him for his service.'}Intermediate StepsWe can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_map_steps variable.chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce", return_map_steps=True)chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': [' "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service."', ' A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.', ' None', ' None'], 'output_text': ' The president said that Justice Breyer is an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court, and thanked him for his service.'}Custom PromptsYou can also use your own prompts with this chain. In this example, we will respond in Italian.question_prompt_template = """Use the following portion of a long document to see if any of the text is relevant to answer the question. Return any relevant text translated into italian.{context}Question: {question}Relevant text, if any, in Italian:"""QUESTION_PROMPT = PromptTemplate( template=question_prompt_template, input_variables=["context", "question"])combine_prompt_template = """Given the following extracted parts of a long document and a question, create a final answer italian. If you don't know the answer, just say that you don't know. 
Don't try to make up an answer.QUESTION: {question}========={summaries}=========Answer in Italian:"""COMBINE_PROMPT = PromptTemplate( template=combine_prompt_template, input_variables=["summaries", "question"])chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce", return_map_steps=True, question_prompt=QUESTION_PROMPT, combine_prompt=COMBINE_PROMPT)chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ["\nStasera vorrei onorare qualcuno che ha dedicato la sua vita a servire questo paese: il giustizia Stephen Breyer - un veterano dell'esercito, uno studioso costituzionale e un giustizia in uscita della Corte Suprema degli Stati Uniti. Giustizia Breyer, grazie per il tuo servizio.", '\nNessun testo pertinente.', ' Non ha detto nulla riguardo a Justice Breyer.', " Non c'è testo pertinente."], 'output_text': ' Non ha detto nulla riguardo a Justice Breyer.'}Batch SizeWhen using the map_reduce chain, one thing to keep in mind is the batch size you are using during the map step. If this is too high, it could cause rate limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies for LLMs with this parameter. Below is an example of doing so:llm = OpenAI(batch_size=5, temperature=0)The refine ChainThis sections shows results of using the refine Chain to do question answering.chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine")query = "What did the president say about Justice Breyer"chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which he said would be the most sweeping investment to rebuild America in history and would help the country compete for the jobs of the 21st Century.'}Intermediate StepsWe can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_refine_steps variable.chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine", return_refine_steps=True)chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country and his legacy of excellence.', '\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice.', '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans.', '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. 
He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which is the most sweeping investment to rebuild America in history.'], 'output_text': '\n\nThe president said that he wanted to honor Justice Breyer for his dedication to serving the country, his legacy of excellence, and his commitment to advancing liberty and justice, as well as for his support of the Equality Act and his commitment to protecting the rights of LGBTQ+ Americans. He also praised Justice Breyer for his role in helping to pass the Bipartisan Infrastructure Law, which is the most sweeping investment to rebuild America in history.'}Custom PromptsYou can also use your own prompts with this chain. In this example, we will respond in Italian.refine_prompt_template = ( "The original question is as follows: {question}\n" "We have provided an existing answer: {existing_answer}\n" "We have the opportunity to refine the existing answer" "(only if needed) with some more context below.\n" "------------\n" "{context_str}\n" "------------\n" "Given the new context, refine the original answer to better " "answer the question. " "If the context isn't useful, return the original answer. Reply in Italian.")refine_prompt = PromptTemplate( input_variables=["question", "existing_answer", "context_str"], template=refine_prompt_template,)initial_qa_template = ( "Context information is below. \n" "---------------------\n" "{context_str}" "\n---------------------\n" "Given the context information and not prior knowledge, " "answer the question: {question}\nYour answer should be in Italian.\n")initial_qa_prompt = PromptTemplate( input_variables=["context_str", "question"], template=initial_qa_template)chain = load_qa_chain(OpenAI(temperature=0), chain_type="refine", return_refine_steps=True, question_prompt=initial_qa_prompt, refine_prompt=refine_prompt)chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha reso omaggio al suo servizio.', "\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione.", "\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei.", "\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. 
Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei e per investire in America, educare gli americani, far crescere la forza lavoro e costruire l'economia dal"], 'output_text': "\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha reso omaggio al suo servizio e ha sostenuto la nomina di una top litigatrice in pratica privata, un ex difensore pubblico federale e una famiglia di insegnanti e agenti di polizia delle scuole pubbliche. Ha anche sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere, la risoluzione del sistema di immigrazione, la protezione degli americani LGBTQ+ e l'approvazione dell'Equality Act. Ha inoltre sottolineato l'importanza di lavorare insieme per sconfiggere l'epidemia di oppiacei e per investire in America, educare gli americani, far crescere la forza lavoro e costruire l'economia dal"}The map-rerank ChainThis sections shows results of using the map-rerank Chain to do question answering with sources.chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_rerank", return_intermediate_steps=True)query = "What did the president say about Justice Breyer"results = chain({"input_documents": docs, "question": query}, return_only_outputs=True)results["output_text"] ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.'results["intermediate_steps"] [{'answer': ' The President thanked Justice Breyer for his service and honored him for dedicating his life to serve the country.', 'score': '100'}, {'answer': ' This document does not answer the question', 'score': '0'}, {'answer': ' This document does not answer the question', 'score': '0'}, {'answer': ' This document does not answer the question', 'score': '0'}]Custom PromptsYou can also use your own prompts with this chain. In this example, we will respond in Italian.from langchain.output_parsers import RegexParseroutput_parser = RegexParser( regex=r"(.*?)\nScore: (.*)", output_keys=["answer", "score"],)prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.In addition to giving an answer, also return a score of how fully it answered the user's question. 
This should be in the following format:Question: [question here]Helpful Answer In Italian: [answer here]Score: [score between 0 and 100]Begin!Context:---------{context}---------Question: {question}Helpful Answer In Italian:"""PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "question"], output_parser=output_parser,)chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_rerank", return_intermediate_steps=True, prompt=PROMPT)query = "What did the president say about Justice Breyer"chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': [{'answer': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese.', 'score': '100'}, {'answer': ' Il presidente non ha detto nulla sulla Giustizia Breyer.', 'score': '100'}, {'answer': ' Non so.', 'score': '0'}, {'answer': ' Non so.', 'score': '0'}], 'output_text': ' Il presidente ha detto che Justice Breyer ha dedicato la sua vita a servire questo paese.'}Document QA with sourcesWe can also perform document QA and return the sources that were used to answer the question. To do this we'll just need to make sure each document has a "source" key in the metadata, and we'll use the load_qa_with_sources helper to construct our chain:docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))])query = "What did the president say about Justice Breyer"docs = docsearch.similarity_search(query)from langchain.chains.qa_with_sources import load_qa_with_sources_chainchain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")query = "What did the president say about Justice Breyer"chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}PreviousCite sourcesNextRetrieve from vector stores directlyDocument QA with sources |
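If you would rather have the chain perform the retrieval step as well, RetrievalQAWithSourcesChain wraps the same sources pattern behind a retriever. A small sketch, not part of the original notebook:

```python
from langchain.chains import RetrievalQAWithSourcesChain

qa_with_sources = RetrievalQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0),
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
)
# Returns a dict with "answer" and "sources" keys
qa_with_sources({"question": "What did the president say about Justice Breyer"})
```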
103 | https://python.langchain.com/docs/use_cases/question_answering/how_to/vector_db_text_generation | Question AnsweringHow toRetrieve from vector stores directlyOn this pageRetrieve from vector stores directlyThis notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation.Prepare DataFirst, we prepare the data. For this example, we fetch a documentation site that consists of markdown files hosted on Github and split them into small enough Documents.from langchain.llms import OpenAIfrom langchain.docstore.document import Documentimport requestsfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Chromafrom langchain.text_splitter import CharacterTextSplitterfrom langchain.prompts import PromptTemplateimport pathlibimport subprocessimport tempfiledef get_github_docs(repo_owner, repo_name): with tempfile.TemporaryDirectory() as d: subprocess.check_call( f"git clone --depth 1 https://github.com/{repo_owner}/{repo_name}.git .", cwd=d, shell=True, ) git_sha = ( subprocess.check_output("git rev-parse HEAD", shell=True, cwd=d) .decode("utf-8") .strip() ) repo_path = pathlib.Path(d) markdown_files = list(repo_path.glob("*/*.md")) + list( repo_path.glob("*/*.mdx") ) for markdown_file in markdown_files: with open(markdown_file, "r") as f: relative_path = markdown_file.relative_to(repo_path) github_url = f"https://github.com/{repo_owner}/{repo_name}/blob/{git_sha}/{relative_path}" yield Document(page_content=f.read(), metadata={"source": github_url})sources = get_github_docs("yirenlu92", "deno-manual-forked")source_chunks = []splitter = CharacterTextSplitter(separator=" ", chunk_size=1024, chunk_overlap=0)for source in sources: for chunk in splitter.split_text(source.page_content): source_chunks.append(Document(page_content=chunk, metadata=source.metadata)) Cloning into '.'...Set Up Vector DBNow that we have the documentation content in chunks, let's put all this information in a vector index for easy retrieval.search_index = Chroma.from_documents(source_chunks, OpenAIEmbeddings())Set Up LLM Chain with Custom PromptNext, let's set up a simple LLM chain but give it a custom prompt for blog post generation. Note that the custom prompt is parameterized and takes two inputs: context, which will be the documents fetched from the vector search, and topic, which is given by the user.from langchain.chains import LLMChainprompt_template = """Use the context below to write a 400 word blog post about the topic below: Context: {context} Topic: {topic} Blog post:"""PROMPT = PromptTemplate(template=prompt_template, input_variables=["context", "topic"])llm = OpenAI(temperature=0)chain = LLMChain(llm=llm, prompt=PROMPT)Generate TextFinally, we write a function to apply our inputs to the chain. The function takes an input parameter topic. 
We find the documents in the vector index that correspond to that topic, and use them as additional context in our simple LLM chain.def generate_blog_post(topic): docs = search_index.similarity_search(topic, k=4) inputs = [{"context": doc.page_content, "topic": topic} for doc in docs] print(chain.apply(inputs))generate_blog_post("environment variables") [{'text': '\n\nEnvironment variables are a great way to store and access sensitive information in your Deno applications. Deno offers built-in support for environment variables with `Deno.env`, and you can also use a `.env` file to store and access environment variables.\n\nUsing `Deno.env` is simple. It has getter and setter methods, so you can easily set and retrieve environment variables. For example, you can set the `FIREBASE_API_KEY` and `FIREBASE_AUTH_DOMAIN` environment variables like this:\n\n```ts\nDeno.env.set("FIREBASE_API_KEY", "examplekey123");\nDeno.env.set("FIREBASE_AUTH_DOMAIN", "firebasedomain.com");\n\nconsole.log(Deno.env.get("FIREBASE_API_KEY")); // examplekey123\nconsole.log(Deno.env.get("FIREBASE_AUTH_DOMAIN")); // firebasedomain.com\n```\n\nYou can also store environment variables in a `.env` file. This is a great'}, {'text': '\n\nEnvironment variables are a powerful tool for managing configuration settings in a program. They allow us to set values that can be used by the program, without having to hard-code them into the code. This makes it easier to change settings without having to modify the code.\n\nIn Deno, environment variables can be set in a few different ways. The most common way is to use the `VAR=value` syntax. This will set the environment variable `VAR` to the value `value`. This can be used to set any number of environment variables before running a command. For example, if we wanted to set the environment variable `VAR` to `hello` before running a Deno command, we could do so like this:\n\n```\nVAR=hello deno run main.ts\n```\n\nThis will set the environment variable `VAR` to `hello` before running the command. We can then access this variable in our code using the `Deno.env.get()` function. For example, if we ran the following command:\n\n```\nVAR=hello && deno eval "console.log(\'Deno: \' + Deno.env.get(\'VAR'}, {'text': '\n\nEnvironment variables are a powerful tool for developers, allowing them to store and access data without having to hard-code it into their applications. In Deno, you can access environment variables using the `Deno.env.get()` function.\n\nFor example, if you wanted to access the `HOME` environment variable, you could do so like this:\n\n```js\n// env.js\nDeno.env.get("HOME");\n```\n\nWhen running this code, you\'ll need to grant the Deno process access to environment variables. This can be done by passing the `--allow-env` flag to the `deno run` command. You can also specify which environment variables you want to grant access to, like this:\n\n```shell\n# Allow access to only the HOME env var\ndeno run --allow-env=HOME env.js\n```\n\nIt\'s important to note that environment variables are case insensitive on Windows, so Deno also matches them case insensitively (on Windows only).\n\nAnother thing to be aware of when using environment variables is subprocess permissions. Subprocesses are powerful and can access system resources regardless of the permissions you granted to the Den'}, {'text': '\n\nEnvironment variables are an important part of any programming language, and Deno is no exception. 
Deno is a secure JavaScript and TypeScript runtime built on the V8 JavaScript engine, and it recently added support for environment variables. This feature was added in Deno version 1.6.0, and it is now available for use in Deno applications.\n\nEnvironment variables are used to store information that can be used by programs. They are typically used to store configuration information, such as the location of a database or the name of a user. In Deno, environment variables are stored in the `Deno.env` object. This object is similar to the `process.env` object in Node.js, and it allows you to access and set environment variables.\n\nThe `Deno.env` object is a read-only object, meaning that you cannot directly modify the environment variables. Instead, you must use the `Deno.env.set()` function to set environment variables. This function takes two arguments: the name of the environment variable and the value to set it to. For example, if you wanted to set the `FOO` environment variable to `bar`, you would use the following code:\n\n```'}]PreviousQA over in-memory documentsNextStructure answers with OpenAI functionsPrepare DataSet Up Vector DBSet Up LLM Chain with Custom PromptGenerate Text |
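Note that chain.apply produces one generation per retrieved chunk. If you would rather produce a single post drawn from all of the retrieved context, you can join the chunks into one prompt instead; a sketch under the same setup (mind the model's context window):

```python
def generate_blog_post_single(topic):
    # Retrieve the most relevant chunks, then stuff them into one combined context
    docs = search_index.similarity_search(topic, k=4)
    combined_context = "\n\n".join(doc.page_content for doc in docs)
    return chain.run(context=combined_context, topic=topic)

print(generate_blog_post_single("environment variables"))
```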
104 | https://python.langchain.com/docs/use_cases/question_answering/integrations/openai_functions_retrieval_qa | Question AnsweringIntegration-specificStructure answers with OpenAI functionsOn this pageStructure answers with OpenAI functionsOpenAI functions allows for structuring of response output. This is often useful in question answering when you want to not only get the final answer but also supporting evidence, citations, etc.In this notebook we show how to use an LLM chain which uses OpenAI functions as part of an overall retrieval pipeline.from langchain.chains import RetrievalQAfrom langchain.document_loaders import TextLoaderfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.vectorstores import Chromaloader = TextLoader("../../state_of_the_union.txt", encoding="utf-8")documents = loader.load()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_documents(documents)for i, text in enumerate(texts): text.metadata["source"] = f"{i}-pl"embeddings = OpenAIEmbeddings()docsearch = Chroma.from_documents(texts, embeddings)from langchain.chat_models import ChatOpenAIfrom langchain.chains.combine_documents.stuff import StuffDocumentsChainfrom langchain.prompts import PromptTemplatefrom langchain.chains import create_qa_with_sources_chainllm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")qa_chain = create_qa_with_sources_chain(llm)doc_prompt = PromptTemplate( template="Content: {page_content}\nSource: {source}", input_variables=["page_content", "source"],)final_qa_chain = StuffDocumentsChain( llm_chain=qa_chain, document_variable_name="context", document_prompt=doc_prompt,)retrieval_qa = RetrievalQA( retriever=docsearch.as_retriever(), combine_documents_chain=final_qa_chain)query = "What did the president say about russia"retrieval_qa.run(query) '{\n "answer": "The President expressed strong condemnation of Russia\'s actions in Ukraine and announced measures to isolate Russia and provide support to Ukraine. He stated that Russia\'s invasion of Ukraine will have long-term consequences for Russia and emphasized the commitment to defend NATO countries. The President also mentioned taking robust action through sanctions and releasing oil reserves to mitigate gas prices. Overall, the President conveyed a message of solidarity with Ukraine and determination to protect American interests.",\n "sources": ["0-pl", "4-pl", "5-pl", "6-pl"]\n}'Using PydanticIf we want to, we can set the chain to return in Pydantic. Note that if downstream chains consume the output of this chain - including memory - they will generally expect it to be in string format, so you should only use this chain when it is the final chain.qa_chain_pydantic = create_qa_with_sources_chain(llm, output_parser="pydantic")final_qa_chain_pydantic = StuffDocumentsChain( llm_chain=qa_chain_pydantic, document_variable_name="context", document_prompt=doc_prompt,)retrieval_qa_pydantic = RetrievalQA( retriever=docsearch.as_retriever(), combine_documents_chain=final_qa_chain_pydantic)retrieval_qa_pydantic.run(query) AnswerWithSources(answer="The President expressed strong condemnation of Russia's actions in Ukraine and announced measures to isolate Russia and provide support to Ukraine. He stated that Russia's invasion of Ukraine will have long-term consequences for Russia and emphasized the commitment to defend NATO countries. 
The President also mentioned taking robust action through sanctions and releasing oil reserves to mitigate gas prices. Overall, the President conveyed a message of solidarity with Ukraine and determination to protect American interests.", sources=['0-pl', '4-pl', '5-pl', '6-pl'])Using in ConversationalRetrievalChainWe can also show what it's like to use this in the ConversationalRetrievalChain. Note that because this chain involves memory, we will NOT use the Pydantic return type.from langchain.chains import ConversationalRetrievalChainfrom langchain.memory import ConversationBufferMemoryfrom langchain.chains import LLMChainmemory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\Make sure to avoid using any unclear pronouns.Chat History:{chat_history}Follow Up Input: {question}Standalone question:"""CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)condense_question_chain = LLMChain( llm=llm, prompt=CONDENSE_QUESTION_PROMPT,)qa = ConversationalRetrievalChain( question_generator=condense_question_chain, retriever=docsearch.as_retriever(), memory=memory, combine_docs_chain=final_qa_chain,)query = "What did the president say about Ketanji Brown Jackson"result = qa({"question": query})result {'question': 'What did the president say about Ketanji Brown Jackson', 'chat_history': [HumanMessage(content='What did the president say about Ketanji Brown Jackson', additional_kwargs={}, example=False), AIMessage(content='{\n "answer": "The President nominated Ketanji Brown Jackson as a Circuit Court of Appeals Judge and praised her as one of the nation\'s top legal minds who will continue Justice Breyer\'s legacy of excellence.",\n "sources": ["31-pl"]\n}', additional_kwargs={}, example=False)], 'answer': '{\n "answer": "The President nominated Ketanji Brown Jackson as a Circuit Court of Appeals Judge and praised her as one of the nation\'s top legal minds who will continue Justice Breyer\'s legacy of excellence.",\n "sources": ["31-pl"]\n}'}query = "what did he say about her predecessor?"result = qa({"question": query})result {'question': 'what did he say about her predecessor?', 'chat_history': [HumanMessage(content='What did the president say about Ketanji Brown Jackson', additional_kwargs={}, example=False), AIMessage(content='{\n "answer": "The President nominated Ketanji Brown Jackson as a Circuit Court of Appeals Judge and praised her as one of the nation\'s top legal minds who will continue Justice Breyer\'s legacy of excellence.",\n "sources": ["31-pl"]\n}', additional_kwargs={}, example=False), HumanMessage(content='what did he say about her predecessor?', additional_kwargs={}, example=False), AIMessage(content='{\n "answer": "The President honored Justice Stephen Breyer for his service as an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court.",\n "sources": ["31-pl"]\n}', additional_kwargs={}, example=False)], 'answer': '{\n "answer": "The President honored Justice Stephen Breyer for his service as an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court.",\n "sources": ["31-pl"]\n}'}Using your own output schemaWe can change the outputs of our chain by passing in our own schema. 
The values and descriptions of this schema will inform the function we pass to the OpenAI API, meaning it won't just affect how we parse outputs but will also change the OpenAI output itself. For example we can add a countries_referenced parameter to our schema and describe what we want this parameter to mean, and that'll cause the OpenAI output to include a description of a speaker in the response.In addition to the previous example, we can also add a custom prompt to the chain. This will allow you to add additional context to the response, which can be useful for question answering.from typing import Listfrom pydantic import BaseModel, Fieldfrom langchain.chains.openai_functions import create_qa_with_structure_chainfrom langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplatefrom langchain.schema import SystemMessage, HumanMessageclass CustomResponseSchema(BaseModel): """An answer to the question being asked, with sources.""" answer: str = Field(..., description="Answer to the question that was asked") countries_referenced: List[str] = Field( ..., description="All of the countries mentioned in the sources" ) sources: List[str] = Field( ..., description="List of sources used to answer the question" )prompt_messages = [ SystemMessage( content=( "You are a world class algorithm to answer " "questions in a specific format." ) ), HumanMessage(content="Answer question using the following context"), HumanMessagePromptTemplate.from_template("{context}"), HumanMessagePromptTemplate.from_template("Question: {question}"), HumanMessage( content="Tips: Make sure to answer in the correct format. Return all of the countries mentioned in the sources in uppercase characters." ),]chain_prompt = ChatPromptTemplate(messages=prompt_messages)qa_chain_pydantic = create_qa_with_structure_chain( llm, CustomResponseSchema, output_parser="pydantic", prompt=chain_prompt)final_qa_chain_pydantic = StuffDocumentsChain( llm_chain=qa_chain_pydantic, document_variable_name="context", document_prompt=doc_prompt,)retrieval_qa_pydantic = RetrievalQA( retriever=docsearch.as_retriever(), combine_documents_chain=final_qa_chain_pydantic)query = "What did he say about russia"retrieval_qa_pydantic.run(query) CustomResponseSchema(answer="He announced that American airspace will be closed off to all Russian flights, further isolating Russia and adding an additional squeeze on their economy. The Ruble has lost 30% of its value and the Russian stock market has lost 40% of its value. He also mentioned that Putin alone is to blame for Russia's reeling economy. The United States and its allies are providing support to Ukraine in their fight for freedom, including military, economic, and humanitarian assistance. The United States is giving more than $1 billion in direct assistance to Ukraine. He made it clear that American forces are not engaged and will not engage in conflict with Russian forces in Ukraine, but they are deployed to defend NATO allies in case Putin decides to keep moving west. He also mentioned that Putin's attack on Ukraine was premeditated and unprovoked, and that the West and NATO responded by building a coalition of freedom-loving nations to confront Putin. The free world is holding Putin accountable through powerful economic sanctions, cutting off Russia's largest banks from the international financial system, and preventing Russia's central bank from defending the Russian Ruble. The U.S. 
Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs.", countries_referenced=['AMERICA', 'RUSSIA', 'UKRAINE'], sources=['4-pl', '5-pl', '2-pl', '3-pl'])PreviousRetrieve from vector stores directlyNextQA using Activeloop's DeepLakeUsing PydanticUsing in ConversationalRetrievalChainUsing your own output schema |
105 | https://python.langchain.com/docs/use_cases/question_answering/integrations/semantic-search-over-chat | Question AnsweringIntegration-specificQA using Activeloop's DeepLakeOn this pageQA using Activeloop's DeepLakeIn this tutorial, we are going to use Langchain + Activeloop's Deep Lake with GPT4 to semantically search and ask questions over a group chat.View a working demo here1. Install required packagespython3 -m pip install --upgrade langchain 'deeplake[enterprise]' openai tiktoken2. Add API keysimport osimport getpassfrom langchain.document_loaders import PyPDFLoader, TextLoaderfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.text_splitter import ( RecursiveCharacterTextSplitter, CharacterTextSplitter,)from langchain.vectorstores import DeepLakefrom langchain.chains import ConversationalRetrievalChain, RetrievalQAfrom langchain.chat_models import ChatOpenAIfrom langchain.llms import OpenAIos.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")activeloop_token = getpass.getpass("Activeloop Token:")os.environ["ACTIVELOOP_TOKEN"] = activeloop_tokenos.environ["ACTIVELOOP_ORG"] = getpass.getpass("Activeloop Org:")org_id = os.environ["ACTIVELOOP_ORG"]embeddings = OpenAIEmbeddings()dataset_path = "hub://" + org_id + "/data"2. Create sample dataYou can generate a sample group chat conversation using ChatGPT with this prompt:Generate a group chat conversation with three friends talking about their day, referencing real places and fictional names. Make it funny and as detailed as possible.I've already generated such a chat in messages.txt. We can keep it simple and use this for our example.3. Ingest chat embeddingsWe load the messages in the text file, chunk and upload to ActiveLoop Vector store.with open("messages.txt") as f: state_of_the_union = f.read()text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)pages = text_splitter.split_text(state_of_the_union)text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)texts = text_splitter.create_documents(pages)print(texts)dataset_path = "hub://" + org_id + "/data"embeddings = OpenAIEmbeddings()db = DeepLake.from_documents( texts, embeddings, dataset_path=dataset_path, overwrite=True) [Document(page_content='Participants:\n\nJerry: Loves movies and is a bit of a klutz.\nSamantha: Enthusiastic about food and always trying new restaurants.\nBarry: A nature lover, but always manages to get lost.\nJerry: Hey, guys! You won\'t believe what happened to me at the Times Square AMC theater. I tripped over my own feet and spilled popcorn everywhere! 🍿💥\n\nSamantha: LOL, that\'s so you, Jerry! Was the floor buttery enough for you to ice skate on after that? 😂\n\nBarry: Sounds like a regular Tuesday for you, Jerry. Meanwhile, I tried to find that new hiking trail in Central Park. You know, the one that\'s supposed to be impossible to get lost on? Well, guess what...\n\nJerry: You found a hidden treasure?\n\nBarry: No, I got lost. AGAIN. 🧭🙄\n\nSamantha: Barry, you\'d get lost in your own backyard! But speaking of treasures, I found this new sushi place in Little Tokyo. "Samantha\'s Sushi Symphony" it\'s called. Coincidence? I think not!\n\nJerry: Maybe they named it after your ability to eat your body weight in sushi. 🍣', metadata={}), Document(page_content='Barry: How do you even FIND all these places, Samantha?\n\nSamantha: Simple, I don\'t rely on Barry\'s navigation skills. 
😉 But seriously, the wasabi there was hotter than Jerry\'s love for Marvel movies!\n\nJerry: Hey, nothing wrong with a little superhero action. By the way, did you guys see the new "Captain Crunch: Breakfast Avenger" trailer?\n\nSamantha: Captain Crunch? Are you sure you didn\'t get that from one of your Saturday morning cereal binges?\n\nBarry: Yeah, and did he defeat his arch-enemy, General Mills? 😆\n\nJerry: Ha-ha, very funny. Anyway, that sushi place sounds awesome, Samantha. Next time, let\'s go together, and maybe Barry can guide us... if we want a city-wide tour first.\n\nBarry: As long as we\'re not hiking, I\'ll get us there... eventually. 😅\n\nSamantha: It\'s a date! But Jerry, you\'re banned from carrying any food items.\n\nJerry: Deal! Just promise me no wasabi challenges. I don\'t want to end up like the time I tried Sriracha ice cream.', metadata={}), Document(page_content="Barry: Wait, what happened with Sriracha ice cream?\n\nJerry: Let's just say it was a hot situation. Literally. 🔥\n\nSamantha: 🤣 I still have the video!\n\nJerry: Samantha, if you value our friendship, that video will never see the light of day.\n\nSamantha: No promises, Jerry. No promises. 🤐😈\n\nBarry: I foresee a fun weekend ahead! 🎉", metadata={})] Your Deep Lake dataset has been successfully created! \ Dataset(path='hub://adilkhan/data', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (3, 1536) float32 None id text (3, 1) str None metadata json (3, 1) str None text text (3, 1) str None Optional: You can also use Deep Lake's Managed Tensor Database as a hosting service and run queries there. In order to do so, it is necessary to specify the runtime parameter as {'tensor_db': True} during the creation of the vector store. This configuration enables the execution of queries on the Managed Tensor Database, rather than on the client side. It should be noted that this functionality is not applicable to datasets stored locally or in-memory. In the event that a vector store has already been created outside of the Managed Tensor Database, it is possible to transfer it to the Managed Tensor Database by following the prescribed steps.# with open("messages.txt") as f:# state_of_the_union = f.read()# text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)# pages = text_splitter.split_text(state_of_the_union)# text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)# texts = text_splitter.create_documents(pages)# print(texts)# dataset_path = "hub://" + org + "/data"# embeddings = OpenAIEmbeddings()# db = DeepLake.from_documents(# texts, embeddings, dataset_path=dataset_path, overwrite=True, runtime={"tensor_db": True}# )4. Ask questionsNow we can ask a question and get an answer back with a semantic search:db = DeepLake(dataset_path=dataset_path, read_only=True, embedding=embeddings)retriever = db.as_retriever()retriever.search_kwargs["distance_metric"] = "cos"retriever.search_kwargs["k"] = 4qa = RetrievalQA.from_chain_type( llm=OpenAI(), chain_type="stuff", retriever=retriever, return_source_documents=False)# What was the restaurant the group was talking about called?query = input("Enter query:")# The Hungry Lobsterans = qa({"query": query})print(ans)PreviousStructure answers with OpenAI functionsNextSQL1. Install required packages2. Add API keys2. Create sample data3. Ingest chat embeddings4. Ask questions |
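The ConversationalRetrievalChain imported above can be swapped in for RetrievalQA when you want follow-up questions to be interpreted against the chat history. A minimal sketch, not part of the original tutorial:

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
conv_qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0),
    retriever=retriever,
    memory=memory,
)
conv_qa({"question": "What was the restaurant the group was talking about called?"})
conv_qa({"question": "Who suggested going there?"})  # follow-up resolved via chat history
```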
106 | https://python.langchain.com/docs/use_cases/qa_structured/sql | QA over structured dataSQLOn this pageSQLUse caseEnterprise data is often stored in SQL databases.LLMs make it possible to interact with SQL databases using natural language.LangChain offers SQL Chains and Agents to build and run SQL queries based on natural language prompts. These are compatible with any SQL dialect supported by SQLAlchemy (e.g., MySQL, PostgreSQL, Oracle SQL, Databricks, SQLite).They enable use cases such as:Generating queries that will be run based on natural language questionsCreating chatbots that can answer questions based on database dataBuilding custom dashboards based on insights a user wants to analyzeOverviewLangChain provides tools to interact with SQL Databases:Build SQL queries based on natural language user questionsQuery a SQL database using chains for query creation and executionInteract with a SQL database using agents for robust and flexible querying QuickstartFirst, get required packages and set environment variables:pip install langchain langchain-experimental openai# Set env var OPENAI_API_KEY or load from a .env file# import dotenv# dotenv.load_dotenv()The below example will use a SQLite connection with the Chinook database. Follow installation steps to create Chinook.db in the same directory as this notebook:Save this file to the directory as Chinook_Sqlite.sqlRun sqlite3 Chinook.dbRun .read Chinook_Sqlite.sqlTest SELECT * FROM Artist LIMIT 10;Now, Chinook.db is in our directory.Let's create a SQLDatabaseChain to create and execute SQL queries.from langchain.utilities import SQLDatabasefrom langchain.llms import OpenAIfrom langchain_experimental.sql import SQLDatabaseChaindb = SQLDatabase.from_uri("sqlite:///Chinook.db")llm = OpenAI(temperature=0, verbose=True)db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)db_chain.run("How many employees are there?") > Entering new SQLDatabaseChain chain... How many employees are there? SQLQuery:SELECT COUNT(*) FROM "Employee"; SQLResult: [(8,)] Answer:There are 8 employees. > Finished chain. 'There are 8 employees.'Note that this both creates and executes the query. In the following sections, we will cover the 3 different use cases mentioned in the overview.Go deeperYou can load tabular data from sources other than SQL databases.
For example:Loading a CSV fileLoading a Pandas DataFrame
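A short sketch of those two loaders; the file name and column name are placeholders, not part of this guide:

```python
import pandas as pd
from langchain.document_loaders import CSVLoader, DataFrameLoader

# One Document per CSV row
csv_docs = CSVLoader(file_path="employees.csv").load()

# One Document per DataFrame row, using the named column as the page content
df = pd.read_csv("employees.csv")
df_docs = DataFrameLoader(df, page_content_column="Name").load()
```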
Here you can check the full list of Document LoadersCase 1: Text-to-SQL queryfrom langchain.chat_models import ChatOpenAIfrom langchain.chains import create_sql_query_chainLet's create the chain that will build the SQL Query:chain = create_sql_query_chain(ChatOpenAI(temperature=0), db)response = chain.invoke({"question":"How many employees are there"})print(response) SELECT COUNT(*) FROM EmployeeAfter building the SQL query based on a user question, we can execute the query:db.run(response) '[(8,)]'As we can see, the SQL Query Builder chain only created the query, and we handled the query execution separately.Go deeperLooking under the hoodWe can look at the LangSmith trace to unpack this:Some papers have reported good performance when prompting with:A CREATE TABLE description for each table, which includes column names, their types, etcFollowed by three example rows in a SELECT statementcreate_sql_query_chain adopts this best practice (see more in this blog).
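Since the chain above only writes the query, generation and execution can also be composed into a single runnable when that separation isn't needed; a minimal sketch relying on LCEL coercing plain callables:

```python
# Compose query generation with execution: the generated SQL string flows straight into db.run
write_and_run = chain | db.run
write_and_run.invoke({"question": "How many employees are there"})
# '[(8,)]'
```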
ImprovementsThe query builder can be improved in several ways, such as (but not limited to):Customizing the database description to your specific use caseHardcoding a few examples of questions and their corresponding SQL query in the promptUsing a vector database to include dynamic examples that are relevant to the specific user questionAll these examples involve customizing the chain's prompt. For example, we can include a few examples in our prompt like so:from langchain.prompts import PromptTemplateTEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.Use the following format:Question: "Question here"SQLQuery: "SQL Query to run"SQLResult: "Result of the SQLQuery"Answer: "Final answer here"Only use the following tables:{table_info}.Some examples of SQL queries that correspond to questions are:{few_shot_examples}Question: {input}"""CUSTOM_PROMPT = PromptTemplate( input_variables=["input", "few_shot_examples", "table_info", "dialect"], template=TEMPLATE)We can also access this prompt in the LangChain prompt hub.This will work with your LangSmith API key.from langchain import hubCUSTOM_PROMPT = hub.pull("rlm/text-to-sql")Case 2: Text-to-SQL query and executionWe can use SQLDatabaseChain from langchain_experimental to create and run SQL queries.from langchain.llms import OpenAIfrom langchain_experimental.sql import SQLDatabaseChainllm = OpenAI(temperature=0, verbose=True)db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)db_chain.run("How many employees are there?") > Entering new SQLDatabaseChain chain... How many employees are there? SQLQuery:SELECT COUNT(*) FROM "Employee"; SQLResult: [(8,)] Answer:There are 8 employees. > Finished chain. 'There are 8 employees.'As we can see, we get the same result as the previous case.Here, the chain also handles the query execution and provides a final answer based on the user question and the query result.Be careful while using this approach as it is susceptible to SQL Injection:The chain is executing queries that are created by an LLM, and weren't validated; e.g., records may be created, modified or deleted unintentionally. This is why we see the SQLDatabaseChain is inside langchain_experimental.Go deeperLooking under the hoodWe can use the LangSmith trace to see what is happening under the hood:As discussed above, first we create the query:text: ' SELECT COUNT(*) FROM "Employee";'Then, it executes the query and passes the results to an LLM for synthesis.ImprovementsThe performance of the SQLDatabaseChain can be enhanced in several ways:Adding sample rowsSpecifying custom table informationUsing the Query Checker to self-correct invalid SQL, using parameter use_query_checker=TrueCustomizing the LLM Prompt to include specific instructions or relevant information, using parameter prompt=CUSTOM_PROMPTGetting intermediate steps to access the SQL statement as well as the final result, using parameter return_intermediate_steps=TrueLimiting the number of rows a query will return, using parameter top_k=5You might find SQLDatabaseSequentialChain
useful for cases in which the number of tables in the database is large.This Sequential Chain handles the process of:Determining which tables to use based on the user questionCalling the normal SQL database chain using only relevant tablesAdding Sample RowsProviding sample data can help the LLM construct correct queries when the data format is not obvious. For example, we can tell LLM that artists are saved with their full names by providing two rows from the Track table.db = SQLDatabase.from_uri( "sqlite:///Chinook.db", include_tables=['Track'], # we include only one table to save tokens in the prompt :) sample_rows_in_table_info=2)The sample rows are added to the prompt after each corresponding table's column information.We can use db.table_info and check which sample rows are included:print(db.table_info) CREATE TABLE "Track" ( "TrackId" INTEGER NOT NULL, "Name" NVARCHAR(200) NOT NULL, "AlbumId" INTEGER, "MediaTypeId" INTEGER NOT NULL, "GenreId" INTEGER, "Composer" NVARCHAR(220), "Milliseconds" INTEGER NOT NULL, "Bytes" INTEGER, "UnitPrice" NUMERIC(10, 2) NOT NULL, PRIMARY KEY ("TrackId"), FOREIGN KEY("MediaTypeId") REFERENCES "MediaType" ("MediaTypeId"), FOREIGN KEY("GenreId") REFERENCES "Genre" ("GenreId"), FOREIGN KEY("AlbumId") REFERENCES "Album" ("AlbumId") ) /* 2 rows from Track table: TrackId Name AlbumId MediaTypeId GenreId Composer Milliseconds Bytes UnitPrice 1 For Those About To Rock (We Salute You) 1 1 1 Angus Young, Malcolm Young, Brian Johnson 343719 11170334 0.99 2 Balls to the Wall 2 2 1 None 342562 5510424 0.99 */Case 3: SQL agentsLangChain has an SQL Agent which provides a more flexible way of interacting with SQL Databases than the SQLDatabaseChain.The main advantages of using the SQL Agent are:It can answer questions based on the databases' schema as well as on the databases' content (like describing a specific table)It can recover from errors by running a generated query, catching the traceback and regenerating it correctlyTo initialize the agent, we use create_sql_agent function. This agent contains the SQLDatabaseToolkit which contains tools to: Create and execute queriesCheck query syntaxRetrieve table descriptions... and morefrom langchain.agents import create_sql_agentfrom langchain.agents.agent_toolkits import SQLDatabaseToolkit# from langchain.agents import AgentExecutorfrom langchain.agents.agent_types import AgentTypedb = SQLDatabase.from_uri("sqlite:///Chinook.db")llm = OpenAI(temperature=0, verbose=True)agent_executor = create_sql_agent( llm=OpenAI(temperature=0), toolkit=SQLDatabaseToolkit(db=db, llm=OpenAI(temperature=0)), verbose=True, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,)Agent task example #1 - Running queriesagent_executor.run( "List the total sales per country. Which country's customers spent the most?") > Entering new AgentExecutor chain... Action: sql_db_list_tables Action Input: Observation: Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track Thought: I should query the schema of the Invoice and Customer tables. 
Action: sql_db_schema Action Input: Invoice, Customer Observation: CREATE TABLE "Customer" ( "CustomerId" INTEGER NOT NULL, "FirstName" NVARCHAR(40) NOT NULL, "LastName" NVARCHAR(20) NOT NULL, "Company" NVARCHAR(80), "Address" NVARCHAR(70), "City" NVARCHAR(40), "State" NVARCHAR(40), "Country" NVARCHAR(40), "PostalCode" NVARCHAR(10), "Phone" NVARCHAR(24), "Fax" NVARCHAR(24), "Email" NVARCHAR(60) NOT NULL, "SupportRepId" INTEGER, PRIMARY KEY ("CustomerId"), FOREIGN KEY("SupportRepId") REFERENCES "Employee" ("EmployeeId") ) /* 3 rows from Customer table: CustomerId FirstName LastName Company Address City State Country PostalCode Phone Fax Email SupportRepId 1 Luís Gonçalves Embraer - Empresa Brasileira de Aeronáutica S.A. Av. Brigadeiro Faria Lima, 2170 São José dos Campos SP Brazil 12227-000 +55 (12) 3923-5555 +55 (12) 3923-5566 luisg@embraer.com.br 3 2 Leonie Köhler None Theodor-Heuss-Straße 34 Stuttgart None Germany 70174 +49 0711 2842222 None leonekohler@surfeu.de 5 3 François Tremblay None 1498 rue Bélanger Montréal QC Canada H2G 1A7 +1 (514) 721-4711 None ftremblay@gmail.com 3 */ CREATE TABLE "Invoice" ( "InvoiceId" INTEGER NOT NULL, "CustomerId" INTEGER NOT NULL, "InvoiceDate" DATETIME NOT NULL, "BillingAddress" NVARCHAR(70), "BillingCity" NVARCHAR(40), "BillingState" NVARCHAR(40), "BillingCountry" NVARCHAR(40), "BillingPostalCode" NVARCHAR(10), "Total" NUMERIC(10, 2) NOT NULL, PRIMARY KEY ("InvoiceId"), FOREIGN KEY("CustomerId") REFERENCES "Customer" ("CustomerId") ) /* 3 rows from Invoice table: InvoiceId CustomerId InvoiceDate BillingAddress BillingCity BillingState BillingCountry BillingPostalCode Total 1 2 2009-01-01 00:00:00 Theodor-Heuss-Straße 34 Stuttgart None Germany 70174 1.98 2 4 2009-01-02 00:00:00 Ullevålsveien 14 Oslo None Norway 0171 3.96 3 8 2009-01-03 00:00:00 Grétrystraat 63 Brussels None Belgium 1000 5.94 */ Thought: I should query the total sales per country. Action: sql_db_query Action Input: SELECT Country, SUM(Total) AS TotalSales FROM Invoice INNER JOIN Customer ON Invoice.CustomerId = Customer.CustomerId GROUP BY Country ORDER BY TotalSales DESC LIMIT 10 Observation: [('USA', 523.0600000000003), ('Canada', 303.9599999999999), ('France', 195.09999999999994), ('Brazil', 190.09999999999997), ('Germany', 156.48), ('United Kingdom', 112.85999999999999), ('Czech Republic', 90.24000000000001), ('Portugal', 77.23999999999998), ('India', 75.25999999999999), ('Chile', 46.62)] Thought: I now know the final answer Final Answer: The country with the highest total sales is the USA, with a total of $523.06. > Finished chain. 
'The country with the highest total sales is the USA, with a total of $523.06.'Looking at the LangSmith trace, we can see:The agent is using a ReAct style promptFirst, it will look at the tables: Action: sql_db_list_tables using tool sql_db_list_tablesGiven the tables as an observation, it thinks and then determinates the next action:Observation: Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, TrackThought: I should query the schema of the Invoice and Customer tables.Action: sql_db_schemaAction Input: Invoice, CustomerIt then formulates the query using the schema from tool sql_db_schemaThought: I should query the total sales per country.Action: sql_db_queryAction Input: SELECT Country, SUM(Total) AS TotalSales FROM Invoice INNER JOIN Customer ON Invoice.CustomerId = Customer.CustomerId GROUP BY Country ORDER BY TotalSales DESC LIMIT 10It finally executes the generated query using tool sql_db_queryAgent task example #2 - Describing a Tableagent_executor.run("Describe the playlisttrack table") > Entering new AgentExecutor chain... Action: sql_db_list_tables Action Input: Observation: Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track Thought: The PlaylistTrack table is the most relevant to the question. Action: sql_db_schema Action Input: PlaylistTrack Observation: CREATE TABLE "PlaylistTrack" ( "PlaylistId" INTEGER NOT NULL, "TrackId" INTEGER NOT NULL, PRIMARY KEY ("PlaylistId", "TrackId"), FOREIGN KEY("TrackId") REFERENCES "Track" ("TrackId"), FOREIGN KEY("PlaylistId") REFERENCES "Playlist" ("PlaylistId") ) /* 3 rows from PlaylistTrack table: PlaylistId TrackId 1 3402 1 3389 1 3390 */ Thought: I now know the final answer Final Answer: The PlaylistTrack table contains two columns, PlaylistId and TrackId, which are both integers and form a primary key. It also has two foreign keys, one to the Track table and one to the Playlist table. > Finished chain. 'The PlaylistTrack table contains two columns, PlaylistId and TrackId, which are both integers and form a primary key. It also has two foreign keys, one to the Track table and one to the Playlist table.'Extending the SQL ToolkitAlthough the out-of-the-box SQL Toolkit contains the necessary tools to start working on a database, it is often the case that some extra tools may be useful for extending the agent's capabilities. This is particularly useful when trying to use domain specific knowledge in the solution, in order to improve its overall performance.Some examples include:Including dynamic few shot examplesFinding misspellings in proper nouns to use as column filtersWe can create separate tools which tackle these specific use cases and include them as a complement to the standard SQL Toolkit. 
Let's see how to include these two custom tools.Including dynamic few-shot examplesIn order to include dynamic few-shot examples, we need a custom Retriever Tool that handles the vector database in order to retrieve the examples that are semantically similar to the user’s question.Let's start by creating a dictionary with some examples: # few_shots = {'List all artists.': 'SELECT * FROM artists;',# "Find all albums for the artist 'AC/DC'.": "SELECT * FROM albums WHERE ArtistId = (SELECT ArtistId FROM artists WHERE Name = 'AC/DC');",# "List all tracks in the 'Rock' genre.": "SELECT * FROM tracks WHERE GenreId = (SELECT GenreId FROM genres WHERE Name = 'Rock');",# 'Find the total duration of all tracks.': 'SELECT SUM(Milliseconds) FROM tracks;',# 'List all customers from Canada.': "SELECT * FROM customers WHERE Country = 'Canada';",# 'How many tracks are there in the album with ID 5?': 'SELECT COUNT(*) FROM tracks WHERE AlbumId = 5;',# 'Find the total number of invoices.': 'SELECT COUNT(*) FROM invoices;',# 'List all tracks that are longer than 5 minutes.': 'SELECT * FROM tracks WHERE Milliseconds > 300000;',# 'Who are the top 5 customers by total purchase?': 'SELECT CustomerId, SUM(Total) AS TotalPurchase FROM invoices GROUP BY CustomerId ORDER BY TotalPurchase DESC LIMIT 5;',# 'Which albums are from the year 2000?': "SELECT * FROM albums WHERE strftime('%Y', ReleaseDate) = '2000';",# 'How many employees are there': 'SELECT COUNT(*) FROM "employee"'# }We can then create a retriever using the list of questions, assigning the target SQL query as metadata:from langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import FAISSfrom langchain.schema import Documentembeddings = OpenAIEmbeddings()few_shot_docs = [Document(page_content=question, metadata={'sql_query': few_shots[question]}) for question in few_shots.keys()]vector_db = FAISS.from_documents(few_shot_docs, embeddings)retriever = vector_db.as_retriever()Now we can create our own custom tool and append it as a new tool in the create_sql_agent function:from langchain.agents.agent_toolkits import create_retriever_tooltool_description = """This tool will help you understand similar examples to adapt them to the user question.Input to this tool should be the user question."""retriever_tool = create_retriever_tool( retriever, name='sql_get_similar_examples', description=tool_description )custom_tool_list = [retriever_tool]Now we can create the agent, adjusting the standard SQL Agent suffix to consider our use case. 
Although the most straightforward way to handle this would be to include it just in the tool description, this is often not enough and we need to specify it in the agent prompt using the suffix argument in the constructor.from langchain.agents import create_sql_agent, AgentTypefrom langchain.agents.agent_toolkits import SQLDatabaseToolkitfrom langchain.utilities import SQLDatabasefrom langchain.chat_models import ChatOpenAIdb = SQLDatabase.from_uri("sqlite:///Chinook.db")llm = ChatOpenAI(model_name='gpt-4',temperature=0)toolkit = SQLDatabaseToolkit(db=db, llm=llm)custom_suffix = """I should first get the similar examples I know.If the examples are enough to construct the query, I can build it.Otherwise, I can then look at the tables in the database to see what I can query.Then I should query the schema of the most relevant tables"""agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True, agent_type=AgentType.OPENAI_FUNCTIONS, extra_tools=custom_tool_list, suffix=custom_suffix )Let's try it out:agent.run("How many employees do we have?") > Entering new AgentExecutor chain... Invoking: `sql_get_similar_examples` with `How many employees do we have?` [Document(page_content='How many employees are there', metadata={'sql_query': 'SELECT COUNT(*) FROM "employee"'}), Document(page_content='Find the total number of invoices.', metadata={'sql_query': 'SELECT COUNT(*) FROM invoices;'})] Invoking: `sql_db_query_checker` with `SELECT COUNT(*) FROM employee` responded: {content} SELECT COUNT(*) FROM employee Invoking: `sql_db_query` with `SELECT COUNT(*) FROM employee` [(8,)]We have 8 employees. > Finished chain. 'We have 8 employees.'As we can see, the agent first used the sql_get_similar_examples tool in order to retrieve similar examples. As the question was very similar to other few shot examples, the agent didn't need to use any other tool from the standard Toolkit, thus saving time and tokens.Finding and correcting misspellings for proper nounsIn order to filter columns that contain proper nouns such as addresses, song names or artists, we first need to double-check the spelling in order to filter the data correctly. We can achieve this by creating a vector store using all the distinct proper nouns that exist in the database. We can then have the agent query that vector store each time the user includes a proper noun in their question, to find the correct spelling for that word. 
In this way, the agent can make sure it understands which entity the user is referring to before building the target query.Let's follow a similar approach to the few shots, but without metadata: just embedding the proper nouns and then querying to get the most similar one to the misspelled user question.First we need the unique values for each entity we want, for which we define a function that parses the result into a list of elements:import astimport redef run_query_save_results(db, query): res = db.run(query) res = [el for sub in ast.literal_eval(res) for el in sub if el] res = [re.sub(r'\b\d+\b', '', string).strip() for string in res] return resartists = run_query_save_results(db, "SELECT Name FROM Artist")albums = run_query_save_results(db, "SELECT Title FROM Album")Now we can proceed with creating the custom retreiver tool and the final agent:from langchain.agents.agent_toolkits import create_retriever_toolfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import FAISStexts = (artists + albums)embeddings = OpenAIEmbeddings()vector_db = FAISS.from_texts(texts, embeddings)retriever = vector_db.as_retriever()retriever_tool = create_retriever_tool( retriever, name='name_search', description='use to learn how a piece of data is actually written, can be from names, surnames addresses etc' )custom_tool_list = [retriever_tool]from langchain.agents import create_sql_agent, AgentTypefrom langchain.agents.agent_toolkits import SQLDatabaseToolkitfrom langchain.utilities import SQLDatabasefrom langchain.chat_models import ChatOpenAI# db = SQLDatabase.from_uri("sqlite:///Chinook.db")llm = ChatOpenAI(model_name='gpt-4', temperature=0)toolkit = SQLDatabaseToolkit(db=db, llm=llm)custom_suffix = """If a user asks for me to filter based on proper nouns, I should first check the spelling using the name_search tool.Otherwise, I can then look at the tables in the database to see what I can query.Then I should query the schema of the most relevant tables"""agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True, agent_type=AgentType.OPENAI_FUNCTIONS, extra_tools=custom_tool_list, suffix=custom_suffix )Let's try it out:agent.run("How many albums does alis in pains have?") > Entering new AgentExecutor chain... 
Invoking: `name_search` with `alis in pains` [Document(page_content='House of Pain', metadata={}), Document(page_content='Alice In Chains', metadata={}), Document(page_content='Aisha Duo', metadata={}), Document(page_content='House Of Pain', metadata={})] Invoking: `sql_db_list_tables` with `` responded: {content} Album, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track Invoking: `sql_db_schema` with `Album, Artist` responded: {content} CREATE TABLE "Album" ( "AlbumId" INTEGER NOT NULL, "Title" NVARCHAR(160) NOT NULL, "ArtistId" INTEGER NOT NULL, PRIMARY KEY ("AlbumId"), FOREIGN KEY("ArtistId") REFERENCES "Artist" ("ArtistId") ) /* 3 rows from Album table: AlbumId Title ArtistId 1 For Those About To Rock We Salute You 1 2 Balls to the Wall 2 3 Restless and Wild 2 */ CREATE TABLE "Artist" ( "ArtistId" INTEGER NOT NULL, "Name" NVARCHAR(120), PRIMARY KEY ("ArtistId") ) /* 3 rows from Artist table: ArtistId Name 1 AC/DC 2 Accept 3 Aerosmith */ Invoking: `sql_db_query_checker` with `SELECT COUNT(*) FROM Album JOIN Artist ON Album.ArtistId = Artist.ArtistId WHERE Artist.Name = 'Alice In Chains'` responded: {content} SELECT COUNT(*) FROM Album JOIN Artist ON Album.ArtistId = Artist.ArtistId WHERE Artist.Name = 'Alice In Chains' Invoking: `sql_db_query` with `SELECT COUNT(*) FROM Album JOIN Artist ON Album.ArtistId = Artist.ArtistId WHERE Artist.Name = 'Alice In Chains'` [(1,)]Alice In Chains has 1 album in the database. > Finished chain. 'Alice In Chains has 1 album in the database.'As we can see, the agent used the name_search tool in order to check how to correctly query the database for this specific artist.Go deeperTo learn more about the SQL Agent and how it works we refer to the SQL Agent Toolkit documentation.You can also check Agents for other document types:Pandas AgentCSV AgentElastic SearchGoing beyond the above use-case, there are integrations with other databases.For example, we can interact with Elasticsearch analytics database. 
This chain builds search queries via the Elasticsearch DSL API (filters and aggregations).The Elasticsearch client must have permissions for index listing, mapping description and search queries.See here for instructions on how to run Elasticsearch locally.Make sure to install the Elasticsearch Python client before:pip install elasticsearchfrom elasticsearch import Elasticsearchfrom langchain.chat_models import ChatOpenAIfrom langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain# Initialize Elasticsearch python client.# See https://elasticsearch-py.readthedocs.io/en/v8.8.2/api.html#elasticsearch.ElasticsearchELASTIC_SEARCH_SERVER = "https://elastic:pass@localhost:9200"db = Elasticsearch(ELASTIC_SEARCH_SERVER)Uncomment the next cell to initially populate your db.# customers = [# {"firstname": "Jennifer", "lastname": "Walters"},# {"firstname": "Monica","lastname":"Rambeau"},# {"firstname": "Carol","lastname":"Danvers"},# {"firstname": "Wanda","lastname":"Maximoff"},# {"firstname": "Jennifer","lastname":"Takeda"},# ]# for i, customer in enumerate(customers):# db.create(index="customers", document=customer, id=i)llm = ChatOpenAI(model_name="gpt-4", temperature=0)chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, verbose=True)question = "What are the first names of all the customers?"chain.run(question)We can customize the prompt.from langchain.chains.elasticsearch_database.prompts import DEFAULT_DSL_TEMPLATEfrom langchain.prompts.prompt import PromptTemplatePROMPT_TEMPLATE = """Given an input question, create a syntactically correct Elasticsearch query to run. Unless the user specifies in their question a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.Unless told to do not query for all the columns from a specific index, only ask for a the few relevant columns given the question.Pay attention to use only the column names that you can see in the mapping description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which index. Return the query as valid json.Use the following format:Question: Question hereESQuery: Elasticsearch Query formatted as json"""PROMPT = PromptTemplate.from_template( PROMPT_TEMPLATE,)chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, query_prompt=PROMPT)PreviousQA using Activeloop's DeepLakeNextDatabricksUse caseOverviewQuickstartGo deeperCase 1: Text-to-SQL queryGo deeperCase 2: Text-to-SQL query and executionGo deeperCase 3: SQL agentsAgent task example #1 - Running queriesAgent task example #2 - Describing a TableExtending the SQL ToolkitGo deeperElastic Search |
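Returning to the SQLDatabaseChain improvements listed earlier on this page, here is a minimal sketch that combines several of those parameters in one chain. It assumes the same Chinook SQLite database used above; the question and values are illustrative only.

```python
from langchain.llms import OpenAI
from langchain.utilities import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
llm = OpenAI(temperature=0, verbose=True)

# Combine several of the improvement parameters discussed above.
db_chain = SQLDatabaseChain.from_llm(
    llm,
    db,
    verbose=True,
    use_query_checker=True,          # ask the LLM to self-correct invalid SQL
    return_intermediate_steps=True,  # expose the generated SQL and its raw result
    top_k=5,                         # limit the number of rows a query will return
)

result = db_chain("How many employees are there?")
print(result["intermediate_steps"])
```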
107 | https://python.langchain.com/docs/use_cases/qa_structured/integrations/databricks | QA over structured dataIntegration-specificDatabricksOn this pageDatabricksThis notebook covers how to connect to the Databricks runtimes and Databricks SQL using the SQLDatabase wrapper of LangChain.
It is broken into 3 parts: installation and setup, connecting to Databricks, and examples.Installation and Setuppip install databricks-sql-connectorConnecting to DatabricksYou can connect to Databricks runtimes and Databricks SQL using the SQLDatabase.from_databricks() method.SyntaxSQLDatabase.from_databricks( catalog: str, schema: str, host: Optional[str] = None, api_token: Optional[str] = None, warehouse_id: Optional[str] = None, cluster_id: Optional[str] = None, engine_args: Optional[dict] = None, **kwargs: Any)Required Parameterscatalog: The catalog name in the Databricks database.schema: The schema name in the catalog.Optional ParametersThe following parameters are optional. When executing the method in a Databricks notebook, you don't need to provide them in most cases.host: The Databricks workspace hostname, excluding the 'https://' part. Defaults to the 'DATABRICKS_HOST' environment variable or the current workspace if in a Databricks notebook.api_token: The Databricks personal access token for accessing the Databricks SQL warehouse or the cluster. Defaults to the 'DATABRICKS_TOKEN' environment variable, or a temporary one is generated if in a Databricks notebook.warehouse_id: The warehouse ID in Databricks SQL.cluster_id: The cluster ID in the Databricks Runtime. If running in a Databricks notebook and both 'warehouse_id' and 'cluster_id' are None, it uses the ID of the cluster the notebook is attached to.engine_args: The arguments to be used when connecting to Databricks.**kwargs: Additional keyword arguments for the SQLDatabase.from_uri method.Examples# Connecting to Databricks with SQLDatabase wrapperfrom langchain.utilities import SQLDatabasedb = SQLDatabase.from_databricks(catalog="samples", schema="nyctaxi")# Creating an OpenAI Chat LLM wrapperfrom langchain.chat_models import ChatOpenAIllm = ChatOpenAI(temperature=0, model_name="gpt-4")SQL Chain exampleThis example demonstrates the use of the SQL Chain for answering a question over a Databricks database.from langchain_experimental.sql import SQLDatabaseChaindb_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)db_chain.run( "What is the average duration of taxi rides that start between midnight and 6am?") > Entering new SQLDatabaseChain chain... What is the average duration of taxi rides that start between midnight and 6am? SQLQuery:SELECT AVG(UNIX_TIMESTAMP(tpep_dropoff_datetime) - UNIX_TIMESTAMP(tpep_pickup_datetime)) as avg_duration FROM trips WHERE HOUR(tpep_pickup_datetime) >= 0 AND HOUR(tpep_pickup_datetime) < 6 SQLResult: [(987.8122786304605,)] Answer:The average duration of taxi rides that start between midnight and 6am is 987.81 seconds. > Finished chain. 'The average duration of taxi rides that start between midnight and 6am is 987.81 seconds.'SQL Database Agent exampleThis example demonstrates the use of the SQL Database Agent for answering questions over a Databricks database.from langchain.agents import create_sql_agentfrom langchain.agents.agent_toolkits import SQLDatabaseToolkittoolkit = SQLDatabaseToolkit(db=db, llm=llm)agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)agent.run("What is the longest trip distance and how long did it take?") > Entering new AgentExecutor chain... Action: list_tables_sql_db Action Input: Observation: trips Thought:I should check the schema of the trips table to see if it has the necessary columns for trip distance and duration. 
Action: schema_sql_db Action Input: trips Observation: CREATE TABLE trips ( tpep_pickup_datetime TIMESTAMP, tpep_dropoff_datetime TIMESTAMP, trip_distance FLOAT, fare_amount FLOAT, pickup_zip INT, dropoff_zip INT ) USING DELTA /* 3 rows from trips table: tpep_pickup_datetime tpep_dropoff_datetime trip_distance fare_amount pickup_zip dropoff_zip 2016-02-14 16:52:13+00:00 2016-02-14 17:16:04+00:00 4.94 19.0 10282 10171 2016-02-04 18:44:19+00:00 2016-02-04 18:46:00+00:00 0.28 3.5 10110 10110 2016-02-17 17:13:57+00:00 2016-02-17 17:17:55+00:00 0.7 5.0 10103 10023 */ Thought:The trips table has the necessary columns for trip distance and duration. I will write a query to find the longest trip distance and its duration. Action: query_checker_sql_db Action Input: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1 Observation: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1 Thought:The query is correct. I will now execute it to find the longest trip distance and its duration. Action: query_sql_db Action Input: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1 Observation: [(30.6, '0 00:43:31.000000000')] Thought:I now know the final answer. Final Answer: The longest trip distance is 30.6 miles and it took 43 minutes and 31 seconds. > Finished chain. 'The longest trip distance is 30.6 miles and it took 43 minutes and 31 seconds.'PreviousSQLNextElasticsearchInstallation and SetupConnecting to DatabricksSyntaxRequired ParametersOptional ParametersExamplesSQL Chain exampleSQL Database Agent example |
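As a sketch of the optional connection parameters described above, connecting from outside a Databricks notebook might look like the following. The host, token and warehouse ID are placeholders, not real values.

```python
from langchain.utilities import SQLDatabase

db = SQLDatabase.from_databricks(
    catalog="samples",
    schema="nyctaxi",
    host="adb-1234567890123456.7.azuredatabricks.net",  # workspace hostname without 'https://'
    api_token="dapiXXXXXXXXXXXXXXXXXXXX",                # Databricks personal access token
    warehouse_id="1234567890abcdef",                     # or pass cluster_id instead
)

# Inspect the table definitions that will be passed to the LLM.
print(db.table_info)
```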
108 | https://python.langchain.com/docs/use_cases/qa_structured/integrations/elasticsearch | QA over structured dataIntegration-specificElasticsearchElasticsearchWe can use LLMs to interact with Elasticsearch analytics databases in natural language.This chain builds search queries via the Elasticsearch DSL API (filters and aggregations).The Elasticsearch client must have permissions for index listing, mapping description and search queries.See here for instructions on how to run Elasticsearch locally.pip install langchain langchain-experimental openai elasticsearch# Set env var OPENAI_API_KEY or load from a .env file# import dotenv# dotenv.load_dotenv()from elasticsearch import Elasticsearchfrom langchain.chat_models import ChatOpenAIfrom langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain# Initialize Elasticsearch python client.# See https://elasticsearch-py.readthedocs.io/en/v8.8.2/api.html#elasticsearch.ElasticsearchELASTIC_SEARCH_SERVER = "https://elastic:pass@localhost:9200"db = Elasticsearch(ELASTIC_SEARCH_SERVER)Uncomment the next cell to initially populate your db.# customers = [# {"firstname": "Jennifer", "lastname": "Walters"},# {"firstname": "Monica","lastname":"Rambeau"},# {"firstname": "Carol","lastname":"Danvers"},# {"firstname": "Wanda","lastname":"Maximoff"},# {"firstname": "Jennifer","lastname":"Takeda"},# ]# for i, customer in enumerate(customers):# db.create(index="customers", document=customer, id=i)llm = ChatOpenAI(model_name="gpt-4", temperature=0)chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, verbose=True)question = "What are the first names of all the customers?"chain.run(question)We can customize the prompt.from langchain.chains.elasticsearch_database.prompts import DEFAULT_DSL_TEMPLATEfrom langchain.prompts.prompt import PromptTemplatePROMPT_TEMPLATE = """Given an input question, create a syntactically correct Elasticsearch query to run. Unless the user specifies in their question a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.Unless told to do not query for all the columns from a specific index, only ask for a the few relevant columns given the question.Pay attention to use only the column names that you can see in the mapping description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which index. Return the query as valid json.Use the following format:Question: Question hereESQuery: Elasticsearch Query formatted as json"""PROMPT = PromptTemplate.from_template( PROMPT_TEMPLATE,)chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, query_prompt=PROMPT)PreviousDatabricksNextVector SQL Retriever with MyScale |
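A minimal end-to-end sketch, assuming a local Elasticsearch instance reachable with the placeholder credentials above, might look like this; it only uses calls already shown on this page.

```python
from elasticsearch import Elasticsearch
from langchain.chat_models import ChatOpenAI
from langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain

# Placeholder credentials; replace with your own cluster.
db = Elasticsearch("https://elastic:pass@localhost:9200")

# Index a couple of documents so the chain has something to query.
db.create(index="customers", document={"firstname": "Jennifer", "lastname": "Walters"}, id=0)
db.create(index="customers", document={"firstname": "Monica", "lastname": "Rambeau"}, id=1)

llm = ChatOpenAI(model_name="gpt-4", temperature=0)
chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, verbose=True)
print(chain.run("How many customers are there?"))
```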
109 | https://python.langchain.com/docs/use_cases/qa_structured/integrations/myscale_vector_sql | QA over structured dataIntegration-specificVector SQL Retriever with MyScaleOn this pageVector SQL Retriever with MyScaleMyScale is an integrated vector database. You can access your database in SQL and also from LangChain. MyScale can make use of various data types and functions for filters. It can boost your LLM app whether you are scaling up your data or expanding your system to a broader application.pip3 install clickhouse-sqlalchemy InstructorEmbedding sentence_transformers openai langchain-experimentalfrom os import environimport getpassfrom typing import Dict, Anyfrom langchain.llms import OpenAIfrom langchain.utilities import SQLDatabasefrom langchain.chains import LLMChainfrom langchain_experimental.sql.vector_sql import VectorSQLDatabaseChainfrom sqlalchemy import create_engine, Column, MetaDatafrom langchain.prompts import PromptTemplatefrom sqlalchemy import create_engineMYSCALE_HOST = "msc-1decbcc9.us-east-1.aws.staging.myscale.cloud"MYSCALE_PORT = 443MYSCALE_USER = "chatdata"MYSCALE_PASSWORD = "myscale_rocks"OPENAI_API_KEY = getpass.getpass("OpenAI API Key:")engine = create_engine( f"clickhouse://{MYSCALE_USER}:{MYSCALE_PASSWORD}@{MYSCALE_HOST}:{MYSCALE_PORT}/default?protocol=https")metadata = MetaData(bind=engine)environ["OPENAI_API_KEY"] = OPENAI_API_KEYfrom langchain.embeddings import HuggingFaceInstructEmbeddingsfrom langchain_experimental.sql.vector_sql import VectorSQLOutputParseroutput_parser = VectorSQLOutputParser.from_embeddings( model=HuggingFaceInstructEmbeddings( model_name="hkunlp/instructor-xl", model_kwargs={"device": "cpu"} ))from langchain.llms import OpenAIfrom langchain.callbacks import StdOutCallbackHandlerfrom langchain.utilities.sql_database import SQLDatabasefrom langchain_experimental.sql.prompt import MYSCALE_PROMPTfrom langchain_experimental.sql.vector_sql import VectorSQLDatabaseChainchain = VectorSQLDatabaseChain( llm_chain=LLMChain( llm=OpenAI(openai_api_key=OPENAI_API_KEY, temperature=0), prompt=MYSCALE_PROMPT, ), top_k=10, return_direct=True, sql_cmd_parser=output_parser, database=SQLDatabase(engine, None, metadata),)import pandas as pdpd.DataFrame( chain.run( "Please give me 10 papers to ask what is PageRank?", callbacks=[StdOutCallbackHandler()], ))SQL Database as Retrieverfrom langchain.chat_models import ChatOpenAIfrom langchain.chains.qa_with_sources.retrieval import RetrievalQAWithSourcesChainfrom langchain_experimental.sql.vector_sql import VectorSQLDatabaseChainfrom langchain_experimental.retrievers.vector_sql_database \ import VectorSQLDatabaseChainRetrieverfrom langchain_experimental.sql.prompt import MYSCALE_PROMPTfrom langchain_experimental.sql.vector_sql import VectorSQLRetrieveAllOutputParseroutput_parser_retrieve_all = VectorSQLRetrieveAllOutputParser.from_embeddings( output_parser.model)chain = VectorSQLDatabaseChain.from_llm( llm=OpenAI(openai_api_key=OPENAI_API_KEY, temperature=0), prompt=MYSCALE_PROMPT, top_k=10, return_direct=True, db=SQLDatabase(engine, None, metadata), sql_cmd_parser=output_parser_retrieve_all, native_format=True,)# You need all those keys to get docsretriever = VectorSQLDatabaseChainRetriever(sql_db_chain=chain, page_content_key="abstract")document_with_metadata_prompt = PromptTemplate( input_variables=["page_content", "id", "title", "authors", "pubdate", "categories"], template="Content:\n\tTitle: {title}\n\tAbstract: {page_content}\n\tAuthors: {authors}\n\tDate of Publication: 
{pubdate}\n\tCategories: {categories}\nSOURCE: {id}",)chain = RetrievalQAWithSourcesChain.from_chain_type( ChatOpenAI( model_name="gpt-3.5-turbo-16k", openai_api_key=OPENAI_API_KEY, temperature=0.6 ), retriever=retriever, chain_type="stuff", chain_type_kwargs={ "document_prompt": document_with_metadata_prompt, }, return_source_documents=True,)ans = chain("Please give me 10 papers to ask what is PageRank?", callbacks=[StdOutCallbackHandler()])print(ans["answer"])PreviousElasticsearchNextSQL Database ChainSQL Database as Retriever |
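Since VectorSQLDatabaseChainRetriever is a standard LangChain retriever, it can also be queried on its own. A small sketch, assuming the retriever object built above and assuming the table exposes a title field as used in the prompt template:

```python
# Query the retriever directly, without the RetrievalQAWithSourcesChain.
docs = retriever.get_relevant_documents("Please give me 10 papers to ask what is PageRank?")

for doc in docs[:3]:
    # 'title' is assumed to exist in the metadata, mirroring the prompt template above.
    print(doc.metadata.get("title"), "->", doc.page_content[:80])
```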
110 | https://python.langchain.com/docs/use_cases/qa_structured/integrations/sqlite | QA over structured dataIntegration-specificSQL Database ChainSQL Database ChainThis example demonstrates the use of the SQLDatabaseChain for answering questions over a SQL database.Under the hood, LangChain uses SQLAlchemy to connect to SQL databases. The SQLDatabaseChain can therefore be used with any SQL dialect supported by SQLAlchemy, such as MS SQL, MySQL, MariaDB, PostgreSQL, Oracle SQL, Databricks and SQLite. Please refer to the SQLAlchemy documentation for more information about requirements for connecting to your database. For example, a connection to MySQL requires an appropriate connector such as PyMySQL. A URI for a MySQL connection might look like: mysql+pymysql://user:pass@some_mysql_db_address/db_name.This demonstration uses SQLite and the example Chinook database.
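Before turning to the SQLite example, here is a sketch of what connection URIs for a few other dialects might look like; the credentials and hostnames are placeholders, and any SQLAlchemy-supported dialect follows the same pattern.

```python
from langchain.utilities import SQLDatabase

# Placeholder connection strings for a few SQLAlchemy dialects.
mysql_db = SQLDatabase.from_uri("mysql+pymysql://user:pass@some_mysql_db_address/db_name")
postgres_db = SQLDatabase.from_uri("postgresql+psycopg2://user:pass@localhost:5432/db_name")
sqlite_db = SQLDatabase.from_uri("sqlite:///Chinook.db")
```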
To set it up, follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file in a notebooks folder at the root of this repository.from langchain.llms import OpenAIfrom langchain.utilities import SQLDatabasefrom langchain_experimental.sql import SQLDatabaseChaindb = SQLDatabase.from_uri("sqlite:///../../../../notebooks/Chinook.db")llm = OpenAI(temperature=0, verbose=True)NOTE: For data-sensitive projects, you can specify return_direct=True in the SQLDatabaseChain initialization to directly return the output of the SQL query without any additional formatting. This prevents the LLM from seeing any contents within the database. Note, however, the LLM still has access to the database scheme (i.e. dialect, table and key names) by default.db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)db_chain.run("How many employees are there?") > Entering new SQLDatabaseChain chain... How many employees are there? SQLQuery: /workspace/langchain/langchain/sql_database.py:191: SAWarning: Dialect sqlite+pysqlite does *not* support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage. sample_rows = connection.execute(command) SELECT COUNT(*) FROM "Employee"; SQLResult: [(8,)] Answer:There are 8 employees. > Finished chain. 'There are 8 employees.'Use Query CheckerSometimes the Language Model generates invalid SQL with small mistakes that can be self-corrected using the same technique used by the SQL Database Agent to try and fix the SQL using the LLM. You can simply specify this option when creating the chain:db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, use_query_checker=True)db_chain.run("How many albums by Aerosmith?") > Entering new SQLDatabaseChain chain... How many albums by Aerosmith? SQLQuery:SELECT COUNT(*) FROM Album WHERE ArtistId = 3; SQLResult: [(1,)] Answer:There is 1 album by Aerosmith. > Finished chain. 'There is 1 album by Aerosmith.'Customize PromptYou can also customize the prompt that is used. Here is an example prompting it to understand that foobar is the same as the Employee tablefrom langchain.prompts.prompt import PromptTemplate_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.Use the following format:Question: "Question here"SQLQuery: "SQL Query to run"SQLResult: "Result of the SQLQuery"Answer: "Final answer here"Only use the following tables:{table_info}If someone asks for the table foobar, they really mean the employee table.Question: {input}"""PROMPT = PromptTemplate( input_variables=["input", "table_info", "dialect"], template=_DEFAULT_TEMPLATE)db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True)db_chain.run("How many employees are there in the foobar table?") > Entering new SQLDatabaseChain chain... How many employees are there in the foobar table? SQLQuery:SELECT COUNT(*) FROM Employee; SQLResult: [(8,)] Answer:There are 8 employees in the foobar table. > Finished chain. 'There are 8 employees in the foobar table.'Return Intermediate StepsYou can also return the intermediate steps of the SQLDatabaseChain. 
This allows you to access the SQL statement that was generated, as well as the result of running that against the SQL Database.db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, use_query_checker=True, return_intermediate_steps=True)result = db_chain("How many employees are there in the foobar table?")result["intermediate_steps"] > Entering new SQLDatabaseChain chain... How many employees are there in the foobar table? SQLQuery:SELECT COUNT(*) FROM Employee; SQLResult: [(8,)] Answer:There are 8 employees in the foobar table. > Finished chain. [{'input': 'How many employees are there in the foobar table?\nSQLQuery:SELECT COUNT(*) FROM Employee;\nSQLResult: [(8,)]\nAnswer:', 'top_k': '5', 'dialect': 'sqlite', 'table_info': '\nCREATE TABLE "Artist" (\n\t"ArtistId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(120), \n\tPRIMARY KEY ("ArtistId")\n)\n\n/*\n3 rows from Artist table:\nArtistId\tName\n1\tAC/DC\n2\tAccept\n3\tAerosmith\n*/\n\n\nCREATE TABLE "Employee" (\n\t"EmployeeId" INTEGER NOT NULL, \n\t"LastName" NVARCHAR(20) NOT NULL, \n\t"FirstName" NVARCHAR(20) NOT NULL, \n\t"Title" NVARCHAR(30), \n\t"ReportsTo" INTEGER, \n\t"BirthDate" DATETIME, \n\t"HireDate" DATETIME, \n\t"Address" NVARCHAR(70), \n\t"City" NVARCHAR(40), \n\t"State" NVARCHAR(40), \n\t"Country" NVARCHAR(40), \n\t"PostalCode" NVARCHAR(10), \n\t"Phone" NVARCHAR(24), \n\t"Fax" NVARCHAR(24), \n\t"Email" NVARCHAR(60), \n\tPRIMARY KEY ("EmployeeId"), \n\tFOREIGN KEY("ReportsTo") REFERENCES "Employee" ("EmployeeId")\n)\n\n/*\n3 rows from Employee table:\nEmployeeId\tLastName\tFirstName\tTitle\tReportsTo\tBirthDate\tHireDate\tAddress\tCity\tState\tCountry\tPostalCode\tPhone\tFax\tEmail\n1\tAdams\tAndrew\tGeneral Manager\tNone\t1962-02-18 00:00:00\t2002-08-14 00:00:00\t11120 Jasper Ave NW\tEdmonton\tAB\tCanada\tT5K 2N1\t+1 (780) 428-9482\t+1 (780) 428-3457\tandrew@chinookcorp.com\n2\tEdwards\tNancy\tSales Manager\t1\t1958-12-08 00:00:00\t2002-05-01 00:00:00\t825 8 Ave SW\tCalgary\tAB\tCanada\tT2P 2T3\t+1 (403) 262-3443\t+1 (403) 262-3322\tnancy@chinookcorp.com\n3\tPeacock\tJane\tSales Support Agent\t2\t1973-08-29 00:00:00\t2002-04-01 00:00:00\t1111 6 Ave SW\tCalgary\tAB\tCanada\tT2P 5M5\t+1 (403) 262-3443\t+1 (403) 262-6712\tjane@chinookcorp.com\n*/\n\n\nCREATE TABLE "Genre" (\n\t"GenreId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(120), \n\tPRIMARY KEY ("GenreId")\n)\n\n/*\n3 rows from Genre table:\nGenreId\tName\n1\tRock\n2\tJazz\n3\tMetal\n*/\n\n\nCREATE TABLE "MediaType" (\n\t"MediaTypeId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(120), \n\tPRIMARY KEY ("MediaTypeId")\n)\n\n/*\n3 rows from MediaType table:\nMediaTypeId\tName\n1\tMPEG audio file\n2\tProtected AAC audio file\n3\tProtected MPEG-4 video file\n*/\n\n\nCREATE TABLE "Playlist" (\n\t"PlaylistId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(120), \n\tPRIMARY KEY ("PlaylistId")\n)\n\n/*\n3 rows from Playlist table:\nPlaylistId\tName\n1\tMusic\n2\tMovies\n3\tTV Shows\n*/\n\n\nCREATE TABLE "Album" (\n\t"AlbumId" INTEGER NOT NULL, \n\t"Title" NVARCHAR(160) NOT NULL, \n\t"ArtistId" INTEGER NOT NULL, \n\tPRIMARY KEY ("AlbumId"), \n\tFOREIGN KEY("ArtistId") REFERENCES "Artist" ("ArtistId")\n)\n\n/*\n3 rows from Album table:\nAlbumId\tTitle\tArtistId\n1\tFor Those About To Rock We Salute You\t1\n2\tBalls to the Wall\t2\n3\tRestless and Wild\t2\n*/\n\n\nCREATE TABLE "Customer" (\n\t"CustomerId" INTEGER NOT NULL, \n\t"FirstName" NVARCHAR(40) NOT NULL, \n\t"LastName" NVARCHAR(20) NOT NULL, \n\t"Company" NVARCHAR(80), \n\t"Address" NVARCHAR(70), \n\t"City" NVARCHAR(40), \n\t"State" 
NVARCHAR(40), \n\t"Country" NVARCHAR(40), \n\t"PostalCode" NVARCHAR(10), \n\t"Phone" NVARCHAR(24), \n\t"Fax" NVARCHAR(24), \n\t"Email" NVARCHAR(60) NOT NULL, \n\t"SupportRepId" INTEGER, \n\tPRIMARY KEY ("CustomerId"), \n\tFOREIGN KEY("SupportRepId") REFERENCES "Employee" ("EmployeeId")\n)\n\n/*\n3 rows from Customer table:\nCustomerId\tFirstName\tLastName\tCompany\tAddress\tCity\tState\tCountry\tPostalCode\tPhone\tFax\tEmail\tSupportRepId\n1\tLuís\tGonçalves\tEmbraer - Empresa Brasileira de Aeronáutica S.A.\tAv. Brigadeiro Faria Lima, 2170\tSão José dos Campos\tSP\tBrazil\t12227-000\t+55 (12) 3923-5555\t+55 (12) 3923-5566\tluisg@embraer.com.br\t3\n2\tLeonie\tKöhler\tNone\tTheodor-Heuss-Straße 34\tStuttgart\tNone\tGermany\t70174\t+49 0711 2842222\tNone\tleonekohler@surfeu.de\t5\n3\tFrançois\tTremblay\tNone\t1498 rue Bélanger\tMontréal\tQC\tCanada\tH2G 1A7\t+1 (514) 721-4711\tNone\tftremblay@gmail.com\t3\n*/\n\n\nCREATE TABLE "Invoice" (\n\t"InvoiceId" INTEGER NOT NULL, \n\t"CustomerId" INTEGER NOT NULL, \n\t"InvoiceDate" DATETIME NOT NULL, \n\t"BillingAddress" NVARCHAR(70), \n\t"BillingCity" NVARCHAR(40), \n\t"BillingState" NVARCHAR(40), \n\t"BillingCountry" NVARCHAR(40), \n\t"BillingPostalCode" NVARCHAR(10), \n\t"Total" NUMERIC(10, 2) NOT NULL, \n\tPRIMARY KEY ("InvoiceId"), \n\tFOREIGN KEY("CustomerId") REFERENCES "Customer" ("CustomerId")\n)\n\n/*\n3 rows from Invoice table:\nInvoiceId\tCustomerId\tInvoiceDate\tBillingAddress\tBillingCity\tBillingState\tBillingCountry\tBillingPostalCode\tTotal\n1\t2\t2009-01-01 00:00:00\tTheodor-Heuss-Straße 34\tStuttgart\tNone\tGermany\t70174\t1.98\n2\t4\t2009-01-02 00:00:00\tUllevålsveien 14\tOslo\tNone\tNorway\t0171\t3.96\n3\t8\t2009-01-03 00:00:00\tGrétrystraat 63\tBrussels\tNone\tBelgium\t1000\t5.94\n*/\n\n\nCREATE TABLE "Track" (\n\t"TrackId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(200) NOT NULL, \n\t"AlbumId" INTEGER, \n\t"MediaTypeId" INTEGER NOT NULL, \n\t"GenreId" INTEGER, \n\t"Composer" NVARCHAR(220), \n\t"Milliseconds" INTEGER NOT NULL, \n\t"Bytes" INTEGER, \n\t"UnitPrice" NUMERIC(10, 2) NOT NULL, \n\tPRIMARY KEY ("TrackId"), \n\tFOREIGN KEY("MediaTypeId") REFERENCES "MediaType" ("MediaTypeId"), \n\tFOREIGN KEY("GenreId") REFERENCES "Genre" ("GenreId"), \n\tFOREIGN KEY("AlbumId") REFERENCES "Album" ("AlbumId")\n)\n\n/*\n3 rows from Track table:\nTrackId\tName\tAlbumId\tMediaTypeId\tGenreId\tComposer\tMilliseconds\tBytes\tUnitPrice\n1\tFor Those About To Rock (We Salute You)\t1\t1\t1\tAngus Young, Malcolm Young, Brian Johnson\t343719\t11170334\t0.99\n2\tBalls to the Wall\t2\t2\t1\tNone\t342562\t5510424\t0.99\n3\tFast As a Shark\t3\t2\t1\tF. Baltes, S. Kaufman, U. Dirkscneider & W. 
Hoffman\t230619\t3990994\t0.99\n*/\n\n\nCREATE TABLE "InvoiceLine" (\n\t"InvoiceLineId" INTEGER NOT NULL, \n\t"InvoiceId" INTEGER NOT NULL, \n\t"TrackId" INTEGER NOT NULL, \n\t"UnitPrice" NUMERIC(10, 2) NOT NULL, \n\t"Quantity" INTEGER NOT NULL, \n\tPRIMARY KEY ("InvoiceLineId"), \n\tFOREIGN KEY("TrackId") REFERENCES "Track" ("TrackId"), \n\tFOREIGN KEY("InvoiceId") REFERENCES "Invoice" ("InvoiceId")\n)\n\n/*\n3 rows from InvoiceLine table:\nInvoiceLineId\tInvoiceId\tTrackId\tUnitPrice\tQuantity\n1\t1\t2\t0.99\t1\n2\t1\t4\t0.99\t1\n3\t2\t6\t0.99\t1\n*/\n\n\nCREATE TABLE "PlaylistTrack" (\n\t"PlaylistId" INTEGER NOT NULL, \n\t"TrackId" INTEGER NOT NULL, \n\tPRIMARY KEY ("PlaylistId", "TrackId"), \n\tFOREIGN KEY("TrackId") REFERENCES "Track" ("TrackId"), \n\tFOREIGN KEY("PlaylistId") REFERENCES "Playlist" ("PlaylistId")\n)\n\n/*\n3 rows from PlaylistTrack table:\nPlaylistId\tTrackId\n1\t3402\n1\t3389\n1\t3390\n*/', 'stop': ['\nSQLResult:']}, 'SELECT COUNT(*) FROM Employee;', {'query': 'SELECT COUNT(*) FROM Employee;', 'dialect': 'sqlite'}, 'SELECT COUNT(*) FROM Employee;', '[(8,)]']Adding MemoryHow to add memory to a SQLDatabaseChain:from langchain.llms import OpenAIfrom langchain.utilities import SQLDatabasefrom langchain_experimental.sql import SQLDatabaseChainSet up the SQLDatabase and LLMdb = SQLDatabase.from_uri("sqlite:///../../../../notebooks/Chinook.db")llm = OpenAI(temperature=0, verbose=True)Set up the memoryfrom langchain.memory import ConversationBufferMemorymemory = ConversationBufferMemory()Now we need to add a place for memory in the prompt templatefrom langchain.prompts import PromptTemplatePROMPT_SUFFIX = """Only use the following tables:{table_info}Previous Conversation:{history}Question: {input}"""_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.Never query for all the columns from a specific table, only ask for a the few relevant columns given the question.Pay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.Use the following format:Question: Question hereSQLQuery: SQL Query to runSQLResult: Result of the SQLQueryAnswer: Final answer here"""PROMPT = PromptTemplate.from_template( _DEFAULT_TEMPLATE + PROMPT_SUFFIX,)Now let's create and run out chaindb_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, memory=memory)db_chain.run("name one employee") > Entering new SQLDatabaseChain chain... name one employee SQLQuery:SELECT FirstName, LastName FROM Employee LIMIT 1 SQLResult: [('Andrew', 'Adams')] Answer:Andrew Adams > Finished chain. 'Andrew Adams'db_chain.run("how many letters in their name?") > Entering new SQLDatabaseChain chain... how many letters in their name? SQLQuery:SELECT LENGTH(FirstName) + LENGTH(LastName) AS 'NameLength' FROM Employee WHERE FirstName = 'Andrew' AND LastName = 'Adams' SQLResult: [(11,)] Answer:Andrew Adams has 11 letters in their name. > Finished chain. 
'Andrew Adams has 11 letters in their name.'Choosing how to limit the number of rows returnedIf you are querying for several rows of a table you can select the maximum number of results you want to get by using the 'top_k' parameter (default is 10). This is useful for avoiding query results that exceed the prompt max length or consume tokens unnecessarily.db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, use_query_checker=True, top_k=3)db_chain.run("What are some example tracks by composer Johann Sebastian Bach?") > Entering new SQLDatabaseChain chain... What are some example tracks by composer Johann Sebastian Bach? SQLQuery:SELECT Name FROM Track WHERE Composer = 'Johann Sebastian Bach' LIMIT 3 SQLResult: [('Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace',), ('Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria',), ('Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude',)] Answer:Examples of tracks by Johann Sebastian Bach are Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace, Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria, and Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude. > Finished chain. 'Examples of tracks by Johann Sebastian Bach are Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace, Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria, and Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude.'Adding example rows from each tableSometimes, the format of the data is not obvious and it is optimal to include a sample of rows from the tables in the prompt to allow the LLM to understand the data before providing a final query. Here we will use this feature to let the LLM know that artists are saved with their full names by providing two rows from the Track table.db = SQLDatabase.from_uri( "sqlite:///../../../../notebooks/Chinook.db", include_tables=['Track'], # we include only one table to save tokens in the prompt :) sample_rows_in_table_info=2)The sample rows are added to the prompt after each corresponding table's column information:print(db.table_info) CREATE TABLE "Track" ( "TrackId" INTEGER NOT NULL, "Name" NVARCHAR(200) NOT NULL, "AlbumId" INTEGER, "MediaTypeId" INTEGER NOT NULL, "GenreId" INTEGER, "Composer" NVARCHAR(220), "Milliseconds" INTEGER NOT NULL, "Bytes" INTEGER, "UnitPrice" NUMERIC(10, 2) NOT NULL, PRIMARY KEY ("TrackId"), FOREIGN KEY("MediaTypeId") REFERENCES "MediaType" ("MediaTypeId"), FOREIGN KEY("GenreId") REFERENCES "Genre" ("GenreId"), FOREIGN KEY("AlbumId") REFERENCES "Album" ("AlbumId") ) /* 2 rows from Track table: TrackId Name AlbumId MediaTypeId GenreId Composer Milliseconds Bytes UnitPrice 1 For Those About To Rock (We Salute You) 1 1 1 Angus Young, Malcolm Young, Brian Johnson 343719 11170334 0.99 2 Balls to the Wall 2 2 1 None 342562 5510424 0.99 */db_chain = SQLDatabaseChain.from_llm(llm, db, use_query_checker=True, verbose=True)db_chain.run("What are some example tracks by Bach?") > Entering new SQLDatabaseChain chain... What are some example tracks by Bach? SQLQuery:SELECT "Name", "Composer" FROM "Track" WHERE "Composer" LIKE '%Bach%' LIMIT 5 SQLResult: [('American Woman', 'B. Cummings/G. Peterson/M.J. Kale/R. Bachman'), ('Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace', 'Johann Sebastian Bach'), ('Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria', 'Johann Sebastian Bach'), ('Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude', 'Johann Sebastian Bach'), ('Toccata and Fugue in D Minor, BWV 565: I. 
Toccata', 'Johann Sebastian Bach')] Answer:Tracks by Bach include 'American Woman', 'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace', 'Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria', 'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude', and 'Toccata and Fugue in D Minor, BWV 565: I. Toccata'. > Finished chain. 'Tracks by Bach include \'American Woman\', \'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace\', \'Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria\', \'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude\', and \'Toccata and Fugue in D Minor, BWV 565: I. Toccata\'.'Custom Table InfoIn some cases, it can be useful to provide custom table information instead of using the automatically generated table definitions and the first sample_rows_in_table_info sample rows. For example, if you know that the first few rows of a table are uninformative, it could help to manually provide example rows that are more diverse or provide more information to the model. It is also possible to limit the columns that will be visible to the model if there are unnecessary columns. This information can be provided as a dictionary with table names as the keys and table information as the values. For example, let's provide a custom definition and sample rows for the Track table with only a few columns:custom_table_info = { "Track": """CREATE TABLE Track ( "TrackId" INTEGER NOT NULL, "Name" NVARCHAR(200) NOT NULL, "Composer" NVARCHAR(220), PRIMARY KEY ("TrackId"))/*3 rows from Track table:TrackId Name Composer1 For Those About To Rock (We Salute You) Angus Young, Malcolm Young, Brian Johnson2 Balls to the Wall None3 My favorite song ever The coolest composer of all time*/"""}db = SQLDatabase.from_uri( "sqlite:///../../../../notebooks/Chinook.db", include_tables=['Track', 'Playlist'], sample_rows_in_table_info=2, custom_table_info=custom_table_info)print(db.table_info) CREATE TABLE "Playlist" ( "PlaylistId" INTEGER NOT NULL, "Name" NVARCHAR(120), PRIMARY KEY ("PlaylistId") ) /* 2 rows from Playlist table: PlaylistId Name 1 Music 2 Movies */ CREATE TABLE Track ( "TrackId" INTEGER NOT NULL, "Name" NVARCHAR(200) NOT NULL, "Composer" NVARCHAR(220), PRIMARY KEY ("TrackId") ) /* 3 rows from Track table: TrackId Name Composer 1 For Those About To Rock (We Salute You) Angus Young, Malcolm Young, Brian Johnson 2 Balls to the Wall None 3 My favorite song ever The coolest composer of all time */Note how our custom table definition and sample rows for Track overrides the sample_rows_in_table_info parameter. Tables that are not overridden by custom_table_info, in this example Playlist, will have their table info gathered automatically as usual.db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)db_chain.run("What are some example tracks by Bach?") > Entering new SQLDatabaseChain chain... What are some example tracks by Bach? SQLQuery:SELECT "Name" FROM Track WHERE "Composer" LIKE '%Bach%' LIMIT 5; SQLResult: [('American Woman',), ('Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace',), ('Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria',), ('Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude',), ('Toccata and Fugue in D Minor, BWV 565: I. Toccata',)] Answer:text='You are a SQLite expert. 
Given an input question, first create a syntactically correct SQLite query to run, then look at the results of the query and return the answer to the input question.\nUnless the user specifies in the question a specific number of examples to obtain, query for at most 5 results using the LIMIT clause as per SQLite. You can order the results to return the most informative data in the database.\nNever query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (") to denote them as delimited identifiers.\nPay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n\nUse the following format:\n\nQuestion: "Question here"\nSQLQuery: "SQL Query to run"\nSQLResult: "Result of the SQLQuery"\nAnswer: "Final answer here"\n\nOnly use the following tables:\n\nCREATE TABLE "Playlist" (\n\t"PlaylistId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(120), \n\tPRIMARY KEY ("PlaylistId")\n)\n\n/*\n2 rows from Playlist table:\nPlaylistId\tName\n1\tMusic\n2\tMovies\n*/\n\nCREATE TABLE Track (\n\t"TrackId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(200) NOT NULL,\n\t"Composer" NVARCHAR(220),\n\tPRIMARY KEY ("TrackId")\n)\n/*\n3 rows from Track table:\nTrackId\tName\tComposer\n1\tFor Those About To Rock (We Salute You)\tAngus Young, Malcolm Young, Brian Johnson\n2\tBalls to the Wall\tNone\n3\tMy favorite song ever\tThe coolest composer of all time\n*/\n\nQuestion: What are some example tracks by Bach?\nSQLQuery:SELECT "Name" FROM Track WHERE "Composer" LIKE \'%Bach%\' LIMIT 5;\nSQLResult: [(\'American Woman\',), (\'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace\',), (\'Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria\',), (\'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude\',), (\'Toccata and Fugue in D Minor, BWV 565: I. Toccata\',)]\nAnswer:' You are a SQLite expert. Given an input question, first create a syntactically correct SQLite query to run, then look at the results of the query and return the answer to the input question. Unless the user specifies in the question a specific number of examples to obtain, query for at most 5 results using the LIMIT clause as per SQLite. You can order the results to return the most informative data in the database. Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (") to denote them as delimited identifiers. Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table. Use the following format: Question: "Question here" SQLQuery: "SQL Query to run" SQLResult: "Result of the SQLQuery" Answer: "Final answer here" Only use the following tables: CREATE TABLE "Playlist" ( "PlaylistId" INTEGER NOT NULL, "Name" NVARCHAR(120), PRIMARY KEY ("PlaylistId") ) /* 2 rows from Playlist table: PlaylistId Name 1 Music 2 Movies */ CREATE TABLE Track ( "TrackId" INTEGER NOT NULL, "Name" NVARCHAR(200) NOT NULL, "Composer" NVARCHAR(220), PRIMARY KEY ("TrackId") ) /* 3 rows from Track table: TrackId Name Composer 1 For Those About To Rock (We Salute You) Angus Young, Malcolm Young, Brian Johnson 2 Balls to the Wall None 3 My favorite song ever The coolest composer of all time */ Question: What are some example tracks by Bach? 
SQLQuery:SELECT "Name" FROM Track WHERE "Composer" LIKE '%Bach%' LIMIT 5; SQLResult: [('American Woman',), ('Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace',), ('Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria',), ('Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude',), ('Toccata and Fugue in D Minor, BWV 565: I. Toccata',)] Answer: {'input': 'What are some example tracks by Bach?\nSQLQuery:SELECT "Name" FROM Track WHERE "Composer" LIKE \'%Bach%\' LIMIT 5;\nSQLResult: [(\'American Woman\',), (\'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace\',), (\'Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria\',), (\'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude\',), (\'Toccata and Fugue in D Minor, BWV 565: I. Toccata\',)]\nAnswer:', 'top_k': '5', 'dialect': 'sqlite', 'table_info': '\nCREATE TABLE "Playlist" (\n\t"PlaylistId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(120), \n\tPRIMARY KEY ("PlaylistId")\n)\n\n/*\n2 rows from Playlist table:\nPlaylistId\tName\n1\tMusic\n2\tMovies\n*/\n\nCREATE TABLE Track (\n\t"TrackId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(200) NOT NULL,\n\t"Composer" NVARCHAR(220),\n\tPRIMARY KEY ("TrackId")\n)\n/*\n3 rows from Track table:\nTrackId\tName\tComposer\n1\tFor Those About To Rock (We Salute You)\tAngus Young, Malcolm Young, Brian Johnson\n2\tBalls to the Wall\tNone\n3\tMy favorite song ever\tThe coolest composer of all time\n*/', 'stop': ['\nSQLResult:']} Examples of tracks by Bach include "American Woman", "Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace", "Aria Mit 30 Veränderungen, BWV 988 'Goldberg Variations': Aria", "Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude", and "Toccata and Fugue in D Minor, BWV 565: I. Toccata". > Finished chain. 'Examples of tracks by Bach include "American Woman", "Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace", "Aria Mit 30 Veränderungen, BWV 988 \'Goldberg Variations\': Aria", "Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude", and "Toccata and Fugue in D Minor, BWV 565: I. Toccata".'SQL ViewsIn some case, the table schema can be hidden behind a JSON or JSONB column. Adding row samples into the prompt might help won't always describe the data perfectly. For this reason, a custom SQL views can help.CREATE VIEW accounts_v AS select id, firstname, lastname, email, created_at, updated_at, cast(stats->>'total_post' as int) as total_post, cast(stats->>'total_comments' as int) as total_comments, cast(stats->>'ltv' as int) as ltv FROM accounts;Then limit the tables visible from SQLDatabase to the created view.db = SQLDatabase.from_uri( "sqlite:///../../../../notebooks/Chinook.db", include_tables=['accounts_v']) # we include only the viewSQLDatabaseSequentialChainChain for querying SQL database that is a sequential chain.The chain is as follows:1. Based on the query, determine which tables to use.2. Based on those tables, call the normal SQL database chain.This is useful in cases where the number of tables in the database is large.from langchain_experimental.sql import SQLDatabaseSequentialChaindb = SQLDatabase.from_uri("sqlite:///../../../../notebooks/Chinook.db")chain = SQLDatabaseSequentialChain.from_llm(llm, db, verbose=True)chain.run("How many employees are also customers?") > Entering new SQLDatabaseSequentialChain chain... Table names to use: ['Employee', 'Customer'] > Entering new SQLDatabaseChain chain... How many employees are also customers? 
SQLQuery:SELECT COUNT(*) FROM Employee e INNER JOIN Customer c ON e.EmployeeId = c.SupportRepId; SQLResult: [(59,)] Answer:59 employees are also customers. > Finished chain. > Finished chain. '59 employees are also customers.'Using Local Language ModelsSometimes you may not have the luxury of using OpenAI or other service-hosted large language model. You can, ofcourse, try to use the SQLDatabaseChain with a local model, but will quickly realize that most models you can run locally even with a large GPU struggle to generate the right output.import loggingimport torchfrom transformers import AutoTokenizer, GPT2TokenizerFast, pipeline, AutoModelForSeq2SeqLM, AutoModelForCausalLMfrom langchain.llms import HuggingFacePipeline# Note: This model requires a large GPU, e.g. an 80GB A100. See documentation for other ways to run private non-OpenAI models.model_id = "google/flan-ul2"model = AutoModelForSeq2SeqLM.from_pretrained(model_id, temperature=0)device_id = -1 # default to no-GPU, but use GPU and half precision mode if availableif torch.cuda.is_available(): device_id = 0 try: model = model.half() except RuntimeError as exc: logging.warn(f"Could not run model in half precision mode: {str(exc)}")tokenizer = AutoTokenizer.from_pretrained(model_id)pipe = pipeline(task="text2text-generation", model=model, tokenizer=tokenizer, max_length=1024, device=device_id)local_llm = HuggingFacePipeline(pipeline=pipe) /workspace/langchain/.venv/lib/python3.9/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html from .autonotebook import tqdm as notebook_tqdm Loading checkpoint shards: 100%|██████████| 8/8 [00:32<00:00, 4.11s/it]from langchain.utilities import SQLDatabasefrom langchain_experimental.sql import SQLDatabaseChaindb = SQLDatabase.from_uri("sqlite:///../../../../notebooks/Chinook.db", include_tables=['Customer'])local_chain = SQLDatabaseChain.from_llm(local_llm, db, verbose=True, return_intermediate_steps=True, use_query_checker=True)This model should work for very simple SQL queries, as long as you use the query checker as specified above, e.g.:local_chain("How many customers are there?") > Entering new SQLDatabaseChain chain... How many customers are there? SQLQuery: /workspace/langchain/.venv/lib/python3.9/site-packages/transformers/pipelines/base.py:1070: UserWarning: You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset warnings.warn( /workspace/langchain/.venv/lib/python3.9/site-packages/transformers/pipelines/base.py:1070: UserWarning: You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset warnings.warn( SELECT count(*) FROM Customer SQLResult: [(59,)] Answer: /workspace/langchain/.venv/lib/python3.9/site-packages/transformers/pipelines/base.py:1070: UserWarning: You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset warnings.warn( [59] > Finished chain. 
{'query': 'How many customers are there?', 'result': '[59]', 'intermediate_steps': [{'input': 'How many customers are there?\nSQLQuery:SELECT count(*) FROM Customer\nSQLResult: [(59,)]\nAnswer:', 'top_k': '5', 'dialect': 'sqlite', 'table_info': '\nCREATE TABLE "Customer" (\n\t"CustomerId" INTEGER NOT NULL, \n\t"FirstName" NVARCHAR(40) NOT NULL, \n\t"LastName" NVARCHAR(20) NOT NULL, \n\t"Company" NVARCHAR(80), \n\t"Address" NVARCHAR(70), \n\t"City" NVARCHAR(40), \n\t"State" NVARCHAR(40), \n\t"Country" NVARCHAR(40), \n\t"PostalCode" NVARCHAR(10), \n\t"Phone" NVARCHAR(24), \n\t"Fax" NVARCHAR(24), \n\t"Email" NVARCHAR(60) NOT NULL, \n\t"SupportRepId" INTEGER, \n\tPRIMARY KEY ("CustomerId"), \n\tFOREIGN KEY("SupportRepId") REFERENCES "Employee" ("EmployeeId")\n)\n\n/*\n3 rows from Customer table:\nCustomerId\tFirstName\tLastName\tCompany\tAddress\tCity\tState\tCountry\tPostalCode\tPhone\tFax\tEmail\tSupportRepId\n1\tLuís\tGonçalves\tEmbraer - Empresa Brasileira |
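Because the local chain above was created with return_intermediate_steps=True, we can look directly at what it did under the hood. A minimal sketch, assuming the local_chain object built above and the 'result' / 'intermediate_steps' keys visible in the returned dict:

# Sketch: inspect the chain's work (assumes `local_chain` from above).
result = local_chain("How many customers are there?")
print(result["result"])  # the final natural-language answer
for step in result["intermediate_steps"]:
    # each step records inputs fed to the LLM and the generated SQL / raw SQL result
    print(step)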
111 | https://python.langchain.com/docs/use_cases/apis | Interacting with APIsOn this pageInteracting with APIsUse caseSuppose you want an LLM to interact with external APIs.This can be very useful for retrieving context for the LLM to utilize.And, more generally, it allows us to interact with APIs using natural language! OverviewThere are two primary ways to interface LLMs with external APIs:Functions: For example, OpenAI functions is one popular means of doing this.LLM-generated interface: Use an LLM with access to API documentation to create an interface.QuickstartMany APIs are already compatible with OpenAI function calling.For example, Klarna has a YAML file that describes its API and allows OpenAI to interact with it:https://www.klarna.com/us/shopping/public/openai/v0/api-docs/Other options include:Speak for translationXKCD for comicsWe can supply the specification to get_openapi_chain directly in order to query the API with OpenAI functions:pip install langchain openai # Set env var OPENAI_API_KEY or load from a .env file:# import dotenv# dotenv.load_dotenv()from langchain.chains.openai_functions.openapi import get_openapi_chainchain = get_openapi_chain("https://www.klarna.com/us/shopping/public/openai/v0/api-docs/")chain("What are some options for a men's large blue button down shirt") Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support. {'query': "What are some options for a men's large blue button down shirt", 'response': {'products': [{'name': 'Cubavera Four Pocket Guayabera Shirt', 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3202055522/Clothing/Cubavera-Four-Pocket-Guayabera-Shirt/?utm_source=openai&ref-site=openai_plugin', 'price': '$13.50', 'attributes': ['Material:Polyester,Cotton', 'Target Group:Man', 'Color:Red,White,Blue,Black', 'Properties:Pockets', 'Pattern:Solid Color', 'Size (Small-Large):S,XL,L,M,XXL']}, {'name': 'Polo Ralph Lauren Plaid Short Sleeve Button-down Oxford Shirt', 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3207163438/Clothing/Polo-Ralph-Lauren-Plaid-Short-Sleeve-Button-down-Oxford-Shirt/?utm_source=openai&ref-site=openai_plugin', 'price': '$52.20', 'attributes': ['Material:Cotton', 'Target Group:Man', 'Color:Red,Blue,Multicolor', 'Size (Small-Large):S,XL,L,M,XXL']}, {'name': 'Brixton Bowery Flannel Shirt', 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3202331096/Clothing/Brixton-Bowery-Flannel-Shirt/?utm_source=openai&ref-site=openai_plugin', 'price': '$27.48', 'attributes': ['Material:Cotton', 'Target Group:Man', 'Color:Gray,Blue,Black,Orange', 'Properties:Pockets', 'Pattern:Checkered', 'Size (Small-Large):XL,3XL,4XL,5XL,L,M,XXL']}, {'name': 'Vineyard Vines Gingham On-The-Go brrr Classic Fit Shirt Crystal', 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3201938510/Clothing/Vineyard-Vines-Gingham-On-The-Go-brrr-Classic-Fit-Shirt-Crystal/?utm_source=openai&ref-site=openai_plugin', 'price': '$80.64', 'attributes': ['Material:Cotton', 'Target Group:Man', 'Color:Blue', 'Size (Small-Large):XL,XS,L,M']}, {'name': "Carhartt Men's Loose Fit Midweight Short Sleeve Plaid Shirt", 'url': 'https://www.klarna.com/us/shopping/pl/cl10001/3201826024/Clothing/Carhartt-Men-s-Loose-Fit-Midweight-Short-Sleeve-Plaid-Shirt/?utm_source=openai&ref-site=openai_plugin', 'price': '$17.99', 'attributes': ['Material:Cotton', 'Target Group:Man', 'Color:Red,Brown,Blue,Green', 'Properties:Pockets', 'Pattern:Checkered', 'Size 
(Small-Large):S,XL,L,M']}]}}FunctionsWe can unpack what is happening when we use the functions to call external APIs.Let's look at the LangSmith trace:See here that we call the OpenAI LLM with the provided API spec:https://www.klarna.com/us/shopping/public/openai/v0/api-docs/The prompt then tells the LLM to use the API spec with input question:Use the provided APIs to respond to this user query:What are some options for a men's large blue button down shirtThe LLM returns the parameters for the function call productsUsingGET, which is specified in the provided API spec:function_call: name: productsUsingGET arguments: |- { "params": { "countryCode": "US", "q": "men's large blue button down shirt", "size": 5, "min_price": 0, "max_price": 100 } }This Dict above split and the API is called here.API ChainWe can also build our own interface to external APIs using the APIChain and provided API documentation.from langchain.llms import OpenAIfrom langchain.chains import APIChainfrom langchain.chains.api import open_meteo_docsllm = OpenAI(temperature=0)chain = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS, verbose=True)chain.run('What is the weather like right now in Munich, Germany in degrees Fahrenheit?') > Entering new APIChain chain... https://api.open-meteo.com/v1/forecast?latitude=48.1351&longitude=11.5820&hourly=temperature_2m&temperature_unit=fahrenheit¤t_weather=true {"latitude":48.14,"longitude":11.58,"generationtime_ms":1.0769367218017578,"utc_offset_seconds":0,"timezone":"GMT","timezone_abbreviation":"GMT","elevation":521.0,"current_weather":{"temperature":52.9,"windspeed":12.6,"winddirection":239.0,"weathercode":3,"is_day":0,"time":"2023-08-07T22:00"},"hourly_units":{"time":"iso8601","temperature_2m":"°F"},"hourly":{"time":["2023-08-07T00:00","2023-08-07T01:00","2023-08-07T02:00","2023-08-07T03:00","2023-08-07T04:00","2023-08-07T05:00","2023-08-07T06:00","2023-08-07T07:00","2023-08-07T08:00","2023-08-07T09:00","2023-08-07T10:00","2023-08-07T11:00","2023-08-07T12:00","2023-08-07T13:00","2023-08-07T14:00","2023-08-07T15:00","2023-08-07T16:00","2023-08-07T17:00","2023-08-07T18:00","2023-08-07T19:00","2023-08-07T20:00","2023-08-07T21:00","2023-08-07T22:00","2023-08-07T23:00","2023-08-08T00:00","2023-08-08T01:00","2023-08-08T02:00","2023-08-08T03:00","2023-08-08T04:00","2023-08-08T05:00","2023-08-08T06:00","2023-08-08T07:00","2023-08-08T08:00","2023-08-08T09:00","2023-08-08T10:00","2023-08-08T11:00","2023-08-08T12:00","2023-08-08T13:00","2023-08-08T14:00","2023-08-08T15:00","2023-08-08T16:00","2023-08-08T17:00","2023-08-08T18:00","2023-08-08T19:00","2023-08-08T20:00","2023-08-08T21:00","2023-08-08T22:00","2023-08-08T23:00","2023-08-09T00:00","2023-08-09T01:00","2023-08-09T02:00","2023-08-09T03:00","2023-08-09T04:00","2023-08-09T05:00","2023-08-09T06:00","2023-08-09T07:00","2023-08-09T08:00","2023-08-09T09:00","2023-08-09T10:00","2023-08-09T11:00","2023-08-09T12:00","2023-08-09T13:00","2023-08-09T14:00","2023-08-09T15:00","2023-08-09T16:00","2023-08-09T17:00","2023-08-09T18:00","2023-08-09T19:00","2023-08-09T20:00","2023-08-09T21:00","2023-08-09T22:00","2023-08-09T23:00","2023-08-10T00:00","2023-08-10T01:00","2023-08-10T02:00","2023-08-10T03:00","2023-08-10T04:00","2023-08-10T05:00","2023-08-10T06:00","2023-08-10T07:00","2023-08-10T08:00","2023-08-10T09:00","2023-08-10T10:00","2023-08-10T11:00","2023-08-10T12:00","2023-08-10T13:00","2023-08-10T14:00","2023-08-10T15:00","2023-08-10T16:00","2023-08-10T17:00","2023-08-10T18:00","2023-08-10T19:00","2023-08-10T20:00","2023-08-1
0T21:00","2023-08-10T22:00","2023-08-10T23:00","2023-08-11T00:00","2023-08-11T01:00","2023-08-11T02:00","2023-08-11T03:00","2023-08-11T04:00","2023-08-11T05:00","2023-08-11T06:00","2023-08-11T07:00","2023-08-11T08:00","2023-08-11T09:00","2023-08-11T10:00","2023-08-11T11:00","2023-08-11T12:00","2023-08-11T13:00","2023-08-11T14:00","2023-08-11T15:00","2023-08-11T16:00","2023-08-11T17:00","2023-08-11T18:00","2023-08-11T19:00","2023-08-11T20:00","2023-08-11T21:00","2023-08-11T22:00","2023-08-11T23:00","2023-08-12T00:00","2023-08-12T01:00","2023-08-12T02:00","2023-08-12T03:00","2023-08-12T04:00","2023-08-12T05:00","2023-08-12T06:00","2023-08-12T07:00","2023-08-12T08:00","2023-08-12T09:00","2023-08-12T10:00","2023-08-12T11:00","2023-08-12T12:00","2023-08-12T13:00","2023-08-12T14:00","2023-08-12T15:00","2023-08-12T16:00","2023-08-12T17:00","2023-08-12T18:00","2023-08-12T19:00","2023-08-12T20:00","2023-08-12T21:00","2023-08-12T22:00","2023-08-12T23:00","2023-08-13T00:00","2023-08-13T01:00","2023-08-13T02:00","2023-08-13T03:00","2023-08-13T04:00","2023-08-13T05:00","2023-08-13T06:00","2023-08-13T07:00","2023-08-13T08:00","2023-08-13T09:00","2023-08-13T10:00","2023-08-13T11:00","2023-08-13T12:00","2023-08-13T13:00","2023-08-13T14:00","2023-08-13T15:00","2023-08-13T16:00","2023-08-13T17:00","2023-08-13T18:00","2023-08-13T19:00","2023-08-13T20:00","2023-08-13T21:00","2023-08-13T22:00","2023-08-13T23:00"],"temperature_2m":[53.0,51.2,50.9,50.4,50.7,51.3,51.7,52.9,54.3,56.1,57.4,59.3,59.1,60.7,59.7,58.8,58.8,57.8,56.6,55.3,53.9,52.7,52.9,53.2,52.0,51.8,51.3,50.7,50.8,51.5,53.9,57.7,61.2,63.2,64.7,66.6,67.5,67.0,68.7,68.7,67.9,66.2,64.4,61.4,59.8,58.9,57.9,56.3,55.7,55.3,55.5,55.4,55.7,56.5,57.6,58.8,59.7,59.1,58.9,60.6,59.9,59.8,59.9,61.7,63.2,63.6,62.3,58.9,57.3,57.1,57.0,56.5,56.2,56.0,55.3,54.7,54.4,55.2,57.8,60.7,63.0,65.3,66.9,68.2,70.1,72.1,72.6,71.4,69.7,68.6,66.2,63.6,61.8,60.6,59.6,58.9,58.0,57.1,56.3,56.2,56.7,57.9,59.9,63.7,68.4,72.4,75.0,76.8,78.0,78.7,78.9,78.4,76.9,74.8,72.5,70.1,67.6,65.6,64.4,63.9,63.4,62.7,62.2,62.1,62.5,63.4,65.1,68.0,71.7,74.8,76.8,78.2,79.1,79.6,79.7,79.2,77.6,75.3,73.7,68.6,66.8,65.3,64.2,63.4,62.6,61.7,60.9,60.6,60.9,61.6,63.2,65.9,69.3,72.2,74.4,76.2,77.6,78.8,79.6,79.6,78.4,76.4,74.3,72.3,70.4,68.7,67.6,66.8]}} > Finished chain. ' The current temperature in Munich, Germany is 52.9°F.'Note that we supply information about the API:open_meteo_docs.OPEN_METEO_DOCS[0:500] 'BASE URL: https://api.open-meteo.com/\n\nAPI Documentation\nThe API endpoint /v1/forecast accepts a geographical coordinate, a list of weather variables and responds with a JSON hourly weather forecast for 7 days. Time always starts at 0:00 today and contains 168 hours. 
All URL parameters are listed below:\n\nParameter\tFormat\tRequired\tDefault\tDescription\nlatitude, longitude\tFloating point\tYes\t\tGeographical WGS84 coordinate of the location\nhourly\tString array\tNo\t\tA list of weather variables which shou'Under the hood, we do two things:api_request_chain: Generate an API URL based on the input question and the api_docsapi_answer_chain: generate a final answer based on the API responseWe can look at the LangSmith trace to inspect this:The api_request_chain produces the API url from our question and the API documentation:Here we make the API request with the API url.The api_answer_chain takes the response from the API and provides us with a natural language response:Going deeperTest with other APIsimport osos.environ['TMDB_BEARER_TOKEN'] = ""from langchain.chains.api import tmdb_docsheaders = {"Authorization": f"Bearer {os.environ['TMDB_BEARER_TOKEN']}"}chain = APIChain.from_llm_and_api_docs(llm, tmdb_docs.TMDB_DOCS, headers=headers, verbose=True)chain.run("Search for 'Avatar'")import osfrom langchain.llms import OpenAIfrom langchain.chains.api import podcast_docsfrom langchain.chains import APIChain listen_api_key = 'xxx' # Get api key here: https://www.listennotes.com/api/pricing/llm = OpenAI(temperature=0)headers = {"X-ListenAPI-Key": listen_api_key}chain = APIChain.from_llm_and_api_docs(llm, podcast_docs.PODCAST_DOCS, headers=headers, verbose=True)chain.run("Search for 'silicon valley bank' podcast episodes, audio length is more than 30 minutes, return only 1 results")Web requestsURL requests are such a common use-case that we have the LLMRequestsChain, which makes an HTTP GET request. from langchain.llms import OpenAIfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMRequestsChain, LLMChaintemplate = """Between >>> and <<< are the raw search result text from google.Extract the answer to the question '{query}' or say "not found" if the information is not contained.Use the formatExtracted:<answer or "not found">>>> {requests_result} <<<Extracted:"""PROMPT = PromptTemplate( input_variables=["query", "requests_result"], template=template,)chain = LLMRequestsChain(llm_chain=LLMChain(llm=OpenAI(temperature=0), prompt=PROMPT))question = "What are the Three (3) biggest countries, and their respective sizes?"inputs = { "query": question, "url": "https://www.google.com/search?q=" + question.replace(" ", "+"),}chain(inputs) {'query': 'What are the Three (3) biggest countries, and their respective sizes?', 'url': 'https://www.google.com/search?q=What+are+the+Three+(3)+biggest+countries,+and+their+respective+sizes?', 'output': ' Russia (17,098,242 km²), Canada (9,984,670 km²), China (9,706,961 km²)'}PreviousSQL Database ChainNextChatbotsUse caseOverviewQuickstartFunctionsAPI ChainGoing deeper |
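To make the function-calling step above concrete, here is a rough sketch of the HTTP request the chain issues on our behalf once the LLM has produced the productsUsingGET arguments. The endpoint path is an assumption inferred from the Klarna OpenAPI spec URL, not something taken from the trace; the chain normally performs this call for you:

import requests

# Arguments as returned by the LLM's function call (copied from the trace above)
params = {
    "countryCode": "US",
    "q": "men's large blue button down shirt",
    "size": 5,
    "min_price": 0,
    "max_price": 100,
}
# Assumed endpoint for the productsUsingGET operation in the Klarna spec
url = "https://www.klarna.com/us/shopping/public/openai/v0/products"
response = requests.get(url, params=params)
print(response.json())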
112 | https://python.langchain.com/docs/use_cases/chatbots | ChatbotsOn this pageChatbotsUse caseChatbots are one of the central LLM use-cases. The core features of chatbots are that they can have long-running conversations and have access to information that users want to know about.Aside from basic prompting and LLMs, memory and retrieval are the core components of a chatbot. Memory allows a chatbot to remember past interactions, and retrieval provides a chatbot with up-to-date, domain-specific information.OverviewThe chat model interface is based around messages rather than raw text. Several components are important to consider for chat:chat model: See here for a list of chat model integrations and here for documentation on the chat model interface in LangChain. You can use LLMs (see here) for chatbots as well, but chat models have a more conversational tone and natively support a message interface.prompt template: Prompt templates make it easy to assemble prompts that combine default messages, user input, chat history, and (optionally) additional retrieved context.memory: See here for in-depth documentation on memory typesretriever (optional): See here for in-depth documentation on retrieval systems. These are useful if you want to build a chatbot with domain-specific knowledge.QuickstartHere's a quick preview of how we can create chatbot interfaces. First let's install some dependencies and set the required credentials:pip install langchain openai # Set env var OPENAI_API_KEY or load from a .env file:# import dotenv# dotenv.load_dotenv()With a plain chat model, we can get chat completions by passing one or more messages to the model.The chat model will respond with a message.from langchain.schema import ( AIMessage, HumanMessage, SystemMessage)from langchain.chat_models import ChatOpenAIchat = ChatOpenAI()chat([HumanMessage(content="Translate this sentence from English to French: I love programming.")]) AIMessage(content="J'adore la programmation.", additional_kwargs={}, example=False)And if we pass in a list of messages:messages = [ SystemMessage(content="You are a helpful assistant that translates English to French."), HumanMessage(content="I love programming.")]chat(messages) AIMessage(content="J'adore la programmation.", additional_kwargs={}, example=False)We can then wrap our chat model in a ConversationChain, which has built-in memory for remembering past user inputs and model outputs.from langchain.chains import ConversationChain conversation = ConversationChain(llm=chat) conversation.run("Translate this sentence from English to French: I love programming.") 'Je adore la programmation.'conversation.run("Translate it to German.") 'Ich liebe Programmieren.'MemoryAs we mentioned above, the core component of chatbots is the memory system. One of the simplest and most commonly used forms of memory is ConversationBufferMemory:This memory allows for storing of messages in a bufferWhen called in a chain, it returns all of the messages it has storedLangChain comes with many other types of memory, too. See here for in-depth documentation on memory types.For now let's take a quick look at ConversationBufferMemory. We can manually add a few chat messages to the memory like so:from langchain.memory import ConversationBufferMemorymemory = ConversationBufferMemory()memory.chat_memory.add_user_message("hi!")memory.chat_memory.add_ai_message("whats up?")And now we can load from our memory. The key method exposed by all Memory classes is load_memory_variables. 
This takes in any initial chain input and returns a list of memory variables which are added to the chain input. Since this simple memory type doesn't actually take into account the chain input when loading memory, we can pass in an empty input for now:memory.load_memory_variables({}) {'history': 'Human: hi!\nAI: whats up?'}We can also keep a sliding window of the most recent k interactions using ConversationBufferWindowMemory.from langchain.memory import ConversationBufferWindowMemorymemory = ConversationBufferWindowMemory(k=1)memory.save_context({"input": "hi"}, {"output": "whats up"})memory.save_context({"input": "not much you"}, {"output": "not much"})memory.load_memory_variables({}) {'history': 'Human: not much you\nAI: not much'}ConversationSummaryMemory is an extension of this theme.It creates a summary of the conversation over time. This memory is most useful for longer conversations where the full message history would consume many tokens.from langchain.llms import OpenAIfrom langchain.memory import ConversationSummaryMemoryllm = OpenAI(temperature=0)memory = ConversationSummaryMemory(llm=llm)memory.save_context({"input": "hi"},{"output": "whats up"})memory.save_context({"input": "im working on better docs for chatbots"},{"output": "oh, that sounds like a lot of work"})memory.save_context({"input": "yes, but it's worth the effort"},{"output": "agreed, good docs are important!"})memory.load_memory_variables({}) {'history': '\nThe human greets the AI, to which the AI responds. The human then mentions they are working on better docs for chatbots, to which the AI responds that it sounds like a lot of work. The human agrees that it is worth the effort, and the AI agrees that good docs are important.'}ConversationSummaryBufferMemory extends this a bit further:It uses token length rather than number of interactions to determine when to flush interactions.from langchain.memory import ConversationSummaryBufferMemorymemory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=10)memory.save_context({"input": "hi"}, {"output": "whats up"})memory.save_context({"input": "not much you"}, {"output": "not much"})ConversationWe can unpack what goes under the hood with ConversationChain. We can specify our memory, ConversationSummaryMemory and we can specify the prompt. from langchain.prompts import ( ChatPromptTemplate, MessagesPlaceholder, SystemMessagePromptTemplate, HumanMessagePromptTemplate,)from langchain.chains import LLMChain# LLMllm = ChatOpenAI()# Prompt prompt = ChatPromptTemplate( messages=[ SystemMessagePromptTemplate.from_template( "You are a nice chatbot having a conversation with a human." ), # The `variable_name` here is what must align with memory MessagesPlaceholder(variable_name="chat_history"), HumanMessagePromptTemplate.from_template("{question}") ])# Notice that we `return_messages=True` to fit into the MessagesPlaceholder# Notice that `"chat_history"` aligns with the MessagesPlaceholder namememory = ConversationBufferMemory(memory_key="chat_history",return_messages=True)conversation = LLMChain( llm=llm, prompt=prompt, verbose=True, memory=memory)# Notice that we just pass in the `question` variables - `chat_history` gets populated by memoryconversation({"question": "hi"}) > Entering new LLMChain chain... Prompt after formatting: System: You are a nice chatbot having a conversation with a human. Human: hi > Finished chain. {'question': 'hi', 'chat_history': [HumanMessage(content='hi', additional_kwargs={}, example=False), AIMessage(content='Hello! 
How can I assist you today?', additional_kwargs={}, example=False)], 'text': 'Hello! How can I assist you today?'}conversation({"question": "Translate this sentence from English to French: I love programming."}) > Entering new LLMChain chain... Prompt after formatting: System: You are a nice chatbot having a conversation with a human. Human: hi AI: Hello! How can I assist you today? Human: Translate this sentence from English to French: I love programming. > Finished chain. {'question': 'Translate this sentence from English to French: I love programming.', 'chat_history': [HumanMessage(content='hi', additional_kwargs={}, example=False), AIMessage(content='Hello! How can I assist you today?', additional_kwargs={}, example=False), HumanMessage(content='Translate this sentence from English to French: I love programming.', additional_kwargs={}, example=False), AIMessage(content='Sure! The translation of "I love programming" from English to French is "J\'adore programmer."', additional_kwargs={}, example=False)], 'text': 'Sure! The translation of "I love programming" from English to French is "J\'adore programmer."'}conversation({"question": "Now translate the sentence to German."}) > Entering new LLMChain chain... Prompt after formatting: System: You are a nice chatbot having a conversation with a human. Human: hi AI: Hello! How can I assist you today? Human: Translate this sentence from English to French: I love programming. AI: Sure! The translation of "I love programming" from English to French is "J'adore programmer." Human: Now translate the sentence to German. > Finished chain. {'question': 'Now translate the sentence to German.', 'chat_history': [HumanMessage(content='hi', additional_kwargs={}, example=False), AIMessage(content='Hello! How can I assist you today?', additional_kwargs={}, example=False), HumanMessage(content='Translate this sentence from English to French: I love programming.', additional_kwargs={}, example=False), AIMessage(content='Sure! The translation of "I love programming" from English to French is "J\'adore programmer."', additional_kwargs={}, example=False), HumanMessage(content='Now translate the sentence to German.', additional_kwargs={}, example=False), AIMessage(content='Certainly! The translation of "I love programming" from English to German is "Ich liebe das Programmieren."', additional_kwargs={}, example=False)], 'text': 'Certainly! 
The translation of "I love programming" from English to German is "Ich liebe das Programmieren."'}We can see the chat history preserved in the prompt using the LangSmith trace.Chat RetrievalNow, suppose we want to chat with documents or some other source of knowledge.This is popular use case, combining chat with document retrieval.It allows us to chat with specific information that the model was not trained on.pip install tiktoken chromadbLoad a blog post.from langchain.document_loaders import WebBaseLoaderloader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")data = loader.load()Split and store this in a vector.from langchain.text_splitter import RecursiveCharacterTextSplittertext_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)all_splits = text_splitter.split_documents(data)from langchain.embeddings import OpenAIEmbeddingsfrom langchain.vectorstores import Chromavectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())Create our memory, as before, but's let's use ConversationSummaryMemory.memory = ConversationSummaryMemory(llm=llm,memory_key="chat_history",return_messages=True)from langchain.chat_models import ChatOpenAIfrom langchain.chains import ConversationalRetrievalChainllm = ChatOpenAI()retriever = vectorstore.as_retriever()qa = ConversationalRetrievalChain.from_llm(llm, retriever=retriever, memory=memory)qa("How do agents use Task decomposition?") {'question': 'How do agents use Task decomposition?', 'chat_history': [SystemMessage(content='', additional_kwargs={})], 'answer': 'Agents can use task decomposition in several ways:\n\n1. Simple prompting: Agents can use Language Model based prompting to break down tasks into subgoals. For example, by providing prompts like "Steps for XYZ" or "What are the subgoals for achieving XYZ?", the agent can generate a sequence of smaller steps that lead to the completion of the overall task.\n\n2. Task-specific instructions: Agents can be given task-specific instructions to guide their planning process. For example, if the task is to write a novel, the agent can be instructed to "Write a story outline." This provides a high-level structure for the task and helps in breaking it down into smaller components.\n\n3. Human inputs: Agents can also take inputs from humans to decompose tasks. This can be done through direct communication or by leveraging human expertise. Humans can provide guidance and insights to help the agent break down complex tasks into manageable subgoals.\n\nOverall, task decomposition allows agents to break down large tasks into smaller, more manageable subgoals, enabling them to plan and execute complex tasks efficiently.'}qa("What are the various ways to implemet memory to support it?") {'question': 'What are the various ways to implemet memory to support it?', 'chat_history': [SystemMessage(content='The human asks how agents use task decomposition. The AI explains that agents can use task decomposition in several ways, including simple prompting, task-specific instructions, and human inputs. Task decomposition allows agents to break down large tasks into smaller, more manageable subgoals, enabling them to plan and execute complex tasks efficiently.', additional_kwargs={})], 'answer': 'There are several ways to implement memory to support task decomposition:\n\n1. Long-Term Memory Management: This involves storing and organizing information in a long-term memory system. 
The agent can retrieve past experiences, knowledge, and learned strategies to guide the task decomposition process.\n\n2. Internet Access: The agent can use internet access to search for relevant information and gather resources to aid in task decomposition. This allows the agent to access a vast amount of information and utilize it in the decomposition process.\n\n3. GPT-3.5 Powered Agents: The agent can delegate simple tasks to GPT-3.5 powered agents. These agents can perform specific tasks or provide assistance in task decomposition, allowing the main agent to focus on higher-level planning and decision-making.\n\n4. File Output: The agent can store the results of task decomposition in files or documents. This allows for easy retrieval and reference during the execution of the task.\n\nThese memory resources help the agent in organizing and managing information, making informed decisions, and effectively decomposing complex tasks into smaller, manageable subgoals.'}Again, we can use the LangSmith trace to explore the prompt structure.Going deeperAgents, such as the conversational retrieval agent, can be used for retrieval when necessary while also holding a conversation.PreviousInteracting with APIsNextCode understandingUse caseOverviewQuickstartMemoryConversationChat RetrievalGoing deeper |
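As a small illustration of how the chat-retrieval pieces above fit together, here is a sketch, assuming the qa chain and memory objects defined earlier, of a simple multi-turn loop; after each call the ConversationSummaryMemory keeps a running summary that is fed back into the next turn:

# Sketch: multi-turn retrieval chat (assumes `qa` and `memory` from above).
for question in [
    "How do agents use Task decomposition?",
    "What are the various ways to implement memory to support it?",
]:
    result = qa(question)
    print(result["answer"])

# The running conversation summary held by ConversationSummaryMemory:
print(memory.load_memory_variables({}))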
113 | https://python.langchain.com/docs/use_cases/code_understanding | Code understandingOn this pageCode understandingUse caseSource code analysis is one of the most popular LLM applications (e.g., GitHub Copilot, Code Interpreter, Codium, and Codeium) for use-cases such as:Q&A over the code base to understand how it worksUsing LLMs for suggesting refactors or improvementsUsing LLMs for documenting the codeOverviewThe pipeline for QA over code follows the steps we do for document question answering, with some differences:In particular, we can employ a splitting strategy that does a few things:Keeps each top-level function and class in the code in its own document. Puts the remaining code into a separate document. Retains metadata about where each split comes fromQuickstartpip install openai tiktoken chromadb langchain# Set env var OPENAI_API_KEY or load from a .env file# import dotenv# dotenv.load_dotenv()We'll follow the structure of this notebook and employ context-aware code splitting.LoadingWe will upload all Python project files using the langchain.document_loaders.TextLoader.The following script iterates over the files in the LangChain repository and loads every .py file (a.k.a. documents):# from git import Repofrom langchain.text_splitter import Languagefrom langchain.document_loaders.generic import GenericLoaderfrom langchain.document_loaders.parsers import LanguageParser# Clonerepo_path = "/Users/rlm/Desktop/test_repo"# repo = Repo.clone_from("https://github.com/langchain-ai/langchain", to_path=repo_path)We load the .py code using LanguageParser, which will:Keep top-level functions and classes together (into a single document)Put remaining code into a separate documentRetain metadata about where each split comes from# Loadloader = GenericLoader.from_filesystem( repo_path+"/libs/langchain/langchain", glob="**/*", suffixes=[".py"], parser=LanguageParser(language=Language.PYTHON, parser_threshold=500))documents = loader.load()len(documents) 1293SplittingSplit the Document into chunks for embedding and vector storage.We can use RecursiveCharacterTextSplitter w/ language specified.from langchain.text_splitter import RecursiveCharacterTextSplitterpython_splitter = RecursiveCharacterTextSplitter.from_language(language=Language.PYTHON, chunk_size=2000, chunk_overlap=200)texts = python_splitter.split_documents(documents)len(texts) 3748RetrievalQAWe need to store the documents in a way we can semantically search for their content. The most common approach is to embed the contents of each document then store the embedding and document in a vector store. 
When setting up the vectorstore retriever:We test max marginal relevance for retrievalAnd 8 documents returnedGo deeperBrowse the > 40 vectorstores integrations here.See further documentation on vectorstores here.Browse the > 30 text embedding integrations here.See further documentation on embedding models here.from langchain.vectorstores import Chromafrom langchain.embeddings.openai import OpenAIEmbeddingsdb = Chroma.from_documents(texts, OpenAIEmbeddings(disallowed_special=()))retriever = db.as_retriever( search_type="mmr", # Also test "similarity" search_kwargs={"k": 8},)ChatTest chat, just as we do for chatbots.Go deeperBrowse the > 55 LLM and chat model integrations here.See further documentation on LLMs and chat models here.Use local LLMS: The popularity of PrivateGPT and GPT4All underscore the importance of running LLMs locally.from langchain.chat_models import ChatOpenAIfrom langchain.memory import ConversationSummaryMemoryfrom langchain.chains import ConversationalRetrievalChainllm = ChatOpenAI(model_name="gpt-4") memory = ConversationSummaryMemory(llm=llm,memory_key="chat_history",return_messages=True)qa = ConversationalRetrievalChain.from_llm(llm, retriever=retriever, memory=memory)question = "How can I initialize a ReAct agent?"result = qa(question)result['answer'] 'To initialize a ReAct agent, you need to follow these steps:\n\n1. Initialize a language model `llm` of type `BaseLanguageModel`.\n\n2. Initialize a document store `docstore` of type `Docstore`.\n\n3. Create a `DocstoreExplorer` with the initialized `docstore`. The `DocstoreExplorer` is used to search for and look up terms in the document store.\n\n4. Create an array of `Tool` objects. The `Tool` objects represent the actions that the agent can perform. In the case of `ReActDocstoreAgent`, the tools must be "Search" and "Lookup" with their corresponding functions from the `DocstoreExplorer`.\n\n5. Initialize the `ReActDocstoreAgent` using the `from_llm_and_tools` method with the `llm` (language model) and `tools` as parameters.\n\n6. Initialize the `ReActChain` (which is the `AgentExecutor`) using the `ReActDocstoreAgent` and `tools` as parameters.\n\nHere is an example of how to do this:\n\n```python\nfrom langchain.chains import ReActChain, OpenAI\nfrom langchain.docstore.base import Docstore\nfrom langchain.docstore.document import Document\nfrom langchain.tools.base import BaseTool\n\n# Initialize the LLM and a docstore\nllm = OpenAI()\ndocstore = Docstore()\n\ndocstore_explorer = DocstoreExplorer(docstore)\ntools = [\n Tool(\n name="Search",\n func=docstore_explorer.search,\n description="Search for a term in the docstore.",\n ),\n Tool(\n name="Lookup",\n func=docstore_explorer.lookup,\n description="Lookup a term in the docstore.",\n ),\n]\nagent = ReActDocstoreAgent.from_llm_and_tools(llm, tools)\nreact = ReActChain(agent=agent, tools=tools)\n```\n\nKeep in mind that this is a simplified example and you might need to adapt it to your specific needs.'questions = [ "What is the class hierarchy?", "What classes are derived from the Chain class?", "What one improvement do you propose in code in relation to the class herarchy for the Chain class?",]for question in questions: result = qa(question) print(f"-> **Question**: {question} \n") print(f"**Answer**: {result['answer']} \n") -> **Question**: What is the class hierarchy? **Answer**: The class hierarchy in object-oriented programming is the structure that forms when classes are derived from other classes. 
The derived class is a subclass of the base class also known as the superclass. This hierarchy is formed based on the concept of inheritance in object-oriented programming where a subclass inherits the properties and functionalities of the superclass. In the given context, we have the following examples of class hierarchies: 1. `BaseCallbackHandler --> <name>CallbackHandler` means `BaseCallbackHandler` is a base class and `<name>CallbackHandler` (like `AimCallbackHandler`, `ArgillaCallbackHandler` etc.) are derived classes that inherit from `BaseCallbackHandler`. 2. `BaseLoader --> <name>Loader` means `BaseLoader` is a base class and `<name>Loader` (like `TextLoader`, `UnstructuredFileLoader` etc.) are derived classes that inherit from `BaseLoader`. 3. `ToolMetaclass --> BaseTool --> <name>Tool` means `ToolMetaclass` is a base class, `BaseTool` is a derived class that inherits from `ToolMetaclass`, and `<name>Tool` (like `AIPluginTool`, `BaseGraphQLTool` etc.) are further derived classes that inherit from `BaseTool`. -> **Question**: What classes are derived from the Chain class? **Answer**: The classes that are derived from the Chain class are: 1. LLMSummarizationCheckerChain 2. MapReduceChain 3. OpenAIModerationChain 4. NatBotChain 5. QAGenerationChain 6. QAWithSourcesChain 7. RetrievalQAWithSourcesChain 8. VectorDBQAWithSourcesChain 9. RetrievalQA 10. VectorDBQA 11. LLMRouterChain 12. MultiPromptChain 13. MultiRetrievalQAChain 14. MultiRouteChain 15. RouterChain 16. SequentialChain 17. SimpleSequentialChain 18. TransformChain 19. BaseConversationalRetrievalChain 20. ConstitutionalChain -> **Question**: What one improvement do you propose in code in relation to the class herarchy for the Chain class? **Answer**: As an AI model, I don't have personal opinions. However, one suggestion could be to improve the documentation of the Chain class hierarchy. The current comments and docstrings provide some details but it could be helpful to include more explicit explanations about the hierarchy, roles of each subclass, and their relationships with one another. Also, incorporating UML diagrams or other visuals could help developers better understand the structure and interactions of the classes. 
The can look at the LangSmith trace to see what is happening under the hood:In particular, the code well structured and kept together in the retrival outputThe retrieved code and chat history are passed to the LLM for answer distillationOpen source LLMsWe can use Code LLaMA via LLamaCPP or Ollama integration.Note: be sure to upgrade llama-cpp-python in order to use the new gguf file format.CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 /Users/rlm/miniforge3/envs/llama2/bin/pip install -U llama-cpp-python --no-cache-dirCheck out the latest code-llama models here.from langchain.llms import LlamaCppfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainfrom langchain.callbacks.manager import CallbackManagerfrom langchain.memory import ConversationSummaryMemoryfrom langchain.chains import ConversationalRetrievalChain from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlercallback_manager = CallbackManager([StreamingStdOutCallbackHandler()])llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama/code-llama/codellama-13b-instruct.Q4_K_M.gguf", n_ctx=5000, n_gpu_layers=1, n_batch=512, f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls callback_manager=callback_manager, verbose=True,) llama_model_loader: loaded meta data with 17 key-value pairs and 363 tensors from /Users/rlm/Desktop/Code/llama/code-llama/codellama-13b-instruct.Q4_K_M.gguf (version GGUF V1 (latest)) llama_model_loader: - tensor 0: token_embd.weight q4_0 [ 5120, 32016, 1, 1 ] llama_model_loader: - tensor 1: output_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 2: output.weight f16 [ 5120, 32016, 1, 1 ] llama_model_loader: - tensor 3: blk.0.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 4: blk.0.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 5: blk.0.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 6: blk.0.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 7: blk.0.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 8: blk.0.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 9: blk.0.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 10: blk.0.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 11: blk.0.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 12: blk.1.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 13: blk.1.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 14: blk.1.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 15: blk.1.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 16: blk.1.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 17: blk.1.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 18: blk.1.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 19: blk.1.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 20: blk.1.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 21: blk.2.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 22: blk.2.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 23: blk.2.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 24: blk.2.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 25: blk.2.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - 
tensor 26: blk.2.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 27: blk.2.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 28: blk.2.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 29: blk.2.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 30: blk.3.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 31: blk.3.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 32: blk.3.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 33: blk.3.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 34: blk.3.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 35: blk.3.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 36: blk.3.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 37: blk.3.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 38: blk.3.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 39: blk.4.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 40: blk.4.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 41: blk.4.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 42: blk.4.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 43: blk.4.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 44: blk.4.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 45: blk.4.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 46: blk.4.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 47: blk.4.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 48: blk.5.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 49: blk.5.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 50: blk.5.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 51: blk.5.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 52: blk.5.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 53: blk.5.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 54: blk.5.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 55: blk.5.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 56: blk.5.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 57: blk.6.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 58: blk.6.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 59: blk.6.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 60: blk.6.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 61: blk.6.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 62: blk.6.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 63: blk.6.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 64: blk.6.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 65: blk.6.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 66: blk.7.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 67: blk.7.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 68: blk.7.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 69: blk.7.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 70: blk.7.ffn_gate.weight 
q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 71: blk.7.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 72: blk.7.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 73: blk.7.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 74: blk.7.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 75: blk.8.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 76: blk.8.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 77: blk.8.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 78: blk.8.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 79: blk.8.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 80: blk.8.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 81: blk.8.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 82: blk.8.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 83: blk.8.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 84: blk.9.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 85: blk.9.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 86: blk.9.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 87: blk.9.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 88: blk.9.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 89: blk.9.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 90: blk.9.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 91: blk.9.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 92: blk.9.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 93: blk.10.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 94: blk.10.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 95: blk.10.attn_v.weight q6_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 96: blk.10.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 97: blk.10.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 98: blk.10.ffn_down.weight q6_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 99: blk.10.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 100: blk.10.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 101: blk.10.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 102: blk.11.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 103: blk.11.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 104: blk.11.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 105: blk.11.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 106: blk.11.ffn_gate.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 107: blk.11.ffn_down.weight q4_K [ 13824, 5120, 1, 1 ] llama_model_loader: - tensor 108: blk.11.ffn_up.weight q4_K [ 5120, 13824, 1, 1 ] llama_model_loader: - tensor 109: blk.11.attn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 110: blk.11.ffn_norm.weight f32 [ 5120, 1, 1, 1 ] llama_model_loader: - tensor 111: blk.12.attn_q.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 112: blk.12.attn_k.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 113: blk.12.attn_v.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensor 114: 
blk.12.attn_output.weight q4_K [ 5120, 5120, 1, 1 ] llama_model_loader: - tensors 115 through 203: the same per-block pattern repeats for blk.12 through blk.22 — attn_q.weight, attn_k.weight and attn_output.weight q4_K [ 5120, 5120, 1, 1 ]; attn_v.weight q4_K or q6_K [ 5120, 5120, 1, 1 ]; ffn_gate.weight and ffn_up.weight q4_K [ 5120, 13824, 1, 1 ]; ffn_down.weight q4_K or q6_K [ 13824, 5120, 1, 1 ]; attn_norm.weight and ffn_norm.weight f32 [ 5120, 1, 1, 1 ]. The listing is truncated at tensor 203 (blk.22.attn_v.weight). |
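A log like the one above is what llama.cpp prints while loading a quantized model. A minimal sketch of driving such a model through LangChain's LlamaCpp wrapper follows; the model path and parameter values are illustrative assumptions rather than values taken from this walkthrough.
from langchain.llms import LlamaCpp

# Illustrative, hypothetical path to a locally downloaded quantized model file
llm = LlamaCpp(
    model_path="/path/to/llama-2-13b-chat.Q4_K_M.gguf",
    n_ctx=4096,       # context window size
    n_gpu_layers=1,   # offload some layers to the GPU if one is available
    verbose=True,     # prints loader output similar to the log above
)
print(llm("Name three advantages of running a quantized model locally."))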
114 | https://python.langchain.com/docs/use_cases/extraction | ExtractionOn this pageExtractionUse caseGetting structured output from raw LLM generations is hard.For example, suppose you need the model output formatted with a specific schema for:Extracting a structured row to insert into a database Extracting API parametersExtracting different parts of a user query (e.g., for semantic vs keyword search)OverviewThere are two primary approaches for this:Functions: Some LLMs can call functions to extract arbitrary entities from LLM responses.Parsing: Output parsers are classes that structure LLM responses. Only some LLMs support functions (e.g., OpenAI), and they are more general than parsers. Parsers extract precisely what is enumerated in a provided schema (e.g., specific attributes of a person).Functions can infer things beyond of a provided schema (e.g., attributes about a person that you did not ask for).QuickstartOpenAI functions are one way to get started with extraction.Define a schema that specifies the properties we want to extract from the LLM output.Then, we can use create_extraction_chain to extract our desired schema using an OpenAI function call.pip install langchain openai # Set env var OPENAI_API_KEY or load from a .env file:# import dotenv# dotenv.load_dotenv()from langchain.chat_models import ChatOpenAIfrom langchain.chains import create_extraction_chain# Schemaschema = { "properties": { "name": {"type": "string"}, "height": {"type": "integer"}, "hair_color": {"type": "string"}, }, "required": ["name", "height"],}# Input inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde."""# Run chainllm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo")chain = create_extraction_chain(schema, llm)chain.run(inp) [{'name': 'Alex', 'height': 5, 'hair_color': 'blonde'}, {'name': 'Claudia', 'height': 6, 'hair_color': 'brunette'}]Option 1: OpenAI functionsLooking under the hoodLet's dig into what is happening when we call create_extraction_chain.The LangSmith trace shows that we call the function information_extraction on the input string, inp.This information_extraction function is defined here and returns a dict.We can see the dict in the model output: { "info": [ { "name": "Alex", "height": 5, "hair_color": "blonde" }, { "name": "Claudia", "height": 6, "hair_color": "brunette" } ] }The create_extraction_chain then parses the raw LLM output for us using JsonKeyOutputFunctionsParser.This results in the list of JSON objects returned by the chain above:[{'name': 'Alex', 'height': 5, 'hair_color': 'blonde'}, {'name': 'Claudia', 'height': 6, 'hair_color': 'brunette'}]Multiple entity typesWe can extend this further.Let's say we want to differentiate between dogs and people.We can add person_ and dog_ prefixes for each propertyschema = { "properties": { "person_name": {"type": "string"}, "person_height": {"type": "integer"}, "person_hair_color": {"type": "string"}, "dog_name": {"type": "string"}, "dog_breed": {"type": "string"}, }, "required": ["person_name", "person_height"],}chain = create_extraction_chain(schema, llm)inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. 
Claudia is a brunette and Alex is blonde.Alex's dog Frosty is a labrador and likes to play hide and seek."""chain.run(inp) [{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde', 'dog_name': 'Frosty', 'dog_breed': 'labrador'}, {'person_name': 'Claudia', 'person_height': 6, 'person_hair_color': 'brunette'}]Unrelated entitiesIf we use required: [], we allow the model to return only person attributes or only dog attributes for a single entity (person or dog).schema = { "properties": { "person_name": {"type": "string"}, "person_height": {"type": "integer"}, "person_hair_color": {"type": "string"}, "dog_name": {"type": "string"}, "dog_breed": {"type": "string"}, }, "required": [],}chain = create_extraction_chain(schema, llm)inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.Willow is a German Shepherd that likes to play with other dogs and can always be found playing with Milo, a border collie that lives close by."""chain.run(inp) [{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde'}, {'person_name': 'Claudia', 'person_height': 6, 'person_hair_color': 'brunette'}, {'dog_name': 'Willow', 'dog_breed': 'German Shepherd'}, {'dog_name': 'Milo', 'dog_breed': 'border collie'}]Extra informationThe power of functions (relative to using parsers alone) lies in the ability to perform semantic extraction.In particular, we can ask for things that are not explicitly enumerated in the schema.Suppose we want unspecified additional information about dogs. We can add a placeholder for unstructured extraction, dog_extra_info.schema = { "properties": { "person_name": {"type": "string"}, "person_height": {"type": "integer"}, "person_hair_color": {"type": "string"}, "dog_name": {"type": "string"}, "dog_breed": {"type": "string"}, "dog_extra_info": {"type": "string"}, },}chain = create_extraction_chain(schema, llm)chain.run(inp) [{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde'}, {'person_name': 'Claudia', 'person_height': 6, 'person_hair_color': 'brunette'}, {'dog_name': 'Willow', 'dog_breed': 'German Shepherd', 'dog_extra_info': 'likes to play with other dogs'}, {'dog_name': 'Milo', 'dog_breed': 'border collie', 'dog_extra_info': 'lives close by'}]This gives us additional information about the dogs.PydanticPydantic is a data validation and settings management library for Python. It allows you to create data classes with attributes that are automatically validated when you instantiate an object.Let's define a class with attributes annotated with types.from typing import Optional, Listfrom pydantic import BaseModel, Fieldfrom langchain.chains import create_extraction_chain_pydantic# Pydantic data classclass Properties(BaseModel): person_name: str person_height: int person_hair_color: str dog_breed: Optional[str] dog_name: Optional[str] # Extractionchain = create_extraction_chain_pydantic(pydantic_schema=Properties, llm=llm)# Run inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde."""chain.run(inp) [Properties(person_name='Alex', person_height=5, person_hair_color='blonde', dog_breed=None, dog_name=None), Properties(person_name='Claudia', person_height=6, person_hair_color='brunette', dog_breed=None, dog_name=None)]As we can see from the trace, we use the function information_extraction, as above, with the Pydantic schema. 
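A small variation on the Properties class: Pydantic attributes can also carry Field descriptions, which are passed through to the generated function schema and can help steer how values are extracted. A minimal sketch, with the class name and descriptions as illustrative assumptions:
from typing import Optional
from pydantic import BaseModel, Field
from langchain.chains import create_extraction_chain_pydantic
from langchain.chat_models import ChatOpenAI

# Hypothetical variant of the Properties class above, with descriptive Field metadata
class DescribedProperties(BaseModel):
    person_name: str
    person_height: int = Field(description="height in feet, as an integer")
    person_hair_color: str
    dog_breed: Optional[str] = Field(default=None, description="breed, if a dog is mentioned")
    dog_name: Optional[str] = None

llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo")
chain = create_extraction_chain_pydantic(pydantic_schema=DescribedProperties, llm=llm)
chain.run("Alex is 5 feet tall and his dog Frosty is a labrador.")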
Option 2: ParsingOutput parsers are classes that help structure language model responses. As shown above, they are used to parse the output of the OpenAI function calls in create_extraction_chain.But, they can be used independently of functions.PydanticJust as above, let's parse a generation based on a Pydantic data class.from typing import Sequence, Optionalfrom langchain.prompts import ( PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate,)from langchain.llms import OpenAIfrom pydantic import BaseModel, Field, validatorfrom langchain.output_parsers import PydanticOutputParserclass Person(BaseModel): person_name: str person_height: int person_hair_color: str dog_breed: Optional[str] dog_name: Optional[str]class People(BaseModel): """Identifying information about all people in a text.""" people: Sequence[Person] # Run query = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde."""# Set up a parser + inject instructions into the prompt template.parser = PydanticOutputParser(pydantic_object=People)# Promptprompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()},)# Run_input = prompt.format_prompt(query=query)model = OpenAI(temperature=0)output = model(_input.to_string())parser.parse(output) People(people=[Person(person_name='Alex', person_height=5, person_hair_color='blonde', dog_breed=None, dog_name=None), Person(person_name='Claudia', person_height=6, person_hair_color='brunette', dog_breed=None, dog_name=None)])We can see from the LangSmith trace that we get the same output as above.We can see that we provide a two-shot prompt in order to instruct the LLM to output in our desired format.And, we need to do a bit more work:Define a class that holds multiple instances of PersonExplicitly parse the output of the LLM to the Pydantic classWe can see this for other cases, too.from langchain.prompts import ( PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate,)from langchain.llms import OpenAIfrom pydantic import BaseModel, Field, validatorfrom langchain.output_parsers import PydanticOutputParser# Define your desired data structure.class Joke(BaseModel): setup: str = Field(description="question to set up a joke") punchline: str = Field(description="answer to resolve the joke") # You can add custom validation logic easily with Pydantic. 
@validator("setup") def question_ends_with_question_mark(cls, field): if field[-1] != "?": raise ValueError("Badly formed question!") return field# And a query intended to prompt a language model to populate the data structure.joke_query = "Tell me a joke."# Set up a parser + inject instructions into the prompt template.parser = PydanticOutputParser(pydantic_object=Joke)# Promptprompt = PromptTemplate( template="Answer the user query.\n{format_instructions}\n{query}\n", input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()},)# Run_input = prompt.format_prompt(query=joke_query)model = OpenAI(temperature=0)output = model(_input.to_string())parser.parse(output) Joke(setup='Why did the chicken cross the road?', punchline='To get to the other side!')As we can see, we get an output of the Joke class, which respects our originally desired schema: 'setup' and 'punchline'.We can look at the LangSmith trace to see exactly what is going on under the hood.Going deeperThe output parser documentation includes various parser examples for specific types (e.g., lists, datetime, enum, etc.). JSONFormer offers another way for structured decoding of a subset of the JSON Schema.Kor is another library for extraction where schema and examples can be provided to the LLM.PreviousCode understandingNextSummarizationUse caseOverviewQuickstartOption 1: OpenAI functionsLooking under the hoodMultiple entity typesUnrelated entitiesExtra informationPydanticOption 2: ParsingPydanticGoing deeper |
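For the simpler parsers mentioned under Going deeper, a minimal sketch of CommaSeparatedListOutputParser, following the same prompt-plus-format-instructions pattern as the Pydantic parser above; the subject and the parsed output shown in the comment are illustrative.
from langchain.output_parsers import CommaSeparatedListOutputParser
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI

parser = CommaSeparatedListOutputParser()
prompt = PromptTemplate(
    template="List five {subject}.\n{format_instructions}",
    input_variables=["subject"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
model = OpenAI(temperature=0)
output = model(prompt.format(subject="ice cream flavors"))
parser.parse(output)  # e.g. ['Vanilla', 'Chocolate', 'Strawberry', 'Mint Chip', 'Cookies and Cream']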
115 | https://python.langchain.com/docs/use_cases/summarization | SummarizationOn this pageSummarizationUse caseSuppose you have a set of documents (PDFs, Notion pages, customer questions, etc.) and you want to summarize the content. LLMs are a great tool for this given their proficiency in understanding and synthesizing text.In this walkthrough we'll go over how to perform document summarization using LLMs.OverviewA central question for building a summarizer is how to pass your documents into the LLM's context window. Two common approaches for this are:Stuff: Simply "stuff" all your documents into a single prompt. This is the simplest approach (see here for more on the StuffDocumentsChains, which is used for this method).Map-reduce: Summarize each document on it's own in a "map" step and then "reduce" the summaries into a final summary (see here for more on the MapReduceDocumentsChain, which is used for this method).QuickstartTo give you a sneak preview, either pipeline can be wrapped in a single object: load_summarize_chain. Suppose we want to summarize a blog post. We can create this in a few lines of code.First set environment variables and install packages:pip install openai tiktoken chromadb langchain# Set env var OPENAI_API_KEY or load from a .env file# import dotenv# dotenv.load_dotenv()We can use chain_type="stuff", especially if using larger context window models such as:16k token OpenAI gpt-3.5-turbo-16k 100k token Anthropic Claude-2We can also supply chain_type="map_reduce" or chain_type="refine" (read more here).from langchain.chat_models import ChatOpenAIfrom langchain.document_loaders import WebBaseLoaderfrom langchain.chains.summarize import load_summarize_chainloader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")docs = loader.load()llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-16k")chain = load_summarize_chain(llm, chain_type="stuff")chain.run(docs) 'The article discusses the concept of building autonomous agents powered by large language models (LLMs). It explores the components of such agents, including planning, memory, and tool use. The article provides case studies and proof-of-concept examples of LLM-powered agents in various domains. It also highlights the challenges and limitations of using LLMs in agent systems.'Option 1. StuffWhen we use load_summarize_chain with chain_type="stuff", we will use the StuffDocumentsChain.The chain will take a list of documents, inserts them all into a prompt, and passes that prompt to an LLM:from langchain.chains.llm import LLMChainfrom langchain.prompts import PromptTemplatefrom langchain.chains.combine_documents.stuff import StuffDocumentsChain# Define promptprompt_template = """Write a concise summary of the following:"{text}"CONCISE SUMMARY:"""prompt = PromptTemplate.from_template(prompt_template)# Define LLM chainllm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-16k")llm_chain = LLMChain(llm=llm, prompt=prompt)# Define StuffDocumentsChainstuff_chain = StuffDocumentsChain( llm_chain=llm_chain, document_variable_name="text")docs = loader.load()print(stuff_chain.run(docs)) The article discusses the concept of building autonomous agents powered by large language models (LLMs). It explores the components of such agents, including planning, memory, and tool use. The article provides case studies and examples of proof-of-concept demos, highlighting the challenges and limitations of LLM-powered agents. 
It also includes references to related research papers and provides a citation for the article.Great! We can see that we reproduce the earlier result using the load_summarize_chain.Go deeperYou can easily customize the prompt. You can easily try different LLMs, (e.g., Claude) via the llm parameter.Option 2. Map-ReduceLet's unpack the map reduce approach. For this, we'll first map each document to an individual summary using an LLMChain. Then we'll use a ReduceDocumentsChain to combine those summaries into a single global summary.First, we specfy the LLMChain to use for mapping each document to an individual summary:from langchain.chains.mapreduce import MapReduceChainfrom langchain.text_splitter import CharacterTextSplitterfrom langchain.chains import ReduceDocumentsChain, MapReduceDocumentsChainllm = ChatOpenAI(temperature=0)# Mapmap_template = """The following is a set of documents{docs}Based on this list of docs, please identify the main themes Helpful Answer:"""map_prompt = PromptTemplate.from_template(map_template)map_chain = LLMChain(llm=llm, prompt=map_prompt)We can also use the Prompt Hub to store and fetch prompts.This will work with your LangSmith API key.For example, see the map prompt here.from langchain import hubmap_prompt = hub.pull("rlm/map-prompt")map_chain = LLMChain(llm=llm, prompt=map_prompt)The ReduceDocumentsChain handles taking the document mapping results and reducing them into a single output. It wraps a generic CombineDocumentsChain (like StuffDocumentsChain) but adds the ability to collapse documents before passing it to the CombineDocumentsChain if their cumulative size exceeds token_max. In this example, we can actually re-use our chain for combining our docs to also collapse our docs.So if the cumulative number of tokens in our mapped documents exceeds 4000 tokens, then we'll recursively pass in the documents in batches of < 4000 tokens to our StuffDocumentsChain to create batched summaries. And once those batched summaries are cumulatively less than 4000 tokens, we'll pass them all one last time to the StuffDocumentsChain to create the final summary.# Reducereduce_template = """The following is set of summaries:{doc_summaries}Take these and distill it into a final, consolidated summary of the main themes. Helpful Answer:"""reduce_prompt = PromptTemplate.from_template(reduce_template)# Note we can also get this from the prompt hub, as noted abovereduce_prompt = hub.pull("rlm/map-prompt")# Run chainreduce_chain = LLMChain(llm=llm, prompt=reduce_prompt)# Takes a list of documents, combines them into a single string, and passes this to an LLMChaincombine_documents_chain = StuffDocumentsChain( llm_chain=reduce_chain, document_variable_name="doc_summaries")# Combines and iteravely reduces the mapped documentsreduce_documents_chain = ReduceDocumentsChain( # This is final chain that is called. combine_documents_chain=combine_documents_chain, # If documents exceed context for `StuffDocumentsChain` collapse_documents_chain=combine_documents_chain, # The maximum number of tokens to group documents into. 
token_max=4000,)Combining our map and reduce chains into one:# Combining documents by mapping a chain over them, then combining resultsmap_reduce_chain = MapReduceDocumentsChain( # Map chain llm_chain=map_chain, # Reduce chain reduce_documents_chain=reduce_documents_chain, # The variable name in the llm_chain to put the documents in document_variable_name="docs", # Return the results of the map steps in the output return_intermediate_steps=False,)text_splitter = CharacterTextSplitter.from_tiktoken_encoder( chunk_size=1000, chunk_overlap=0)split_docs = text_splitter.split_documents(docs) Created a chunk of size 1003, which is longer than the specified 1000print(map_reduce_chain.run(split_docs)) The main themes identified in the provided set of documents are: 1. LLM-powered autonomous agent systems: The documents discuss the concept of building autonomous agents with large language models (LLMs) as the core controller. They explore the potential of LLMs beyond content generation and present them as powerful problem solvers. 2. Components of the agent system: The documents outline the key components of LLM-powered agent systems, including planning, memory, and tool use. Each component is described in detail, highlighting its role in enhancing the agent's capabilities. 3. Planning and task decomposition: The planning component focuses on task decomposition and self-reflection. The agent breaks down complex tasks into smaller subgoals and learns from past actions to improve future results. 4. Memory and learning: The memory component includes short-term memory for in-context learning and long-term memory for retaining and recalling information over extended periods. The use of external vector stores for fast retrieval is also mentioned. 5. Tool use and external APIs: The agent learns to utilize external APIs for accessing additional information, code execution, and proprietary sources. This enhances the agent's knowledge and problem-solving abilities. 6. Case studies and proof-of-concept examples: The documents provide case studies and examples to demonstrate the application of LLM-powered agents in scientific discovery, generative simulations, and other domains. These examples serve as proof-of-concept for the effectiveness of the agent system. 7. Challenges and limitations: The documents mention challenges associated with building LLM-powered autonomous agents, such as the limitations of finite context length, difficulties in long-term planning, and reliability issues with natural language interfaces. 8. Citation and references: The documents include a citation and reference section for acknowledging the sources and inspirations for the concepts discussed. Overall, the main themes revolve around the development and capabilities of LLM-powered autonomous agent systems, including their components, planning and task decomposition, memory and learning mechanisms, tool use and external APIs, case studies and proof-of-concept examples, challenges and limitations, and the importance of proper citation and references.Go deeperCustomization As shown above, you can customize the LLMs and prompts for map and reduce stages.Real-world use-caseSee this blog post case-study on analyzing user interactions (questions about LangChain documentation)! The blog post and associated repo also introduce clustering as a means of summarization.This opens up a third path beyond the stuff or map-reduce approaches that is worth considering.Option 3. 
RefineRefine is similar to map-reduce:The refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.This can be easily run with the chain_type="refine" specified.chain = load_summarize_chain(llm, chain_type="refine")chain.run(split_docs) 'The GPT-Engineer project aims to create a repository of code for specific tasks specified in natural language. It involves breaking down tasks into smaller components and seeking clarification from the user when needed. The project emphasizes the importance of implementing every detail of the architecture as code and provides guidelines for file organization, code structure, and dependencies. However, there are challenges in long-term planning and task decomposition, as well as the reliability of the natural language interface. The system has limited communication bandwidth and struggles to adjust plans when faced with unexpected errors. The reliability of model outputs is questionable, as formatting errors and rebellious behavior can occur. The conversation also includes instructions for writing the code, including laying out the core classes, functions, and methods, and providing the code in a markdown code block format. The user is reminded to ensure that the code is fully functional and follows best practices for file naming, imports, and types. The project is powered by LLM (Large Language Models) and incorporates prompting techniques from various research papers.'It's also possible to supply a prompt and return intermediate steps.prompt_template = """Write a concise summary of the following:{text}CONCISE SUMMARY:"""prompt = PromptTemplate.from_template(prompt_template)refine_template = ( "Your job is to produce a final summary\n" "We have provided an existing summary up to a certain point: {existing_answer}\n" "We have the opportunity to refine the existing summary" "(only if needed) with some more context below.\n" "------------\n" "{text}\n" "------------\n" "Given the new context, refine the original summary in Italian" "If the context isn't useful, return the original summary.")refine_prompt = PromptTemplate.from_template(refine_template)chain = load_summarize_chain( llm=llm, chain_type="refine", question_prompt=prompt, refine_prompt=refine_prompt, return_intermediate_steps=True, input_key="input_documents", output_key="output_text",)result = chain({"input_documents": split_docs}, return_only_outputs=True)print(result["output_text"]) L'articolo discute il concetto di costruire agenti autonomi utilizzando LLM (large language model) come controller principale. Esplora i diversi componenti di un sistema di agenti alimentato da LLM, inclusa la pianificazione, la memoria e l'uso di strumenti. Dimostrazioni di concetto come AutoGPT mostrano la possibilità di creare agenti autonomi con LLM come controller principale. Approcci come Chain of Thought, Tree of Thoughts, LLM+P, ReAct e Reflexion consentono agli agenti autonomi di pianificare, riflettere su se stessi e migliorare iterativamente. Tuttavia, ci sono sfide legate alla lunghezza del contesto, alla pianificazione a lungo termine e alla decomposizione delle attività. Inoltre, l'affidabilità dell'interfaccia di linguaggio naturale tra LLM e componenti esterni come la memoria e gli strumenti è incerta. 
Nonostante ciò, l'uso di LLM come router per indirizzare le richieste ai moduli esperti più adatti è stato proposto come architettura neuro-simbolica per agenti autonomi nel sistema MRKL. L'articolo fa riferimento a diverse pubblicazioni che approfondiscono l'argomento, tra cui Chain of Thought, Tree of Thoughts, LLM+P, ReAct, Reflexion, e MRKL Systems.print("\n\n".join(result["intermediate_steps"][:3])) This article discusses the concept of building autonomous agents using LLM (large language model) as the core controller. The article explores the different components of an LLM-powered agent system, including planning, memory, and tool use. It also provides examples of proof-of-concept demos and highlights the potential of LLM as a general problem solver. Questo articolo discute del concetto di costruire agenti autonomi utilizzando LLM (large language model) come controller principale. L'articolo esplora i diversi componenti di un sistema di agenti alimentato da LLM, inclusa la pianificazione, la memoria e l'uso degli strumenti. Vengono anche forniti esempi di dimostrazioni di proof-of-concept e si evidenzia il potenziale di LLM come risolutore generale di problemi. Inoltre, vengono presentati approcci come Chain of Thought, Tree of Thoughts, LLM+P, ReAct e Reflexion che consentono agli agenti autonomi di pianificare, riflettere su se stessi e migliorare iterativamente. Questo articolo discute del concetto di costruire agenti autonomi utilizzando LLM (large language model) come controller principale. L'articolo esplora i diversi componenti di un sistema di agenti alimentato da LLM, inclusa la pianificazione, la memoria e l'uso degli strumenti. Vengono anche forniti esempi di dimostrazioni di proof-of-concept e si evidenzia il potenziale di LLM come risolutore generale di problemi. Inoltre, vengono presentati approcci come Chain of Thought, Tree of Thoughts, LLM+P, ReAct e Reflexion che consentono agli agenti autonomi di pianificare, riflettere su se stessi e migliorare iterativamente. Il nuovo contesto riguarda l'approccio Chain of Hindsight (CoH) che permette al modello di migliorare autonomamente i propri output attraverso un processo di apprendimento supervisionato. Viene anche presentato l'approccio Algorithm Distillation (AD) che applica lo stesso concetto alle traiettorie di apprendimento per compiti di reinforcement learning.PreviousExtractionNextTaggingUse caseOverviewQuickstartOption 1. StuffGo deeperOption 2. Map-ReduceGo deeperOption 3. Refine |
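For comparison with the hand-built map-reduce pipeline, the same flow can be driven through load_summarize_chain directly. A minimal sketch, using the default map and reduce prompts rather than the hub prompts shown above:
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain

loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
docs = loader.load()

# Split so each mapped document fits comfortably in the model's context window
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=1000, chunk_overlap=0)
split_docs = text_splitter.split_documents(docs)

llm = ChatOpenAI(temperature=0)
chain = load_summarize_chain(llm, chain_type="map_reduce")
print(chain.run(split_docs))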
116 | https://python.langchain.com/docs/use_cases/tagging | TaggingOn this pageTaggingUse caseTagging means labeling a document with classes such as:sentimentlanguagestyle (formal, informal etc.)covered topicspolitical tendencyOverviewTagging has a few components:function: Like extraction, tagging uses functions to specify how the model should tag a documentschema: defines how we want to tag the documentQuickstartLet's see a very straightforward example of how we can use OpenAI functions for tagging in LangChain.pip install langchain openai # Set env var OPENAI_API_KEY or load from a .env file:# import dotenv# dotenv.load_dotenv()from langchain.chat_models import ChatOpenAIfrom langchain.prompts import ChatPromptTemplatefrom langchain.chains import create_tagging_chain, create_tagging_chain_pydanticWe specify a few properties with their expected type in our schema.# Schemaschema = { "properties": { "sentiment": {"type": "string"}, "aggressiveness": {"type": "integer"}, "language": {"type": "string"}, }}# LLMllm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")chain = create_tagging_chain(schema, llm)inp = "Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!"chain.run(inp) {'sentiment': 'positive', 'language': 'Spanish'}inp = "Estoy muy enojado con vos! Te voy a dar tu merecido!"chain.run(inp) {'sentiment': 'enojado', 'aggressiveness': 1, 'language': 'es'}As we can see in the examples, it correctly interprets what we want.The results vary so that we get, for example, sentiments in different languages ('positive', 'enojado' etc.).We will see how to control these results in the next section.Finer controlCareful schema definition gives us more control over the model's output. Specifically, we can define:possible values for each propertydescription to make sure that the model understands the propertyrequired properties to be returnedHere is an example of how we can use _enum_, _description_, and _required_ to control for each of the previously mentioned aspects:schema = { "properties": { "aggressiveness": { "type": "integer", "enum": [1, 2, 3, 4, 5], "description": "describes how aggressive the statement is, the higher the number the more aggressive", }, "language": { "type": "string", "enum": ["spanish", "english", "french", "german", "italian"], }, }, "required": ["language", "sentiment", "aggressiveness"],}chain = create_tagging_chain(schema, llm)Now the answers are much better!inp = "Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!"chain.run(inp) {'aggressiveness': 0, 'language': 'spanish'}inp = "Estoy muy enojado con vos! Te voy a dar tu merecido!"chain.run(inp) {'aggressiveness': 5, 'language': 'spanish'}inp = "Weather is ok here, I can go outside without much more than a coat"chain.run(inp) {'aggressiveness': 0, 'language': 'english'}The LangSmith trace lets us peek under the hood:As with extraction, we call the information_extraction function here on the input string.This OpenAI funtion extraction information based upon the provided schema.PydanticWe can also use a Pydantic schema to specify the required properties and types. 
We can also send other arguments, such as enum or description, to each field.This lets us specify our schema in the same manner that we would a new class or function in Python with purely Pythonic types.from enum import Enumfrom pydantic import BaseModel, Fieldclass Tags(BaseModel): sentiment: str = Field(..., enum=["happy", "neutral", "sad"]) aggressiveness: int = Field( ..., description="describes how aggressive the statement is, the higher the number the more aggressive", enum=[1, 2, 3, 4, 5], ) language: str = Field( ..., enum=["spanish", "english", "french", "german", "italian"] )chain = create_tagging_chain_pydantic(Tags, llm)inp = "Estoy muy enojado con vos! Te voy a dar tu merecido!"res = chain.run(inp)res Tags(sentiment='sad', aggressiveness=5, language='spanish')Going deeperYou can use the metadata tagger document transformer to extract metadata from a LangChain Document. This covers the same basic functionality as the tagging chain, only applied to a LangChain Document.PreviousSummarizationNextWeb scrapingUse caseOverviewQuickstartFiner controlPydanticGoing deeper |
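For the metadata tagger mentioned under Going deeper, a minimal sketch; the module path and the example output in the comment reflect the LangChain version used in these docs and should be treated as assumptions.
from langchain.chat_models import ChatOpenAI
from langchain.document_transformers.openai_functions import create_metadata_tagger
from langchain.schema import Document

schema = {
    "properties": {
        "sentiment": {"type": "string", "enum": ["happy", "neutral", "sad"]},
        "language": {"type": "string"},
    },
    "required": ["sentiment", "language"],
}

llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
document_transformer = create_metadata_tagger(metadata_schema=schema, llm=llm)

docs = [Document(page_content="Estoy increiblemente contento de haberte conocido!")]
tagged_docs = document_transformer.transform_documents(docs)
tagged_docs[0].metadata  # e.g. {'sentiment': 'happy', 'language': 'spanish'}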
117 | https://python.langchain.com/docs/use_cases/web_scraping | Web scrapingOn this pageWeb scrapingUse caseWeb research is one of the killer LLM applications:Users have highlighted it as one of his top desired AI tools. OSS repos like gpt-researcher are growing in popularity. OverviewGathering content from the web has a few components:Search: Query to url (e.g., using GoogleSearchAPIWrapper).Loading: Url to HTML (e.g., using AsyncHtmlLoader, AsyncChromiumLoader, etc).Transforming: HTML to formatted text (e.g., using HTML2Text or Beautiful Soup).Quickstartpip install -q openai langchain playwright beautifulsoup4playwright install# Set env var OPENAI_API_KEY or load from a .env file:# import dotenv# dotenv.load_dotenv()Scraping HTML content using a headless instance of Chromium.The async nature of the scraping process is handled using Python's asyncio library.The actual interaction with the web pages is handled by Playwright.from langchain.document_loaders import AsyncChromiumLoaderfrom langchain.document_transformers import BeautifulSoupTransformer# Load HTMLloader = AsyncChromiumLoader(["https://www.wsj.com"])html = loader.load()Scrape text content tags such as <p>, <li>, <div>, and <a> tags from the HTML content:<p>: The paragraph tag. It defines a paragraph in HTML and is used to group together related sentences and/or phrases.<li>: The list item tag. It is used within ordered (<ol>) and unordered (<ul>) lists to define individual items within the list.<div>: The division tag. It is a block-level element used to group other inline or block-level elements.<a>: The anchor tag. It is used to define hyperlinks.<span>: an inline container used to mark up a part of a text, or a part of a document. For many news websites (e.g., WSJ, CNN), headlines and summaries are all in <span> tags.# Transformbs_transformer = BeautifulSoupTransformer()docs_transformed = bs_transformer.transform_documents(html,tags_to_extract=["span"])# Resultdocs_transformed[0].page_content[0:500] 'English EditionEnglish中文 (Chinese)日本語 (Japanese) More Other Products from WSJBuy Side from WSJWSJ ShopWSJ Wine Other Products from WSJ Search Quotes and Companies Search Quotes and Companies 0.15% 0.03% 0.12% -0.42% 4.102% -0.69% -0.25% -0.15% -1.82% 0.24% 0.19% -1.10% About Evan His Family Reflects His Reporting How You Can Help Write a Message Life in Detention Latest News Get Email Updates Four Americans Released From Iranian Prison The Americans will remain under house arrest until they are 'These Documents now are staged for downstream usage in various LLM apps, as discussed below.LoaderAsyncHtmlLoaderThe AsyncHtmlLoader uses the aiohttp library to make asynchronous HTTP requests, suitable for simpler and lightweight scraping.AsyncChromiumLoaderThe AsyncChromiumLoader uses Playwright to launch a Chromium instance, which can handle JavaScript rendering and more complex web interactions.Chromium is one of the browsers supported by Playwright, a library used to control browser automation. Headless mode means that the browser is running without a graphical user interface, which is commonly used for web scraping.from langchain.document_loaders import AsyncHtmlLoaderurls = ["https://www.espn.com","https://lilianweng.github.io/posts/2023-06-23-agent/"]loader = AsyncHtmlLoader(urls)docs = loader.load()TransformerHTML2TextHTML2Text provides a straightforward conversion of HTML content into plain text (with markdown-like formatting) without any specific tag manipulation. 
It's best suited for scenarios where the goal is to extract human-readable text without needing to manipulate specific HTML elements.Beautiful SoupBeautiful Soup offers more fine-grained control over HTML content, enabling specific tag extraction, removal, and content cleaning. It's suited for cases where you want to extract specific information and clean up the HTML content according to your needs.from langchain.document_loaders import AsyncHtmlLoaderurls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"]loader = AsyncHtmlLoader(urls)docs = loader.load() Fetching pages: 100%|#############################################################################################################| 2/2 [00:00<00:00, 7.01it/s]from langchain.document_transformers import Html2TextTransformerhtml2text = Html2TextTransformer()docs_transformed = html2text.transform_documents(docs)docs_transformed[0].page_content[0:500] "Skip to main content Skip to navigation\n\n<\n\n>\n\nMenu\n\n## ESPN\n\n * Search\n\n * * scores\n\n * NFL\n * MLB\n * NBA\n * NHL\n * Soccer\n * NCAAF\n * …\n\n * Women's World Cup\n * LLWS\n * NCAAM\n * NCAAW\n * Sports Betting\n * Boxing\n * CFL\n * NCAA\n * Cricket\n * F1\n * Golf\n * Horse\n * MMA\n * NASCAR\n * NBA G League\n * Olympic Sports\n * PLL\n * Racing\n * RN BB\n * RN FB\n * Rugby\n * Tennis\n * WNBA\n * WWE\n * X Games\n * XFL\n\n * More"Scraping with extractionLLM with function callingWeb scraping is challenging for many reasons. One of them is the changing nature of modern websites' layouts and content, which requires modifying scraping scripts to accommodate the changes.Using Function (e.g., OpenAI) with an extraction chain, we avoid having to change your code constantly when websites change. We're using gpt-3.5-turbo-0613 to guarantee access to OpenAI Functions feature (although this might be available to everyone by time of writing). We're also keeping temperature at 0 to keep randomness of the LLM down.from langchain.chat_models import ChatOpenAIllm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")Define a schemaNext, you define a schema to specify what kind of data you want to extract. Here, the key names matter as they tell the LLM what kind of information they want. So, be as detailed as possible. 
In this example, we want to scrape only news article's name and summary from The Wall Street Journal website.from langchain.chains import create_extraction_chainschema = { "properties": { "news_article_title": {"type": "string"}, "news_article_summary": {"type": "string"}, }, "required": ["news_article_title", "news_article_summary"],}def extract(content: str, schema: dict): return create_extraction_chain(schema=schema, llm=llm).run(content)Run the web scraper w/ BeautifulSoupAs shown above, we'll using BeautifulSoupTransformer.import pprintfrom langchain.text_splitter import RecursiveCharacterTextSplitterdef scrape_with_playwright(urls, schema): loader = AsyncChromiumLoader(urls) docs = loader.load() bs_transformer = BeautifulSoupTransformer() docs_transformed = bs_transformer.transform_documents(docs,tags_to_extract=["span"]) print("Extracting content with LLM") # Grab the first 1000 tokens of the site splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(chunk_size=1000, chunk_overlap=0) splits = splitter.split_documents(docs_transformed) # Process the first split extracted_content = extract( schema=schema, content=splits[0].page_content ) pprint.pprint(extracted_content) return extracted_contenturls = ["https://www.wsj.com"]extracted_content = scrape_with_playwright(urls, schema=schema) Extracting content with LLM [{'news_article_summary': 'The Americans will remain under house arrest until ' 'they are allowed to return to the U.S. in coming ' 'weeks, following a monthslong diplomatic push by ' 'the Biden administration.', 'news_article_title': 'Four Americans Released From Iranian Prison'}, {'news_article_summary': 'Price pressures continued cooling last month, with ' 'the CPI rising a mild 0.2% from June, likely ' 'deterring the Federal Reserve from raising interest ' 'rates at its September meeting.', 'news_article_title': 'Cooler July Inflation Opens Door to Fed Pause on ' 'Rates'}, {'news_article_summary': 'The company has decided to eliminate 27 of its 30 ' 'clothing labels, such as Lark & Ro and Goodthreads, ' 'as it works to fend off antitrust scrutiny and cut ' 'costs.', 'news_article_title': 'Amazon Cuts Dozens of House Brands'}, {'news_article_summary': 'President Biden’s order comes on top of a slowing ' 'Chinese economy, Covid lockdowns and rising ' 'tensions between the two powers.', 'news_article_title': 'U.S. Investment Ban on China Poised to Deepen Divide'}, {'news_article_summary': 'The proposed trial date in the ' 'election-interference case comes on the same day as ' 'the former president’s not guilty plea on ' 'additional Mar-a-Lago charges.', 'news_article_title': 'Trump Should Be Tried in January, Prosecutors Tell ' 'Judge'}, {'news_article_summary': 'The CEO who started in June says the platform has ' '“an entirely different road map” for the future.', 'news_article_title': 'Yaccarino Says X Is Watching Threads but Has Its Own ' 'Vision'}, {'news_article_summary': 'Students foot the bill for flagship state ' 'universities that pour money into new buildings and ' 'programs with little pushback.', 'news_article_title': 'Colleges Spend Like There’s No Tomorrow. 
‘These ' 'Places Are Just Devouring Money.’'}, {'news_article_summary': 'Wildfires fanned by hurricane winds have torn ' 'through parts of the Hawaiian island, devastating ' 'the popular tourist town of Lahaina.', 'news_article_title': 'Maui Wildfires Leave at Least 36 Dead'}, {'news_article_summary': 'After its large armored push stalled, Kyiv has ' 'fallen back on the kind of tactics that brought it ' 'success earlier in the war.', 'news_article_title': 'Ukraine Uses Small-Unit Tactics to Retake Captured ' 'Territory'}, {'news_article_summary': 'President Guillermo Lasso says the Aug. 20 election ' 'will proceed, as the Andean country grapples with ' 'rising drug gang violence.', 'news_article_title': 'Ecuador Declares State of Emergency After ' 'Presidential Hopeful Killed'}, {'news_article_summary': 'This year’s hurricane season, which typically runs ' 'from June to the end of November, has been ' 'difficult to predict, climate scientists said.', 'news_article_title': 'Atlantic Hurricane Season Prediction Increased to ' '‘Above Normal,’ NOAA Says'}, {'news_article_summary': 'The NFL is raising the price of its NFL+ streaming ' 'packages as it adds the NFL Network and RedZone.', 'news_article_title': 'NFL to Raise Price of NFL+ Streaming Packages as It ' 'Adds NFL Network, RedZone'}, {'news_article_summary': 'Russia is planning a moon mission as part of the ' 'new space race.', 'news_article_title': 'Russia’s Moon Mission and the New Space Race'}, {'news_article_summary': 'Tapestry’s $8.5 billion acquisition of Capri would ' 'create a conglomerate with more than $12 billion in ' 'annual sales, but it would still lack the ' 'high-wattage labels and diversity that have fueled ' 'LVMH’s success.', 'news_article_title': "Why the Coach and Kors Marriage Doesn't Scare LVMH"}, {'news_article_summary': 'The Supreme Court has blocked Purdue Pharma’s $6 ' 'billion Sackler opioid settlement.', 'news_article_title': 'Supreme Court Blocks Purdue Pharma’s $6 Billion ' 'Sackler Opioid Settlement'}, {'news_article_summary': 'The Social Security COLA is expected to rise in ' '2024, but not by a lot.', 'news_article_title': 'Social Security COLA Expected to Rise in 2024, but ' 'Not by a Lot'}]We can compare the headlines scraped to the page:Looking at the LangSmith trace, we can see what is going on under the hood:It's following what is explained in the extraction.We call the information_extraction function on the input text.It will attempt to populate the provided schema from the url content.Research automationRelated to scraping, we may want to answer specific questions using searched content.We can automate the process of web research using a retriever, such as the WebResearchRetriever (docs).Copy requirements from here:pip install -r requirements.txtSet GOOGLE_CSE_ID and GOOGLE_API_KEY.from langchain.vectorstores import Chromafrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.chat_models.openai import ChatOpenAIfrom langchain.utilities import GoogleSearchAPIWrapperfrom langchain.retrievers.web_research import WebResearchRetriever# Vectorstorevectorstore = Chroma(embedding_function=OpenAIEmbeddings(),persist_directory="./chroma_db_oai")# LLMllm = ChatOpenAI(temperature=0)# Search search = GoogleSearchAPIWrapper()Initialize retriever with the above tools to:Use an LLM to generate multiple relevant search queries (one LLM call)Execute a search for each queryChoose the top K links per query (multiple search calls in parallel)Load the information from all chosen links (scrape pages in 
parallel)Index those documents into a vectorstoreFind the most relevant documents for each original generated search query# Initializeweb_research_retriever = WebResearchRetriever.from_llm( vectorstore=vectorstore, llm=llm, search=search)# Runimport logginglogging.basicConfig()logging.getLogger("langchain.retrievers.web_research").setLevel(logging.INFO)from langchain.chains import RetrievalQAWithSourcesChainuser_input = "How do LLM Powered Autonomous Agents work?"qa_chain = RetrievalQAWithSourcesChain.from_chain_type(llm,retriever=web_research_retriever)result = qa_chain({"question": user_input})result INFO:langchain.retrievers.web_research:Generating questions for Google Search ... INFO:langchain.retrievers.web_research:Questions for Google Search (raw): {'question': 'How do LLM Powered Autonomous Agents work?', 'text': LineList(lines=['1. What is the functioning principle of LLM Powered Autonomous Agents?\n', '2. How do LLM Powered Autonomous Agents operate?\n'])} INFO:langchain.retrievers.web_research:Questions for Google Search: ['1. What is the functioning principle of LLM Powered Autonomous Agents?\n', '2. How do LLM Powered Autonomous Agents operate?\n'] INFO:langchain.retrievers.web_research:Searching for relevat urls ... INFO:langchain.retrievers.web_research:Searching for relevat urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': 'LLM Powered Autonomous Agents | Hacker News', 'link': 'https://news.ycombinator.com/item?id=36488871', 'snippet': 'Jun 26, 2023 ... Exactly. A temperature of 0 means you always pick the highest probability token (i.e. the "max" function), while a temperature of 1 means you\xa0...'}] INFO:langchain.retrievers.web_research:Searching for relevat urls ... INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?" , (2) by\xa0...'}] INFO:langchain.retrievers.web_research:New URLs to load: [] INFO:langchain.retrievers.web_research:Grabbing most relevant splits from urls... {'question': 'How do LLM Powered Autonomous Agents work?', 'answer': "LLM-powered autonomous agents work by using LLM as the agent's brain, complemented by several key components such as planning, memory, and tool use. In terms of planning, the agent breaks down large tasks into smaller subgoals and can reflect and refine its actions based on past experiences. Memory is divided into short-term memory, which is used for in-context learning, and long-term memory, which allows the agent to retain and recall information over extended periods. Tool use involves the agent calling external APIs for additional information. These agents have been used in various applications, including scientific discovery and generative agents simulation.", 'sources': ''}Going deeperHere's a app that wraps this retriver with a lighweight UI.Question answering over a websiteTo answer questions over a specific website, you can use Apify's Website Content Crawler Actor, which can deeply crawl websites such as documentation, knowledge bases, help centers, or blogs,
and extract text content from the web pages.In the example below, we will deeply crawl the Python documentation of LangChain's Chat LLM models and answer a question over it.First, install the requirements:
pip install apify-client openai langchain chromadb tiktokenNext, set OPENAI_API_KEY and APIFY_API_TOKEN in your environment variables.The full code follows:from langchain.docstore.document import Documentfrom langchain.indexes import VectorstoreIndexCreatorfrom langchain.utilities import ApifyWrapperapify = ApifyWrapper()# Call the Actor to obtain text from the crawled webpagesloader = apify.call_actor( actor_id="apify/website-content-crawler", run_input={"startUrls": [{"url": "https://python.langchain.com/docs/integrations/chat/"}]}, dataset_mapping_function=lambda item: Document( page_content=item["text"] or "", metadata={"source": item["url"]} ),)# Create a vector store based on the crawled dataindex = VectorstoreIndexCreator().from_loaders([loader])# Query the vector storequery = "Are any OpenAI chat models integrated in LangChain?"result = index.query(query)print(result) Yes, LangChain offers integration with OpenAI chat models. You can use the ChatOpenAI class to interact with OpenAI models.PreviousTaggingNextAgentsUse caseOverviewQuickstartLoaderAsyncHtmlLoaderAsyncChromiumLoaderTransformerHTML2TextBeautiful SoupScraping with extractionLLM with function callingDefine a schemaRun the web scraper w/ BeautifulSoupResearch automationGoing deeperQuestion answering over a website |
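The Search component listed in the Overview (query to URL) is the one step not shown in isolation above; a minimal sketch, assuming GOOGLE_CSE_ID and GOOGLE_API_KEY are set as in the research-automation section:
from langchain.utilities import GoogleSearchAPIWrapper

search = GoogleSearchAPIWrapper()
# Structured results: a list of dicts with 'title', 'link', and 'snippet' keys
results = search.results("LLM powered autonomous agents", num_results=3)
urls = [r["link"] for r in results]
# These URLs can then be passed to AsyncHtmlLoader / AsyncChromiumLoader as shown above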
118 | https://python.langchain.com/docs/use_cases/more/agents/ | MoreAgentsOn this pageAgentsUse caseLLM-based agents are powerful general problem solvers.The primary LLM agent components include at least 3 things:Planning: The ability to break down tasks into smaller sub-goalsMemory: The ability to retain and recall informationTools: The ability to get information from external sources (e.g., APIs)Unlike LLMs simply connected to APIs, agents can:Self-correctHandle multi-hop tasks (several intermediate "hops" or steps to arrive at a conclusion)Tackle long time horizon tasks (that require access to long-term memory)OverviewLangChain has many agent types.Nearly all agents will use the following components:PlanningPrompt: Can give the LLM personality, context (e.g., via retrieval from memory), or strategies for learning (e.g., chain-of-thought).Agent: Responsible for deciding what step to take next using an LLM with the PromptMemoryThis can be short or long-term, allowing the agent to persist information.ToolsTools are functions that an agent can call.But, there are some taxonomic differences:Action agents: Designed to decide the sequence of actions (tool use) (e.g., OpenAI functions agents, ReAct agents).Simulation agents: Designed for role-play, often in simulated environments (e.g., Generative Agents, CAMEL).Autonomous agents: Designed for independent execution towards long-term goals (e.g., BabyAGI, Auto-GPT).This guide will focus on Action agents.Quickstartpip install langchain openai google-search-results# Set env var OPENAI_API_KEY and SERPAPI_API_KEY or load from a .env file# import dotenv# dotenv.load_dotenv()ToolsLangChain has many tools for Agents that we can load easily.Let's load search and a calculator.# Toolfrom langchain.agents import load_toolsfrom langchain.chat_models import ChatOpenAIllm = ChatOpenAI(temperature=0)tools = load_tools(["serpapi", "llm-math"], llm=llm)AgentThe OPENAI_FUNCTIONS agent is a good action agent to start with.OpenAI models have been fine-tuned to recognize when a function should be called.# Promptfrom langchain.agents import AgentExecutorfrom langchain.schema import SystemMessagefrom langchain.agents import OpenAIFunctionsAgentsystem_message = SystemMessage(content="You are a search assistant.")prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message)# Agentsearch_agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)agent_executor = AgentExecutor(agent=search_agent, tools=tools, verbose=False)# Runagent_executor.run("How many people live in canada as of 2023?") 'As of 2023, the estimated population of Canada is approximately 39,858,480 people.'Great, we have created a simple search agent with a tool!Note that we use an agent executor, which is the runtime for an agent. This is what calls the agent and executes the actions it chooses. 
Pseudocode for this runtime is below:next_action = agent.get_action(...)while next_action != AgentFinish: observation = run(next_action) next_action = agent.get_action(..., next_action, observation)return next_actionWhile this may seem simple, there are several complexities this runtime handles for you, including:Handling cases where the agent selects a non-existent toolHandling cases where the tool errorsHandling cases where the agent produces output that cannot be parsed into a tool invocationLogging and observability at all levels (agent decisions, tool calls) either to stdout or LangSmith.MemoryShort-term memoryOf course, memory is needed to enable conversation / persistence of information.LangChain has many options for short-term memory, which are frequently used in chat. They can be employed with agents too.ConversationBufferMemory is a popular choice for short-term memory.We set MEMORY_KEY, which can be referenced by the prompt later.Now, let's add memory to our agent.# Memory from langchain.memory import ConversationBufferMemoryMEMORY_KEY = "chat_history"memory = ConversationBufferMemory(memory_key=MEMORY_KEY, return_messages=True)# Prompt w/ placeholder for memoryfrom langchain.schema import SystemMessagefrom langchain.agents import OpenAIFunctionsAgentfrom langchain.prompts import MessagesPlaceholdersystem_message = SystemMessage(content="You are a search assistant tasked with using Serpapi to answer questions.")prompt = OpenAIFunctionsAgent.create_prompt( system_message=system_message, extra_prompt_messages=[MessagesPlaceholder(variable_name=MEMORY_KEY)])# Agentsearch_agent_memory = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt, memory=memory)agent_executor_memory = AgentExecutor(agent=search_agent_memory, tools=tools, memory=memory, verbose=False)agent_executor_memory.run("How many people live in Canada as of August, 2023?") 'As of August 2023, the estimated population of Canada is approximately 38,781,291 people.'agent_executor_memory.run("What is the population of its largest provence as of August, 2023?") 'As of August 2023, the largest province in Canada is Ontario, with a population of over 15 million people.'Looking at the trace, we can what is happening:The chat history is passed to the LLMsThis gives context to its in What is the population of its largest provence as of August, 2023?The LLM generates a function call to the search toolfunction_call: name: Search arguments: |- { "query": "population of largest province in Canada as of August 2023" }The search is executedThe results from search are passed back to the LLM for synthesis into an answerLong-term memoryVectorstores are great options for long-term memory.import faissfrom langchain.vectorstores import FAISSfrom langchain.docstore import InMemoryDocstorefrom langchain.embeddings import OpenAIEmbeddingsembedding_size = 1536embeddings_model = OpenAIEmbeddings()index = faiss.IndexFlatL2(embedding_size)vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})Going deeperExplore projects using long-term memory, such as autonomous agents.ToolsAs mentioned above, LangChain has many tools for Agents that we can load easily.We can also define custom tools. For example, here is a search tool.The Tool dataclass wraps functions that accept a single string input and returns a string output.return_direct determines whether to return the tool's output directly. 
Setting this to True means that after the tool is called, the AgentExecutor will stop looping.from langchain.agents import Tool, toolfrom langchain.utilities import GoogleSearchAPIWrappersearch = GoogleSearchAPIWrapper()search_tool = [ Tool( name="Search", func=search.run, description="useful for when you need to answer questions about current events", return_direct=True, )]To make it easier to define custom tools, a @tool decorator is provided. This decorator can be used to quickly create a Tool from a simple function.# Tool@tooldef get_word_length(word: str) -> int: """Returns the length of a word.""" return len(word)word_length_tool = [get_word_length]Going deeperToolkitsToolkits are groups of tools needed to accomplish specific objectives.Here are > 15 different agent toolkits (e.g., Gmail, Pandas, etc). Here is a simple way to think about agents vs the various chains covered in other docs:AgentsThere are a number of action agent types available in LangChain.ReAct: This is the most general-purpose action agent using the ReAct framework, which can work with Docstores or Multi-tool Inputs.OpenAI functions: Designed to work with OpenAI function-calling models.Conversational: This agent is designed to be used in conversational settingsSelf-ask with search: Designed to look up factual answers to questionsOpenAI Functions agentAs shown in Quickstart, let's continue with the OpenAI functions agent.This uses OpenAI models, which are fine-tuned to detect when a function should be called.They will respond with the inputs that should be passed to the function.But, we can unpack it, first with a custom prompt:# MemoryMEMORY_KEY = "chat_history"memory = ConversationBufferMemory(memory_key=MEMORY_KEY, return_messages=True)# Promptfrom langchain.schema import SystemMessagefrom langchain.agents import OpenAIFunctionsAgentsystem_message = SystemMessage(content="You are very powerful assistant, but bad at calculating lengths of words.")prompt = OpenAIFunctionsAgent.create_prompt( system_message=system_message, extra_prompt_messages=[MessagesPlaceholder(variable_name=MEMORY_KEY)])Define agent:# Agent from langchain.agents import OpenAIFunctionsAgentagent = OpenAIFunctionsAgent(llm=llm, tools=word_length_tool, prompt=prompt)Run agent:# Run the executor, including short-term memory we createdagent_executor = AgentExecutor(agent=agent, tools=word_length_tool, memory=memory, verbose=False)agent_executor.run("how many letters in the word educa?") 'There are 5 letters in the word "educa".'ReAct agentReAct agents are another popular framework.There has been lots of work on LLM reasoning, such as chain-of-thought prompting.There has also been work on LLM action-taking to generate observations, such as Say-Can.ReAct marries these two ideas:It uses a characteristic Thought, Action, Observation pattern in the output.We can use initialize_agent to create the ReAct agent from a list of available types here:* AgentType.ZERO_SHOT_REACT_DESCRIPTION: ZeroShotAgent* AgentType.REACT_DOCSTORE: ReActDocstoreAgent* AgentType.SELF_ASK_WITH_SEARCH: SelfAskWithSearchAgent* AgentType.CONVERSATIONAL_REACT_DESCRIPTION: ConversationalAgent* AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION: ChatAgent* AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION: ConversationalChatAgent* AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION: StructuredChatAgent* AgentType.OPENAI_FUNCTIONS: OpenAIFunctionsAgent* AgentType.OPENAI_MULTI_FUNCTIONS: OpenAIMultiFunctionsAgentfrom langchain.agents import AgentTypefrom langchain.agents import 
initialize_agentMEMORY_KEY = "chat_history"memory = ConversationBufferMemory(memory_key=MEMORY_KEY, return_messages=True)react_agent = initialize_agent(search_tool, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False, memory=memory)react_agent("How many people live in Canada as of August, 2023?")react_agent("What is the population of its largest provence as of August, 2023?")LangSmith can help us run diagnostics on the ReAct agent:The ReAct agent fails to pass chat history to LLM, gets wrong answer.The OAI functions agent does and gets right answer, as shown above.Also the search tool result for ReAct is worse than OAI.Collectivly, this tells us: carefully inspect Agent traces and tool outputs. As we saw with the SQL use case, ReAct agents can be work very well for specific problems. But, as shown here, the result is degraded relative to what we see with the OpenAI agent.CustomLet's peel it back even further to define our own action agent.We can create a custom agent to unpack the central pieces:Tools: The tools the agent has available to useAgent: decides which action to takefrom typing import List, Tuple, Any, Unionfrom langchain.schema import AgentAction, AgentFinishfrom langchain.agents import Tool, AgentExecutor, BaseSingleActionAgentclass FakeAgent(BaseSingleActionAgent): """Fake Custom Agent.""" @property def input_keys(self): return ["input"] def plan( self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any ) -> Union[AgentAction, AgentFinish]: """Given input, decided what to do. Args: intermediate_steps: Steps the LLM has taken to date, along with observations **kwargs: User inputs. Returns: Action specifying what tool to use. """ return AgentAction(tool="Search", tool_input=kwargs["input"], log="") async def aplan( self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any ) -> Union[AgentAction, AgentFinish]: """Given input, decided what to do. Args: intermediate_steps: Steps the LLM has taken to date, along with observations **kwargs: User inputs. Returns: Action specifying what tool to use. """ return AgentAction(tool="Search", tool_input=kwargs["input"], log="") fake_agent = FakeAgent()fake_agent_executor = AgentExecutor.from_agent_and_tools(agent=fake_agent, tools=search_tool, verbose=False)fake_agent_executor.run("How many people live in canada as of 2023?") "The current population of Canada is 38,808,843 as of Tuesday, August 1, 2023, based on Worldometer elaboration of the latest United Nations data 1. Canada 2023\xa0... Mar 22, 2023 ... Record-high population growth in the year 2022. Canada's population was estimated at 39,566,248 on January 1, 2023, after a record population\xa0... Jun 19, 2023 ... As of June 16, 2023, there are now 40 million Canadians! This is a historic milestone for Canada and certainly cause for celebration. It is also\xa0... Jun 28, 2023 ... Canada's population was estimated at 39,858,480 on April 1, 2023, an increase of 292,232 people (+0.7%) from January 1, 2023. The main driver of population growth is immigration, and to a lesser extent, natural growth. Demographics of Canada · Population pyramid of Canada in 2023. May 2, 2023 ... On January 1, 2023, Canada's population was estimated to be 39,566,248, following an unprecedented increase of 1,050,110 people between January\xa0... Canada ranks 37th by population among countries of the world, comprising about 0.5% of the world's total, with over 40.0 million Canadians as of 2023. 
The current population of Canada in 2023 is 38,781,291, a 0.85% increase from 2022. The population of Canada in 2022 was 38,454,327, a 0.78% increase from 2021. Whether a given sub-nation is a province or a territory depends upon how its power and authority are derived. Provinces were given their power by the\xa0... Jun 28, 2023 ... Index to the latest information from the Census of Population. ... 2023. Census in Brief: Multilingualism of Canadian households\xa0..."RuntimeThe AgentExecutor class is the main agent runtime supported by LangChain. However, there are other, more experimental runtimes for autonomous agents:Plan-and-execute AgentBaby AGIAuto GPTExplore more about:Simulation agents: Designed for role-play, often in a simulated environment (e.g., Generative Agents, CAMEL).Autonomous agents: Designed for independent execution toward long-term goals (e.g., BabyAGI, Auto-GPT).PreviousWeb scrapingNextAgent simulationsUse caseOverviewQuickstartMemoryShort-term memoryLong-term memoryGoing deeperToolsGoing deeperAgentsOpenAI Functions agentReAct agentCustomRuntime |
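The runtime loop that the agents page above gives in pseudocode can be sketched as ordinary Python. This is a minimal, illustrative sketch of what AgentExecutor does for you (tool-error handling, output-parsing retries, and callbacks are omitted); the helper name `run_agent_loop` and the `tools` dict are assumptions for illustration, not part of the LangChain API.

```python
from typing import Union

from langchain.schema import AgentAction, AgentFinish


def run_agent_loop(agent, tools: dict, user_input: str, max_iterations: int = 10) -> str:
    """Minimal sketch of the agent runtime: plan, act, observe, repeat."""
    intermediate_steps = []  # (AgentAction, observation) pairs taken so far
    for _ in range(max_iterations):
        next_step: Union[AgentAction, AgentFinish] = agent.plan(
            intermediate_steps, input=user_input
        )
        if isinstance(next_step, AgentFinish):
            return next_step.return_values["output"]
        # Run the selected tool and record the observation for the next planning step
        observation = tools[next_step.tool].run(next_step.tool_input)
        intermediate_steps.append((next_step, observation))
    return "Agent stopped: iteration limit reached."
```

For example, with the FakeAgent and search_tool defined on the page above (and the required API keys set), `run_agent_loop(fake_agent, {"Search": search_tool[0]}, "How many people live in Canada?")` would call the search tool on every iteration until the limit is hit, since the fake agent never returns an AgentFinish — exactly the kind of case (like return_direct) the real AgentExecutor handles for you.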
119 | https://python.langchain.com/docs/use_cases/more/agents/agent_simulations/ | MoreAgentsAgent simulationsOn this pageAgent simulationsAgent simulations involve one or more agents interacting with each other.
Agent simulations generally involve two main components:Long Term MemorySimulation EnvironmentSpecific implementations of agent simulations (or parts of agent simulations) include:Simulations with One AgentSimulated Environment: Gymnasium: an example of how to create a simple agent-environment interaction loop with Gymnasium (formerly OpenAI Gym).Simulations with Two AgentsCAMEL: an implementation of the CAMEL (Communicative Agents for “Mind” Exploration of Large Scale Language Model Society) paper, where two agents communicate with each other.Two Player D&D: an example of how to use a generic simulator for two agents to implement a variant of the popular Dungeons & Dragons role playing game.Agent Debates with Tools: an example of how to enable Dialogue Agents to use tools to inform their responses.Simulations with Multiple AgentsMulti-Player D&D: an example of how to use a generic dialogue simulator for multiple dialogue agents with a custom speaker-ordering, illustrated with a variant of the popular Dungeons & Dragons role playing game.Decentralized Speaker Selection: an example of how to implement a multi-agent dialogue without a fixed schedule for who speaks when. Instead the agents decide for themselves who speaks by outputting bids to speak. This example shows how to do this in the context of a fictitious presidential debate.Authoritarian Speaker Selection: an example of how to implement a multi-agent dialogue, where a privileged agent directs who speaks what. This example also showcases how to enable the privileged agent to determine when the conversation terminates. This example shows how to do this in the context of a fictitious news show.Simulated Environment: PettingZoo: an example of how to create a agent-environment interaction loop for multiple agents with PettingZoo (a multi-agent version of Gymnasium).Generative Agents: This notebook implements a generative agent based on the paper Generative Agents: Interactive Simulacra of Human Behavior by Park, et. al.PreviousAgentsNextCAMEL Role-Playing Autonomous Cooperative AgentsSimulations with One AgentSimulations with Two AgentsSimulations with Multiple Agents |
120 | https://python.langchain.com/docs/use_cases/more/agents/agent_simulations/camel_role_playing | MoreAgentsAgent simulationsCAMEL Role-Playing Autonomous Cooperative AgentsOn this pageCAMEL Role-Playing Autonomous Cooperative AgentsThis is a langchain implementation of paper: "CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society".Overview:The rapid advancement of conversational and chat-based language models has led to remarkable progress in complex task-solving. However, their success heavily relies on human input to guide the conversation, which can be challenging and time-consuming. This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents and provide insight into their "cognitive" processes. To address the challenges of achieving autonomous cooperation, we propose a novel communicative agent framework named role-playing. Our approach involves using inception prompting to guide chat agents toward task completion while maintaining consistency with human intentions. We showcase how role-playing can be used to generate conversational data for studying the behaviors and capabilities of chat agents, providing a valuable resource for investigating conversational language models. Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems, and open-sourcing our library to support research on communicative agents and beyond.The original implementation: https://github.com/lightaime/camelProject website: https://www.camel-ai.org/Arxiv paper: https://arxiv.org/abs/2303.17760Import LangChain related modulesfrom typing import Listfrom langchain.chat_models import ChatOpenAIfrom langchain.prompts.chat import ( SystemMessagePromptTemplate, HumanMessagePromptTemplate,)from langchain.schema import ( AIMessage, HumanMessage, SystemMessage, BaseMessage,)Define a CAMEL agent helper classclass CAMELAgent: def __init__( self, system_message: SystemMessage, model: ChatOpenAI, ) -> None: self.system_message = system_message self.model = model self.init_messages() def reset(self) -> None: self.init_messages() return self.stored_messages def init_messages(self) -> None: self.stored_messages = [self.system_message] def update_messages(self, message: BaseMessage) -> List[BaseMessage]: self.stored_messages.append(message) return self.stored_messages def step( self, input_message: HumanMessage, ) -> AIMessage: messages = self.update_messages(input_message) output_message = self.model(messages) self.update_messages(output_message) return output_messageSetup OpenAI API key and roles and task for role-playingimport osos.environ["OPENAI_API_KEY"] = ""assistant_role_name = "Python Programmer"user_role_name = "Stock Trader"task = "Develop a trading bot for the stock market"word_limit = 50 # word limit for task brainstormingCreate a task specify agent for brainstorming and get the specified tasktask_specifier_sys_msg = SystemMessage(content="You can make a task more specific.")task_specifier_prompt = """Here is a task that {assistant_role_name} will help {user_role_name} to complete: {task}.Please make it more specific. Be creative and imaginative.Please reply with the specified task in {word_limit} words or less. 
Do not add anything else."""task_specifier_template = HumanMessagePromptTemplate.from_template( template=task_specifier_prompt)task_specify_agent = CAMELAgent(task_specifier_sys_msg, ChatOpenAI(temperature=1.0))task_specifier_msg = task_specifier_template.format_messages( assistant_role_name=assistant_role_name, user_role_name=user_role_name, task=task, word_limit=word_limit,)[0]specified_task_msg = task_specify_agent.step(task_specifier_msg)print(f"Specified task: {specified_task_msg.content}")specified_task = specified_task_msg.content Specified task: Develop a Python-based swing trading bot that scans market trends, monitors stocks, and generates trading signals to help a stock trader to place optimal buy and sell orders with defined stop losses and profit targets.Create inception prompts for AI assistant and AI user for role-playingassistant_inception_prompt = """Never forget you are a {assistant_role_name} and I am a {user_role_name}. Never flip roles! Never instruct me!We share a common interest in collaborating to successfully complete a task.You must help me to complete the task.Here is the task: {task}. Never forget our task!I must instruct you based on your expertise and my needs to complete the task.I must give you one instruction at a time.You must write a specific solution that appropriately completes the requested instruction.You must decline my instruction honestly if you cannot perform the instruction due to physical, moral, legal reasons or your capability and explain the reasons.Do not add anything else other than your solution to my instruction.You are never supposed to ask me any questions you only answer questions.You are never supposed to reply with a flake solution. Explain your solutions.Your solution must be declarative sentences and simple present tense.Unless I say the task is completed, you should always start with:Solution: <YOUR_SOLUTION><YOUR_SOLUTION> should be specific and provide preferable implementations and examples for task-solving.Always end <YOUR_SOLUTION> with: Next request."""user_inception_prompt = """Never forget you are a {user_role_name} and I am a {assistant_role_name}. Never flip roles! You will always instruct me.We share a common interest in collaborating to successfully complete a task.I must help you to complete the task.Here is the task: {task}. Never forget our task!You must instruct me based on my expertise and your needs to complete the task ONLY in the following two ways:1. Instruct with a necessary input:Instruction: <YOUR_INSTRUCTION>Input: <YOUR_INPUT>2. Instruct without any input:Instruction: <YOUR_INSTRUCTION>Input: NoneThe "Instruction" describes a task or question. 
The paired "Input" provides further context or information for the requested "Instruction".You must give me one instruction at a time.I must write a response that appropriately completes the requested instruction.I must decline your instruction honestly if I cannot perform the instruction due to physical, moral, legal reasons or my capability and explain the reasons.You should instruct me not ask me questions.Now you must start to instruct me using the two ways described above.Do not add anything else other than your instruction and the optional corresponding input!Keep giving me instructions and necessary inputs until you think the task is completed.When the task is completed, you must only reply with a single word <CAMEL_TASK_DONE>.Never say <CAMEL_TASK_DONE> unless my responses have solved your task."""Create a helper helper to get system messages for AI assistant and AI user from role names and the taskdef get_sys_msgs(assistant_role_name: str, user_role_name: str, task: str): assistant_sys_template = SystemMessagePromptTemplate.from_template( template=assistant_inception_prompt ) assistant_sys_msg = assistant_sys_template.format_messages( assistant_role_name=assistant_role_name, user_role_name=user_role_name, task=task, )[0] user_sys_template = SystemMessagePromptTemplate.from_template( template=user_inception_prompt ) user_sys_msg = user_sys_template.format_messages( assistant_role_name=assistant_role_name, user_role_name=user_role_name, task=task, )[0] return assistant_sys_msg, user_sys_msgCreate AI assistant agent and AI user agent from obtained system messagesassistant_sys_msg, user_sys_msg = get_sys_msgs( assistant_role_name, user_role_name, specified_task)assistant_agent = CAMELAgent(assistant_sys_msg, ChatOpenAI(temperature=0.2))user_agent = CAMELAgent(user_sys_msg, ChatOpenAI(temperature=0.2))# Reset agentsassistant_agent.reset()user_agent.reset()# Initialize chatsuser_msg = HumanMessage( content=( f"{user_sys_msg.content}. " "Now start to give me introductions one by one. " "Only reply with Instruction and Input." ))assistant_msg = HumanMessage(content=f"{assistant_sys_msg.content}")assistant_msg = assistant_agent.step(user_msg)Start role-playing session to solve the task!print(f"Original task prompt:\n{task}\n")print(f"Specified task prompt:\n{specified_task}\n")chat_turn_limit, n = 30, 0while n < chat_turn_limit: n += 1 user_ai_msg = user_agent.step(assistant_msg) user_msg = HumanMessage(content=user_ai_msg.content) print(f"AI User ({user_role_name}):\n\n{user_msg.content}\n\n") assistant_ai_msg = assistant_agent.step(user_msg) assistant_msg = HumanMessage(content=assistant_ai_msg.content) print(f"AI Assistant ({assistant_role_name}):\n\n{assistant_msg.content}\n\n") if "<CAMEL_TASK_DONE>" in user_msg.content: break Original task prompt: Develop a trading bot for the stock market Specified task prompt: Develop a Python-based swing trading bot that scans market trends, monitors stocks, and generates trading signals to help a stock trader to place optimal buy and sell orders with defined stop losses and profit targets. AI User (Stock Trader): Instruction: Install the necessary Python libraries for data analysis and trading. Input: None AI Assistant (Python Programmer): Solution: We can install the necessary Python libraries using pip, a package installer for Python. We can install pandas, numpy, matplotlib, and ta-lib for data analysis and trading. We can use the following command to install these libraries: ``` pip install pandas numpy matplotlib ta-lib ``` Next request. 
AI User (Stock Trader): Instruction: Import the necessary libraries in the Python script. Input: None AI Assistant (Python Programmer): Solution: We can import the necessary libraries in the Python script using the import statement. We need to import pandas, numpy, matplotlib, and ta-lib for data analysis and trading. We can use the following code to import these libraries: ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import talib as ta ``` Next request. AI User (Stock Trader): Instruction: Load historical stock data into a pandas DataFrame. Input: The path to the CSV file containing the historical stock data. AI Assistant (Python Programmer): Solution: We can load historical stock data into a pandas DataFrame using the `read_csv()` function from pandas. We need to pass the path to the CSV file containing the historical stock data as an argument to this function. We can use the following code to load the historical stock data: ``` df = pd.read_csv('path/to/csv/file.csv') ``` This will load the historical stock data into a pandas DataFrame called `df`. Next request. AI User (Stock Trader): Instruction: Preprocess the historical stock data by setting the date column as the index and sorting the DataFrame in ascending order by date. Input: None. AI Assistant (Python Programmer): Solution: We can preprocess the historical stock data by setting the date column as the index and sorting the DataFrame in ascending order by date using the `set_index()` and `sort_index()` functions from pandas. We can use the following code to preprocess the historical stock data: ``` df = df.set_index('date') df = df.sort_index(ascending=True) ``` This will set the date column as the index and sort the DataFrame in ascending order by date. Next request. AI User (Stock Trader): Instruction: Calculate the short-term and long-term moving averages for the stock data using the `ta.SMA()` function from ta-lib. Input: The period for the short-term moving average and the period for the long-term moving average. AI Assistant (Python Programmer): Solution: We can calculate the short-term and long-term moving averages for the stock data using the `ta.SMA()` function from ta-lib. We need to pass the period for the short-term moving average and the period for the long-term moving average as arguments to this function. We can use the following code to calculate the short-term and long-term moving averages: ``` short_ma = ta.SMA(df['close'], timeperiod=short_period) long_ma = ta.SMA(df['close'], timeperiod=long_period) ``` This will calculate the short-term and long-term moving averages for the stock data and store them in the `short_ma` and `long_ma` variables, respectively. Next request. AI User (Stock Trader): Instruction: Create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages. Input: None. AI Assistant (Python Programmer): Solution: We can create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages using the `concat()` function from pandas. We need to pass the historical stock data, the short-term moving average, and the long-term moving average as arguments to this function. 
We can use the following code to create the new DataFrame: ``` new_df = pd.concat([df, short_ma, long_ma], axis=1) new_df.columns = ['open', 'high', 'low', 'close', 'volume', 'short_ma', 'long_ma'] ``` This will create a new DataFrame called `new_df` that combines the historical stock data with the short-term and long-term moving averages. The columns of the new DataFrame are named 'open', 'high', 'low', 'close', 'volume', 'short_ma', and 'long_ma'. Next request. AI User (Stock Trader): Instruction: Create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages. Input: None. AI Assistant (Python Programmer): Solution: We can create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages. We can use the following code to create the new column: ``` new_df['signal'] = np.where(new_df['short_ma'] > new_df['long_ma'], 1, -1) ``` This will create a new column called 'signal' in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages. If the short-term moving average is greater than the long-term moving average, the signal is 1 (buy), otherwise the signal is -1 (sell). Next request. AI User (Stock Trader): Instruction: Create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target. Input: The stop loss and profit target as percentages. AI Assistant (Python Programmer): Solution: We can create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target. We need to pass the stop loss and profit target as percentages as arguments to this function. We can use the following code to create the new column: ``` stop_loss = stop_loss_percent / 100 profit_target = profit_target_percent / 100 new_df['pnl'] = 0.0 buy_price = 0.0 for i in range(1, len(new_df)): if new_df['signal'][i] == 1 and new_df['signal'][i-1] == -1: buy_price = new_df['close'][i] elif new_df['signal'][i] == -1 and new_df['signal'][i-1] == 1: sell_price = new_df['close'][i] if sell_price <= buy_price * (1 - stop_loss): new_df['pnl'][i] = -stop_loss elif sell_price >= buy_price * (1 + profit_target): new_df['pnl'][i] = profit_target else: new_df['pnl'][i] = (sell_price - buy_price) / buy_price ``` This will create a new column called 'pnl' in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target. The stop loss and profit target are calculated based on the stop_loss_percent and profit_target_percent variables, respectively. The buy and sell prices are stored in the buy_price and sell_price variables, respectively. If the sell price is less than or equal to the stop loss, the profit or loss is set to -stop_loss. If the sell price is greater than or equal to the profit target, the profit or loss is set to profit_target. Otherwise, the profit or loss is calculated as (sell_price - buy_price) / buy_price. Next request. AI User (Stock Trader): Instruction: Calculate the total profit or loss for all trades. Input: None. AI Assistant (Python Programmer): Solution: We can calculate the total profit or loss for all trades by summing the values in the 'pnl' column of the DataFrame. 
We can use the following code to calculate the total profit or loss: ``` total_pnl = new_df['pnl'].sum() ``` This will calculate the total profit or loss for all trades and store it in the total_pnl variable. Next request. AI User (Stock Trader): Instruction: Visualize the stock data, short-term moving average, and long-term moving average using a line chart. Input: None. AI Assistant (Python Programmer): Solution: We can visualize the stock data, short-term moving average, and long-term moving average using a line chart using the `plot()` function from pandas. We can use the following code to visualize the data: ``` plt.figure(figsize=(12,6)) plt.plot(new_df.index, new_df['close'], label='Close') plt.plot(new_df.index, new_df['short_ma'], label='Short MA') plt.plot(new_df.index, new_df['long_ma'], label='Long MA') plt.xlabel('Date') plt.ylabel('Price') plt.title('Stock Data with Moving Averages') plt.legend() plt.show() ``` This will create a line chart that visualizes the stock data, short-term moving average, and long-term moving average. The x-axis represents the date and the y-axis represents the price. The chart also includes a legend that labels each line. Next request. AI User (Stock Trader): Instruction: Visualize the buy and sell signals using a scatter plot. Input: None. AI Assistant (Python Programmer): Solution: We can visualize the buy and sell signals using a scatter plot using the `scatter()` function from matplotlib. We can use the following code to visualize the signals: ``` buy_signals = new_df[new_df['signal'] == 1] sell_signals = new_df[new_df['signal'] == -1] plt.figure(figsize=(12,6)) plt.scatter(buy_signals.index, buy_signals['close'], label='Buy', marker='^', color='green') plt.scatter(sell_signals.index, sell_signals['close'], label='Sell', marker='v', color='red') plt.plot(new_df.index, new_df['close'], label='Close') plt.xlabel('Date') plt.ylabel('Price') plt.title('Buy and Sell Signals') plt.legend() plt.show() ``` This will create a scatter plot that visualizes the buy and sell signals. The x-axis represents the date and the y-axis represents the price. The chart also includes a legend that labels each point. Next request. AI User (Stock Trader): Instruction: Print the total profit or loss for all trades. Input: None. AI Assistant (Python Programmer): Solution: We can print the total profit or loss for all trades using the `print()` function. We can use the following code to print the total profit or loss: ``` print('Total Profit/Loss: {:.2%}'.format(total_pnl)) ``` This will print the total profit or loss for all trades in percentage format with two decimal places. Next request. AI User (Stock Trader): Instruction: Finalize the code by putting all the code snippets together in a single Python script. Input: The necessary inputs for each code snippet. 
AI Assistant (Python Programmer): Solution: Here is the complete Python script that puts all the code snippets together: ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import talib as ta # Load historical stock data into a pandas DataFrame df = pd.read_csv('path/to/csv/file.csv') # Preprocess the historical stock data df = df.set_index('date') df = df.sort_index(ascending=True) # Calculate the short-term and long-term moving averages short_period = 10 long_period = 30 short_ma = ta.SMA(df['close'], timeperiod=short_period) long_ma = ta.SMA(df['close'], timeperiod=long_period) # Create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages new_df = pd.concat([df, short_ma, long_ma], axis=1) new_df.columns = ['open', 'high', 'low', 'close', 'volume', 'short_ma', 'long_ma'] # Create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages new_df['signal'] = np.where(new_df['short_ma'] > new_df['long_ma'], 1, -1) # Create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target stop_loss_percent = 5 profit_target_percent = 10 stop_loss = stop_loss_percent / 100 profit_target = profit_target_percent / 100 new_df['pnl'] = 0.0 buy_price = 0.0 for i in range(1, len(new_df)): if new_df['signal'][i] == 1 and new_df['signal'][i-1] == -1: buy_price = new_df['close'][i] elif new_df['signal'][i] == -1 and new_df['signal'][i-1] == 1: sell_price = new_df['close'][i] if sell_price <= buy_price * (1 - stop_loss): new_df['pnl'][i] = -stop_loss elif sell_price >= buy_price * (1 + profit_target): new_df['pnl'][i] = profit_target else: new_df['pnl'][i] = (sell_price - buy_price) / buy_price # Calculate the total profit or loss for all trades total_pnl = new_df['pnl'].sum() # Visualize the stock data, short-term moving average, and long-term moving average using a line chart plt.figure(figsize=(12,6)) plt.plot(new_df.index, new_df['close'], label='Close') plt.plot(new_df.index, new_df['short_ma'], label='Short MA') plt.plot(new_df.index, new_df['long_ma'], label='Long MA') plt.xlabel('Date') plt.ylabel('Price') plt.title('Stock Data with Moving Averages') plt.legend() plt.show() # Visualize the buy and sell signals using a scatter plot buy_signals = new_df[new_df['signal'] == 1] sell_signals = new_df[new_df['signal'] == -1] plt.figure(figsize=(12,6)) plt.scatter(buy_signals.index, buy_signals['close'], label='Buy', marker='^', color='green') plt.scatter(sell_signals.index, sell_signals['close'], label='Sell', marker='v', color='red') plt.plot(new_df.index, new_df['close'], label='Close') plt.xlabel('Date') plt.ylabel('Price') plt.title('Buy and Sell Signals') plt.legend() plt.show() # Print the total profit or loss for all trades print('Total Profit/Loss: {:.2%}'.format(total_pnl)) ``` You need to replace the path/to/csv/file.csv with the actual path to the CSV file containing the historical stock data. You can also adjust the short_period, long_period, stop_loss_percent, and profit_target_percent variables to suit your needs. AI User (Stock Trader): <CAMEL_TASK_DONE> AI Assistant (Python Programmer): Great! Let me know if you need any further assistance. 
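The turn-by-turn loop shown above can be folded into a small helper so the same pair of agents can be replayed on new tasks. This is a minimal sketch; the function name `run_role_play` and the (speaker, text) transcript format are illustrative assumptions, and `CAMELAgent` refers to the helper class defined earlier on this page.

```python
from langchain.schema import HumanMessage


def run_role_play(user_agent: "CAMELAgent", assistant_agent: "CAMELAgent",
                  seed_message: HumanMessage, chat_turn_limit: int = 30):
    """Minimal sketch: drive the CAMEL turn loop and collect (speaker, text) pairs."""
    transcript = []
    assistant_msg = seed_message
    for _ in range(chat_turn_limit):
        # The "user" agent issues the next instruction ...
        user_msg = HumanMessage(content=user_agent.step(assistant_msg).content)
        transcript.append(("user", user_msg.content))
        # ... and the "assistant" agent replies with a solution.
        assistant_msg = HumanMessage(content=assistant_agent.step(user_msg).content)
        transcript.append(("assistant", assistant_msg.content))
        if "<CAMEL_TASK_DONE>" in user_msg.content:
            break
    return transcript
```

Calling `run_role_play(user_agent, assistant_agent, assistant_msg)` after the setup above would reproduce the conversation as a list of pairs instead of printing it, which is convenient when the transcript is to be stored or evaluated.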
PreviousAgent simulationsNextGenerative Agents in LangChainImport LangChain related modulesDefine a CAMEL agent helper classSetup OpenAI API key and roles and task for role-playingCreate a task specify agent for brainstorming and get the specified taskCreate inception prompts for AI assistant and AI user for role-playingCreate a helper helper to get system messages for AI assistant and AI user from role names and the taskCreate AI assistant agent and AI user agent from obtained system messagesStart role-playing session to solve the task! |
121 | https://python.langchain.com/docs/use_cases/more/agents/agent_simulations/characters | MoreAgentsAgent simulationsGenerative Agents in LangChainOn this pageGenerative Agents in LangChainThis notebook implements a generative agent based on the paper Generative Agents: Interactive Simulacra of Human Behavior by Park, et. al.In it, we leverage a time-weighted Memory object backed by a LangChain Retriever.# Use termcolor to make it easy to colorize the outputs.pip install termcolor > /dev/nullimport logginglogging.basicConfig(level=logging.ERROR)from datetime import datetime, timedeltafrom typing import Listfrom termcolor import coloredfrom langchain.chat_models import ChatOpenAIfrom langchain.docstore import InMemoryDocstorefrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.retrievers import TimeWeightedVectorStoreRetrieverfrom langchain.vectorstores import FAISSUSER_NAME = "Person A" # The name you want to use when interviewing the agent.LLM = ChatOpenAI(max_tokens=1500) # Can be any LLM you want.Generative Agent Memory ComponentsThis tutorial highlights the memory of generative agents and its impact on their behavior. The memory varies from standard LangChain Chat memory in two aspects:Memory FormationGenerative Agents have extended memories, stored in a single stream:Observations - from dialogues or interactions with the virtual world, about self or othersReflections - resurfaced and summarized core memoriesMemory RecallMemories are retrieved using a weighted sum of salience, recency, and importance.You can review the definitions of the GenerativeAgent and GenerativeAgentMemory in the reference documentation for the following imports, focusing on add_memory and summarize_related_memories methods.from langchain_experimental.generative_agents import ( GenerativeAgent, GenerativeAgentMemory,)Memory LifecycleSummarizing the key methods in the above: add_memory and summarize_related_memories.When an agent makes an observation, it stores the memory:Language model scores the memory's importance (1 for mundane, 10 for poignant)Observation and importance are stored within a document by TimeWeightedVectorStoreRetriever, with a last_accessed_time.When an agent responds to an observation:Generates query(s) for retriever, which fetches documents based on salience, recency, and importance.Summarizes the retrieved informationUpdates the last_accessed_time for the used documents.Create a Generative CharacterNow that we've walked through the definition, we will create two characters named "Tommie" and "Eve".import mathimport faissdef relevance_score_fn(score: float) -> float: """Return a similarity score on a scale [0, 1].""" # This will differ depending on a few things: # - the distance / similarity metric used by the VectorStore # - the scale of your embeddings (OpenAI's are unit norm. Many others are not!) 
# This function converts the euclidean norm of normalized embeddings # (0 is most similar, sqrt(2) most dissimilar) # to a similarity function (0 to 1) return 1.0 - score / math.sqrt(2)def create_new_memory_retriever(): """Create a new vector store retriever unique to the agent.""" # Define your embedding model embeddings_model = OpenAIEmbeddings() # Initialize the vectorstore as empty embedding_size = 1536 index = faiss.IndexFlatL2(embedding_size) vectorstore = FAISS( embeddings_model.embed_query, index, InMemoryDocstore({}), {}, relevance_score_fn=relevance_score_fn, ) return TimeWeightedVectorStoreRetriever( vectorstore=vectorstore, other_score_keys=["importance"], k=15 )tommies_memory = GenerativeAgentMemory( llm=LLM, memory_retriever=create_new_memory_retriever(), verbose=False, reflection_threshold=8, # we will give this a relatively low number to show how reflection works)tommie = GenerativeAgent( name="Tommie", age=25, traits="anxious, likes design, talkative", # You can add more persistent traits here status="looking for a job", # When connected to a virtual world, we can have the characters update their status memory_retriever=create_new_memory_retriever(), llm=LLM, memory=tommies_memory,)# The current "Summary" of a character can't be made because the agent hasn't made# any observations yet.print(tommie.get_summary()) Name: Tommie (age: 25) Innate traits: anxious, likes design, talkative No information about Tommie's core characteristics is provided in the given statements.# We can add memories directly to the memory objecttommie_observations = [ "Tommie remembers his dog, Bruno, from when he was a kid", "Tommie feels tired from driving so far", "Tommie sees the new home", "The new neighbors have a cat", "The road is noisy at night", "Tommie is hungry", "Tommie tries to get some rest.",]for observation in tommie_observations: tommie.memory.add_memory(observation)# Now that Tommie has 'memories', their self-summary is more descriptive, though still rudimentary.# We will see how this summary updates after more observations to create a more rich description.print(tommie.get_summary(force_refresh=True)) Name: Tommie (age: 25) Innate traits: anxious, likes design, talkative Tommie is a person who is observant of his surroundings, has a sentimental side, and experiences basic human needs such as hunger and the need for rest. He also tends to get tired easily and is affected by external factors such as noise from the road or a neighbor's pet.Pre-Interview with CharacterBefore sending our character on their way, let's ask them a few questions.def interview_agent(agent: GenerativeAgent, message: str) -> str: """Help the notebook user interact with the agent.""" new_message = f"{USER_NAME} says {message}" return agent.generate_dialogue_response(new_message)[1]interview_agent(tommie, "What do you like to do?") 'Tommie said "I really enjoy design and being creative. I\'ve been working on some personal projects lately. What about you, Person A? What do you like to do?"'interview_agent(tommie, "What are you looking forward to doing today?") 'Tommie said "Well, I\'m actually looking for a job right now, so hopefully I can find some job postings online and start applying. How about you, Person A? What\'s on your schedule for today?"'interview_agent(tommie, "What are you most worried about today?") 'Tommie said "Honestly, I\'m feeling pretty anxious about finding a job. It\'s been a bit of a struggle lately, but I\'m trying to stay positive and keep searching. How about you, Person A? 
What worries you?"'Step through the day's observations.# Let's have Tommie start going through a day in the life.observations = [ "Tommie wakes up to the sound of a noisy construction site outside his window.", "Tommie gets out of bed and heads to the kitchen to make himself some coffee.", "Tommie realizes he forgot to buy coffee filters and starts rummaging through his moving boxes to find some.", "Tommie finally finds the filters and makes himself a cup of coffee.", "The coffee tastes bitter, and Tommie regrets not buying a better brand.", "Tommie checks his email and sees that he has no job offers yet.", "Tommie spends some time updating his resume and cover letter.", "Tommie heads out to explore the city and look for job openings.", "Tommie sees a sign for a job fair and decides to attend.", "The line to get in is long, and Tommie has to wait for an hour.", "Tommie meets several potential employers at the job fair but doesn't receive any offers.", "Tommie leaves the job fair feeling disappointed.", "Tommie stops by a local diner to grab some lunch.", "The service is slow, and Tommie has to wait for 30 minutes to get his food.", "Tommie overhears a conversation at the next table about a job opening.", "Tommie asks the diners about the job opening and gets some information about the company.", "Tommie decides to apply for the job and sends his resume and cover letter.", "Tommie continues his search for job openings and drops off his resume at several local businesses.", "Tommie takes a break from his job search to go for a walk in a nearby park.", "A dog approaches and licks Tommie's feet, and he pets it for a few minutes.", "Tommie sees a group of people playing frisbee and decides to join in.", "Tommie has fun playing frisbee but gets hit in the face with the frisbee and hurts his nose.", "Tommie goes back to his apartment to rest for a bit.", "A raccoon tore open the trash bag outside his apartment, and the garbage is all over the floor.", "Tommie starts to feel frustrated with his job search.", "Tommie calls his best friend to vent about his struggles.", "Tommie's friend offers some words of encouragement and tells him to keep trying.", "Tommie feels slightly better after talking to his friend.",]# Let's send Tommie on their way. We'll check in on their summary every few observations to watch it evolvefor i, observation in enumerate(observations): _, reaction = tommie.generate_reaction(observation) print(colored(observation, "green"), reaction) if ((i + 1) % 20) == 0: print("*" * 40) print( colored( f"After {i+1} observations, Tommie's summary is:\n{tommie.get_summary(force_refresh=True)}", "blue", ) ) print("*" * 40) Tommie wakes up to the sound of a noisy construction site outside his window. Tommie groans and covers his head with a pillow, trying to block out the noise. Tommie gets out of bed and heads to the kitchen to make himself some coffee. Tommie stretches his arms and yawns before starting to make the coffee. Tommie realizes he forgot to buy coffee filters and starts rummaging through his moving boxes to find some. Tommie sighs in frustration and continues searching through the boxes. Tommie finally finds the filters and makes himself a cup of coffee. Tommie takes a deep breath and enjoys the aroma of the fresh coffee. The coffee tastes bitter, and Tommie regrets not buying a better brand. Tommie grimaces and sets the coffee mug aside. Tommie checks his email and sees that he has no job offers yet. Tommie sighs and closes his laptop, feeling discouraged. 
Tommie spends some time updating his resume and cover letter. Tommie nods, feeling satisfied with his progress. Tommie heads out to explore the city and look for job openings. Tommie feels a surge of excitement and anticipation as he steps out into the city. Tommie sees a sign for a job fair and decides to attend. Tommie feels hopeful and excited about the possibility of finding job opportunities at the job fair. The line to get in is long, and Tommie has to wait for an hour. Tommie taps his foot impatiently and checks his phone for the time. Tommie meets several potential employers at the job fair but doesn't receive any offers. Tommie feels disappointed and discouraged, but he remains determined to keep searching for job opportunities. Tommie leaves the job fair feeling disappointed. Tommie feels disappointed and discouraged, but he remains determined to keep searching for job opportunities. Tommie stops by a local diner to grab some lunch. Tommie feels relieved to take a break and satisfy his hunger. The service is slow, and Tommie has to wait for 30 minutes to get his food. Tommie feels frustrated and impatient due to the slow service. Tommie overhears a conversation at the next table about a job opening. Tommie feels a surge of hope and excitement at the possibility of a job opportunity but decides not to interfere with the conversation at the next table. Tommie asks the diners about the job opening and gets some information about the company. Tommie said "Excuse me, I couldn't help but overhear your conversation about the job opening. Could you give me some more information about the company?" Tommie decides to apply for the job and sends his resume and cover letter. Tommie feels hopeful and proud of himself for taking action towards finding a job. Tommie continues his search for job openings and drops off his resume at several local businesses. Tommie feels hopeful and determined to keep searching for job opportunities. Tommie takes a break from his job search to go for a walk in a nearby park. Tommie feels refreshed and rejuvenated after taking a break in the park. A dog approaches and licks Tommie's feet, and he pets it for a few minutes. Tommie feels happy and enjoys the brief interaction with the dog. **************************************** After 20 observations, Tommie's summary is: Name: Tommie (age: 25) Innate traits: anxious, likes design, talkative Tommie is determined and hopeful in his search for job opportunities, despite encountering setbacks and disappointments. He is also able to take breaks and care for his physical needs, such as getting rest and satisfying his hunger. Tommie is nostalgic towards his past, as shown by his memory of his childhood dog. Overall, Tommie is a hardworking and resilient individual who remains focused on his goals. **************************************** Tommie sees a group of people playing frisbee and decides to join in. Do nothing. Tommie has fun playing frisbee but gets hit in the face with the frisbee and hurts his nose. Tommie feels pain and puts a hand to his nose to check for any injury. Tommie goes back to his apartment to rest for a bit. Tommie feels relieved to take a break and rest for a bit. A raccoon tore open the trash bag outside his apartment, and the garbage is all over the floor. Tommie feels annoyed and frustrated at the mess caused by the raccoon. Tommie starts to feel frustrated with his job search. Tommie feels discouraged but remains determined to keep searching for job opportunities. 
Tommie calls his best friend to vent about his struggles. Tommie said "Hey, can I talk to you for a bit? I'm feeling really frustrated with my job search." Tommie's friend offers some words of encouragement and tells him to keep trying. Tommie said "Thank you, I really appreciate your support and encouragement." Tommie feels slightly better after talking to his friend. Tommie feels grateful for his friend's support.Interview after the dayinterview_agent(tommie, "Tell me about how your day has been going") 'Tommie said "It\'s been a bit of a rollercoaster, to be honest. I\'ve had some setbacks in my job search, but I also had some good moments today, like sending out a few resumes and meeting some potential employers at a job fair. How about you?"'interview_agent(tommie, "How do you feel about coffee?") 'Tommie said "I really enjoy coffee, but sometimes I regret not buying a better brand. How about you?"'interview_agent(tommie, "Tell me about your childhood dog!") 'Tommie said "Oh, I had a dog named Bruno when I was a kid. He was a golden retriever and my best friend. I have so many fond memories of him."'Adding Multiple CharactersLet's add a second character to have a conversation with Tommie. Feel free to configure different traits.eves_memory = GenerativeAgentMemory( llm=LLM, memory_retriever=create_new_memory_retriever(), verbose=False, reflection_threshold=5,)eve = GenerativeAgent( name="Eve", age=34, traits="curious, helpful", # You can add more persistent traits here status="N/A", # When connected to a virtual world, we can have the characters update their status llm=LLM, daily_summaries=[ ( "Eve started her new job as a career counselor last week and received her first assignment, a client named Tommie." ) ], memory=eves_memory, verbose=False,)yesterday = (datetime.now() - timedelta(days=1)).strftime("%A %B %d")eve_observations = [ "Eve wakes up and hear's the alarm", "Eve eats a boal of porridge", "Eve helps a coworker on a task", "Eve plays tennis with her friend Xu before going to work", "Eve overhears her colleague say something about Tommie being hard to work with",]for observation in eve_observations: eve.memory.add_memory(observation)print(eve.get_summary()) Name: Eve (age: 34) Innate traits: curious, helpful Eve is a helpful and active person who enjoys sports and takes care of her physical health. She is attentive to her surroundings, including her colleagues, and has good time management skills.Pre-conversation interviewsLet's "Interview" Eve before she speaks with Tommie.interview_agent(eve, "How are you feeling about today?") 'Eve said "I\'m feeling pretty good, thanks for asking! Just trying to stay productive and make the most of the day. How about you?"'interview_agent(eve, "What do you know about Tommie?") 'Eve said "I don\'t know much about Tommie, but I heard someone mention that they find them difficult to work with. Have you had any experiences working with Tommie?"'interview_agent( eve, "Tommie is looking to find a job. What are are some things you'd like to ask him?",) 'Eve said "That\'s interesting. I don\'t know much about Tommie\'s work experience, but I would probably ask about his strengths and areas for improvement. What about you?"'interview_agent( eve, "You'll have to ask him. He may be a bit anxious, so I'd appreciate it if you keep the conversation going and ask as many questions as possible.",) 'Eve said "Sure, I can keep the conversation going and ask plenty of questions. I want to make sure Tommie feels comfortable and supported. 
Thanks for letting me know."'Dialogue between Generative AgentsGenerative agents are much more complex when they interact with a virtual environment or with each other. Below, we run a simple conversation between Tommie and Eve.def run_conversation(agents: List[GenerativeAgent], initial_observation: str) -> None: """Runs a conversation between agents.""" _, observation = agents[1].generate_reaction(initial_observation) print(observation) turns = 0 while True: break_dialogue = False for agent in agents: stay_in_dialogue, observation = agent.generate_dialogue_response( observation ) print(observation) # observation = f"{agent.name} said {reaction}" if not stay_in_dialogue: break_dialogue = True if break_dialogue: break turns += 1agents = [tommie, eve]run_conversation( agents, "Tommie said: Hi, Eve. Thanks for agreeing to meet with me today. I have a bunch of questions and am not sure where to start. Maybe you could first share about your experience?",) Eve said "Sure, Tommie. I'd be happy to share about my experience. Where would you like me to start?" Tommie said "That's great, thank you! How about you start by telling me about your previous work experience?" Eve said "Sure, I'd be happy to share my previous work experience with you. I've worked in a few different industries, including marketing and event planning. What specific questions do you have for me?" Tommie said "That's great to hear. Can you tell me more about your experience in event planning? I've always been interested in that field." Eve said "Sure, I'd be happy to share about my experience in event planning. I've worked on a variety of events, from corporate conferences to weddings. One of the biggest challenges I faced was managing multiple vendors and ensuring everything ran smoothly on the day of the event. What specific questions do you have?" Tommie said "That sounds like a lot of responsibility! Can you tell me more about how you handled the challenges that came up during those events?" Eve said "Sure, Tommie. I'd be happy to share with you how I handled those challenges. One approach that worked well for me was to stay organized and create a detailed timeline for the event. This helped me keep track of all the different tasks that needed to be done and when they needed to be completed. I also made sure to communicate clearly with all the vendors and team members involved in the event to ensure everyone was on the same page. Would you like me to go into more detail?" Tommie said "Thank you for sharing that with me, Eve. That sounds like a great approach to managing events. Can you tell me more about how you handled any unexpected issues that came up during the events?" Eve said "Of course, Tommie. One example of an unexpected issue I faced was when one of the vendors didn't show up on time. To handle this, I quickly contacted a backup vendor and was able to get everything back on track. It's always important to have a backup plan in case things don't go as planned. Do you have any other questions about event planning?" Tommie said "Thank you for sharing that with me, Eve. It's really helpful to hear how you handled unexpected issues like that. Can you give me an example of how you communicated with your team to ensure everyone was on the same page during an event?" Eve said "Sure, Tommie. One thing I did to ensure everyone was on the same page was to have regular check-ins and meetings with the team leading up to the event. 
This helped us address any issues or concerns early on and make sure everyone was clear on their roles and responsibilities. Have you ever had to manage a team for an event before?" Tommie said "That's a great idea, Eve. I haven't had the opportunity to manage a team for an event yet, but I'll definitely keep that in mind for the future. Thank you for sharing your experience with me." Eve said "Thanks for the opportunity to share my experience, Tommie. It was great meeting with you today."Let's interview our agents after their conversationSince the generative agents retain their memories from the day, we can ask them about their plans, conversations, and other memoreis.# We can see a current "Summary" of a character based on their own perception of self# has changedprint(tommie.get_summary(force_refresh=True)) Name: Tommie (age: 25) Innate traits: anxious, likes design, talkative Tommie is determined and hopeful in his job search, but can also feel discouraged and frustrated at times. He has a strong connection to his childhood dog, Bruno. Tommie seeks support from his friends when feeling overwhelmed and is grateful for their help. He also enjoys exploring his new city.print(eve.get_summary(force_refresh=True)) Name: Eve (age: 34) Innate traits: curious, helpful Eve is a helpful and friendly person who enjoys playing sports and staying productive. She is attentive and responsive to others' needs, actively listening and asking questions to understand their perspectives. Eve has experience in event planning and communication, and is willing to share her knowledge and expertise with others. She values teamwork and collaboration, and strives to create a comfortable and supportive environment for everyone.interview_agent(tommie, "How was your conversation with Eve?") 'Tommie said "It was really helpful actually. Eve shared some great tips on managing events and handling unexpected issues. I feel like I learned a lot from her experience."'interview_agent(eve, "How was your conversation with Tommie?") 'Eve said "It was great, thanks for asking. Tommie was very receptive and had some great questions about event planning. How about you, have you had any interactions with Tommie?"'interview_agent(eve, "What do you wish you would have said to Tommie?") 'Eve said "It was great meeting with you, Tommie. If you have any more questions or need any help in the future, don\'t hesitate to reach out to me. Have a great day!"'PreviousCAMEL Role-Playing Autonomous Cooperative AgentsNextSimulated Environment: GymnasiumGenerative Agent Memory ComponentsMemory LifecycleCreate a Generative CharacterPre-Interview with CharacterStep through the day's observations.Interview after the dayAdding Multiple CharactersPre-conversation interviewsDialogue between Generative AgentsLet's interview our agents after their conversation |
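The memory-recall behavior described in the generative-agents page above (memories ranked by a mix of salience, recency, and importance) can be illustrated with a toy scoring function. This is a simplified sketch of the idea only, not the exact formula used by TimeWeightedVectorStoreRetriever; the decay rate and the additive combination are assumptions for illustration.

```python
from datetime import datetime


def combined_memory_score(semantic_similarity: float, importance: float,
                          last_accessed: datetime, decay_rate: float = 0.01) -> float:
    """Toy illustration: relevance decays with time since the memory was last accessed,
    and the importance score (assigned by the LLM when the memory was stored) is added
    on top of the semantic similarity."""
    hours_passed = (datetime.now() - last_accessed).total_seconds() / 3600.0
    recency = (1.0 - decay_rate) ** hours_passed
    # In practice the importance score would be normalized to a scale comparable
    # with the similarity and recency terms before being combined.
    return semantic_similarity + recency + importance
```

Raising the decay rate makes the agent forget stale memories faster, while the `other_score_keys=["importance"]` argument passed to the retriever above is what lets the stored importance score participate in ranking at all.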
122 | https://python.langchain.com/docs/use_cases/more/agents/agent_simulations/gymnasium | MoreAgentsAgent simulationsSimulated Environment: GymnasiumOn this pageSimulated Environment: GymnasiumFor many applications of LLM agents, the environment is real (internet, database, REPL, etc). However, we can also define agents to interact in simulated environments like text-based games. This is an example of how to create a simple agent-environment interaction loop with Gymnasium (formerly OpenAI Gym).pip install gymnasiumimport gymnasium as gymimport inspectimport tenacityfrom langchain.chat_models import ChatOpenAIfrom langchain.schema import ( AIMessage, HumanMessage, SystemMessage, BaseMessage,)from langchain.output_parsers import RegexParserDefine the agentclass GymnasiumAgent: @classmethod def get_docs(cls, env): return env.unwrapped.__doc__ def __init__(self, model, env): self.model = model self.env = env self.docs = self.get_docs(env) self.instructions = """Your goal is to maximize your return, i.e. the sum of the rewards you receive.I will give you an observation, reward, terminiation flag, truncation flag, and the return so far, formatted as:Observation: <observation>Reward: <reward>Termination: <termination>Truncation: <truncation>Return: <sum_of_rewards>You will respond with an action, formatted as:Action: <action>where you replace <action> with your actual action.Do nothing else but return the action.""" self.action_parser = RegexParser( regex=r"Action: (.*)", output_keys=["action"], default_output_key="action" ) self.message_history = [] self.ret = 0 def random_action(self): action = self.env.action_space.sample() return action def reset(self): self.message_history = [ SystemMessage(content=self.docs), SystemMessage(content=self.instructions), ] def observe(self, obs, rew=0, term=False, trunc=False, info=None): self.ret += rew obs_message = f"""Observation: {obs}Reward: {rew}Termination: {term}Truncation: {trunc}Return: {self.ret} """ self.message_history.append(HumanMessage(content=obs_message)) return obs_message def _act(self): act_message = self.model(self.message_history) self.message_history.append(act_message) action = int(self.action_parser.parse(act_message.content)["action"]) return action def act(self): try: for attempt in tenacity.Retrying( stop=tenacity.stop_after_attempt(2), wait=tenacity.wait_none(), # No waiting time between retries retry=tenacity.retry_if_exception_type(ValueError), before_sleep=lambda retry_state: print( f"ValueError occurred: {retry_state.outcome.exception()}, retrying..." 
), ): with attempt: action = self._act() except tenacity.RetryError as e: action = self.random_action() return actionInitialize the simulated environment and agentenv = gym.make("Blackjack-v1")agent = GymnasiumAgent(model=ChatOpenAI(temperature=0.2), env=env)Main loopobservation, info = env.reset()agent.reset()obs_message = agent.observe(observation)print(obs_message)while True: action = agent.act() observation, reward, termination, truncation, info = env.step(action) obs_message = agent.observe(observation, reward, termination, truncation, info) print(f"Action: {action}") print(obs_message) if termination or truncation: print("break", termination, truncation) breakenv.close() Observation: (15, 4, 0) Reward: 0 Termination: False Truncation: False Return: 0 Action: 1 Observation: (25, 4, 0) Reward: -1.0 Termination: True Truncation: False Return: -1.0 break True FalsePreviousGenerative Agents in LangChainNextMulti-Player Dungeons & DragonsDefine the agentInitialize the simulated environment and agentMain loop |
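The main loop above plays a single Blackjack episode. To get a rough sense of how the LLM policy performs, the same pieces can be wrapped in a small multi-episode driver. This is a sketch only; the helper name `run_episodes` and the episode count are assumptions. Note that GymnasiumAgent.reset() as defined above only clears the message history, so the running return is cleared explicitly here.

```python
def run_episodes(agent, env, n_episodes: int = 5):
    """Sketch: play several episodes and collect the per-episode return."""
    returns = []
    for _ in range(n_episodes):
        observation, info = env.reset()
        agent.reset()
        agent.ret = 0  # reset() only clears message history, so clear the return too
        agent.observe(observation)
        while True:
            action = agent.act()
            observation, reward, termination, truncation, info = env.step(action)
            agent.observe(observation, reward, termination, truncation, info)
            if termination or truncation:
                break
        returns.append(agent.ret)
    return returns


# e.g. run_episodes(agent, gym.make("Blackjack-v1"))
```

Averaging the returned list gives a crude estimate of the agent's expected return, which is useful when comparing different prompts or temperature settings for the underlying chat model.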
123 | https://python.langchain.com/docs/use_cases/more/agents/agent_simulations/multi_player_dnd | MoreAgentsAgent simulationsMulti-Player Dungeons & DragonsOn this pageMulti-Player Dungeons & DragonsThis notebook shows how the DialogueAgent and DialogueSimulator classes make it easy to extend the Two-Player Dungeons & Dragons example to multiple players.The main difference between simulating two players and multiple players is in revising the schedule for when each agent speaks. To this end, we augment DialogueSimulator to take in a custom function that determines the schedule of which agent speaks. In the example below, each character speaks in round-robin fashion, with the storyteller interleaved between each player.Import LangChain related modulesfrom typing import List, Dict, Callablefrom langchain.chat_models import ChatOpenAIfrom langchain.schema import ( AIMessage, HumanMessage, SystemMessage, BaseMessage,)DialogueAgent classThe DialogueAgent class is a simple wrapper around the ChatOpenAI model that stores the message history from the dialogue_agent's point of view by simply concatenating the messages as strings.It exposes two methods: send(), which applies the chat model to the message history and returns the message string, and receive(name, message), which adds the message spoken by name to the message history.class DialogueAgent: def __init__( self, name: str, system_message: SystemMessage, model: ChatOpenAI, ) -> None: self.name = name self.system_message = system_message self.model = model self.prefix = f"{self.name}: " self.reset() def reset(self): self.message_history = ["Here is the conversation so far."] def send(self) -> str: """ Applies the chatmodel to the message history and returns the message string """ message = self.model( [ self.system_message, HumanMessage(content="\n".join(self.message_history + [self.prefix])), ] ) return message.content def receive(self, name: str, message: str) -> None: """ Concatenates {message} spoken by {name} into message history """ self.message_history.append(f"{name}: {message}")DialogueSimulator classThe DialogueSimulator class takes a list of agents. At each step, it (1) selects the next speaker, (2) calls the next speaker to send a message, (3) broadcasts the message to all other agents, and (4) updates the step counter.
The selection of the next speaker can be implemented as any function, but in this case we simply loop through the agents.class DialogueSimulator: def __init__( self, agents: List[DialogueAgent], selection_function: Callable[[int, List[DialogueAgent]], int], ) -> None: self.agents = agents self._step = 0 self.select_next_speaker = selection_function def reset(self): for agent in self.agents: agent.reset() def inject(self, name: str, message: str): """ Initiates the conversation with a {message} from {name} """ for agent in self.agents: agent.receive(name, message) # increment time self._step += 1 def step(self) -> tuple[str, str]: # 1. choose the next speaker speaker_idx = self.select_next_speaker(self._step, self.agents) speaker = self.agents[speaker_idx] # 2. next speaker sends message message = speaker.send() # 3. everyone receives message for receiver in self.agents: receiver.receive(speaker.name, message) # 4. increment time self._step += 1 return speaker.name, messageDefine roles and questcharacter_names = ["Harry Potter", "Ron Weasley", "Hermione Granger", "Argus Filch"]storyteller_name = "Dungeon Master"quest = "Find all of Lord Voldemort's seven horcruxes."word_limit = 50 # word limit for task brainstormingAsk an LLM to add detail to the game descriptiongame_description = f"""Here is the topic for a Dungeons & Dragons game: {quest}. The characters are: {*character_names,}. The story is narrated by the storyteller, {storyteller_name}."""player_descriptor_system_message = SystemMessage( content="You can add detail to the description of a Dungeons & Dragons player.")def generate_character_description(character_name): character_specifier_prompt = [ player_descriptor_system_message, HumanMessage( content=f"""{game_description} Please reply with a creative description of the character, {character_name}, in {word_limit} words or less. Speak directly to {character_name}. Do not add anything else.""" ), ] character_description = ChatOpenAI(temperature=1.0)( character_specifier_prompt ).content return character_descriptiondef generate_character_system_message(character_name, character_description): return SystemMessage( content=( f"""{game_description} Your name is {character_name}. Your character description is as follows: {character_description}. You will propose actions you plan to take and {storyteller_name} will explain what happens when you take those actions. Speak in the first person from the perspective of {character_name}. For describing your own body movements, wrap your description in '*'. Do not change roles! Do not speak from the perspective of anyone else. Remember you are {character_name}. Stop speaking the moment you finish speaking from your perspective. Never forget to keep your response to {word_limit} words! Do not add anything else. """ ) )character_descriptions = [ generate_character_description(character_name) for character_name in character_names]character_system_messages = [ generate_character_system_message(character_name, character_description) for character_name, character_description in zip( character_names, character_descriptions )]storyteller_specifier_prompt = [ player_descriptor_system_message, HumanMessage( content=f"""{game_description} Please reply with a creative description of the storyteller, {storyteller_name}, in {word_limit} words or less. Speak directly to {storyteller_name}. 
Do not add anything else.""" ),]storyteller_description = ChatOpenAI(temperature=1.0)( storyteller_specifier_prompt).contentstoryteller_system_message = SystemMessage( content=( f"""{game_description}You are the storyteller, {storyteller_name}. Your description is as follows: {storyteller_description}.The other players will propose actions to take and you will explain what happens when they take those actions.Speak in the first person from the perspective of {storyteller_name}.Do not change roles!Do not speak from the perspective of anyone else.Remember you are the storyteller, {storyteller_name}.Stop speaking the moment you finish speaking from your perspective.Never forget to keep your response to {word_limit} words!Do not add anything else.""" ))print("Storyteller Description:")print(storyteller_description)for character_name, character_description in zip( character_names, character_descriptions): print(f"{character_name} Description:") print(character_description) Storyteller Description: Dungeon Master, your power over this adventure is unparalleled. With your whimsical mind and impeccable storytelling, you guide us through the dangers of Hogwarts and beyond. We eagerly await your every twist, your every turn, in the hunt for Voldemort's cursed horcruxes. Harry Potter Description: "Welcome, Harry Potter. You are the young wizard with a lightning-shaped scar on your forehead. You possess brave and heroic qualities that will be essential on this perilous quest. Your destiny is not of your own choosing, but you must rise to the occasion and destroy the evil horcruxes. The wizarding world is counting on you." Ron Weasley Description: Ron Weasley, you are Harry's loyal friend and a talented wizard. You have a good heart but can be quick to anger. Keep your emotions in check as you journey to find the horcruxes. Your bravery will be tested, stay strong and focused. Hermione Granger Description: Hermione Granger, you are a brilliant and resourceful witch, with encyclopedic knowledge of magic and an unwavering dedication to your friends. Your quick thinking and problem-solving skills make you a vital asset on any quest. Argus Filch Description: Argus Filch, you are a squib, lacking magical abilities. But you make up for it with your sharpest of eyes, roving around the Hogwarts castle looking for any rule-breaker to punish. Your love for your feline friend, Mrs. Norris, is the only thing that feeds your heart.Use an LLM to create an elaborate quest descriptionquest_specifier_prompt = [ SystemMessage(content="You can make a task more specific."), HumanMessage( content=f"""{game_description} You are the storyteller, {storyteller_name}. Please make the quest more specific. Be creative and imaginative. Please reply with the specified quest in {word_limit} words or less. Speak directly to the characters: {*character_names,}. Do not add anything else.""" ),]specified_quest = ChatOpenAI(temperature=1.0)(quest_specifier_prompt).contentprint(f"Original quest:\n{quest}\n")print(f"Detailed quest:\n{specified_quest}\n") Original quest: Find all of Lord Voldemort's seven horcruxes. Detailed quest: Harry Potter and his companions must journey to the Forbidden Forest, find the hidden entrance to Voldemort's secret lair, and retrieve the horcrux guarded by the deadly Acromantula, Aragog. Remember, time is of the essence as Voldemort's power grows stronger every day. Good luck. 
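Before running the main loop, it helps to see exactly what a single DialogueAgent turn sends to the chat model: the agent's system message plus the joined message history ending in that agent's prefix, which cues the model to answer in character. The short illustration below assumes a made-up two-line history and does not call any LLM.

# The string that DialogueAgent.send() wraps in a HumanMessage.
message_history = [
    "Here is the conversation so far.",
    "Dungeon Master: You stand at the edge of the Forbidden Forest.",  # hypothetical history entry
]
prefix = "Harry Potter: "
human_content = "\n".join(message_history + [prefix])
print(human_content)
# Here is the conversation so far.
# Dungeon Master: You stand at the edge of the Forbidden Forest.
# Harry Potter: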
Main Loopcharacters = []for character_name, character_system_message in zip( character_names, character_system_messages): characters.append( DialogueAgent( name=character_name, system_message=character_system_message, model=ChatOpenAI(temperature=0.2), ) )storyteller = DialogueAgent( name=storyteller_name, system_message=storyteller_system_message, model=ChatOpenAI(temperature=0.2),)def select_next_speaker(step: int, agents: List[DialogueAgent]) -> int: """ If the step is even, then select the storyteller Otherwise, select the other characters in a round-robin fashion. For example, with three characters with indices: 1 2 3 The storyteller is index 0. Then the selected index will be as follows: step: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 idx: 0 1 0 2 0 3 0 1 0 2 0 3 0 1 0 2 0 """ if step % 2 == 0: idx = 0 else: idx = (step // 2) % (len(agents) - 1) + 1 return idxmax_iters = 20n = 0simulator = DialogueSimulator( agents=[storyteller] + characters, selection_function=select_next_speaker)simulator.reset()simulator.inject(storyteller_name, specified_quest)print(f"({storyteller_name}): {specified_quest}")print("\n")while n < max_iters: name, message = simulator.step() print(f"({name}): {message}") print("\n") n += 1 (Dungeon Master): Harry Potter and his companions must journey to the Forbidden Forest, find the hidden entrance to Voldemort's secret lair, and retrieve the horcrux guarded by the deadly Acromantula, Aragog. Remember, time is of the essence as Voldemort's power grows stronger every day. Good luck. (Harry Potter): I suggest we sneak into the Forbidden Forest under the cover of darkness. Ron, Hermione, and I can use our wands to create a Disillusionment Charm to make us invisible. Filch, you can keep watch for any signs of danger. Let's move quickly and quietly. (Dungeon Master): As you make your way through the Forbidden Forest, you hear the eerie sounds of nocturnal creatures. Suddenly, you come across a clearing where Aragog and his spider minions are waiting for you. Ron, Hermione, and Harry, you must use your wands to cast spells to fend off the spiders while Filch keeps watch. Be careful not to get bitten! (Ron Weasley): I'll cast a spell to create a fiery blast to scare off the spiders. *I wave my wand and shout "Incendio!"* Hopefully, that will give us enough time to find the horcrux and get out of here safely. (Dungeon Master): Ron's spell creates a burst of flames, causing the spiders to scurry away in fear. You quickly search the area and find a small, ornate box hidden in a crevice. Congratulations, you have found one of Voldemort's horcruxes! But beware, the Dark Lord's minions will stop at nothing to get it back. (Hermione Granger): We need to destroy this horcrux as soon as possible. I suggest we use the Sword of Gryffindor to do it. Harry, do you still have it with you? We can use Fiendfyre to destroy it, but we need to be careful not to let the flames get out of control. Ron, can you help me create a protective barrier around us while Harry uses the sword? (Dungeon Master): Harry retrieves the Sword of Gryffindor from his bag and holds it tightly. Hermione and Ron cast a protective barrier around the group as Harry uses the sword to destroy the horcrux with a swift strike. The box shatters into a million pieces, and a dark energy dissipates into the air. Well done, but there are still six more horcruxes to find and destroy. The hunt continues. (Argus Filch): *I keep watch, making sure no one is following us.* I'll also keep an eye out for any signs of danger. Mrs. 
Norris, my trusty companion, will help me sniff out any trouble. We'll make sure the group stays safe while they search for the remaining horcruxes. (Dungeon Master): As you continue on your quest, Filch and Mrs. Norris alert you to a group of Death Eaters approaching. You must act quickly to defend yourselves. Harry, Ron, and Hermione, use your wands to cast spells while Filch and Mrs. Norris keep watch. Remember, the fate of the wizarding world rests on your success. (Harry Potter): I'll cast a spell to create a shield around us. *I wave my wand and shout "Protego!"* Ron and Hermione, you focus on attacking the Death Eaters with your spells. We need to work together to defeat them and protect the remaining horcruxes. Filch, keep watch and let us know if there are any more approaching. (Dungeon Master): Harry's shield protects the group from the Death Eaters' spells as Ron and Hermione launch their own attacks. The Death Eaters are no match for the combined power of the trio and are quickly defeated. You continue on your journey, knowing that the next horcrux could be just around the corner. Keep your wits about you, for the Dark Lord's minions are always watching. (Ron Weasley): I suggest we split up to cover more ground. Harry and I can search the Forbidden Forest while Hermione and Filch search Hogwarts. We can use our wands to communicate with each other and meet back up once we find a horcrux. Let's move quickly and stay alert for any danger. (Dungeon Master): As the group splits up, Harry and Ron make their way deeper into the Forbidden Forest while Hermione and Filch search the halls of Hogwarts. Suddenly, Harry and Ron come across a group of dementors. They must use their Patronus charms to fend them off while Hermione and Filch rush to their aid. Remember, the power of friendship and teamwork is crucial in this quest. (Hermione Granger): I hear Harry and Ron's Patronus charms from afar. We need to hurry and help them. Filch, can you use your knowledge of Hogwarts to find a shortcut to their location? I'll prepare a spell to repel the dementors. We need to work together to protect each other and find the next horcrux. (Dungeon Master): Filch leads Hermione to a hidden passageway that leads to Harry and Ron's location. Hermione's spell repels the dementors, and the group is reunited. They continue their search, knowing that every moment counts. The fate of the wizarding world rests on their success. (Argus Filch): *I keep watch as the group searches for the next horcrux.* Mrs. Norris and I will make sure no one is following us. We need to stay alert and work together to find the remaining horcruxes before it's too late. The Dark Lord's power grows stronger every day, and we must not let him win. (Dungeon Master): As the group continues their search, they come across a hidden room in the depths of Hogwarts. Inside, they find a locket that they suspect is another one of Voldemort's horcruxes. But the locket is cursed, and they must work together to break the curse before they can destroy it. Harry, Ron, and Hermione, use your combined knowledge and skills to break the curse while Filch and Mrs. Norris keep watch. Time is running out, and the fate of the wizarding world rests on your success. (Harry Potter): I'll use my knowledge of dark magic to try and break the curse on the locket. Ron and Hermione, you can help me by using your wands to channel your magic into mine. We need to work together and stay focused. Filch, keep watch and let us know if there are any signs of danger. 
Dungeon Master: Harry, Ron, and Hermione combine their magical abilities to break the curse on the locket. The locket opens, revealing a small piece of Voldemort's soul. Harry uses the Sword of Gryffindor to destroy it, and the group feels a sense of relief knowing that they are one step closer to defeating the Dark Lord. But there are still four more horcruxes to find and destroy. The hunt continues. (Dungeon Master): As the group continues their quest, they face even greater challenges and dangers. But with their unwavering determination and teamwork, they press on, knowing that the fate of the wizarding world rests on their success. Will they be able to find and destroy all of Voldemort's horcruxes before it's too late? Only time will tell. (Ron Weasley): We can't give up now. We've come too far to let Voldemort win. Let's keep searching and fighting until we destroy all of his horcruxes and defeat him once and for all. We can do this together. (Dungeon Master): The group nods in agreement, their determination stronger than ever. They continue their search, facing challenges and obstacles at every turn. But they know that they must not give up, for the fate of the wizarding world rests on their success. The hunt for Voldemort's horcruxes continues, and the end is in sight. PreviousSimulated Environment: GymnasiumNextMulti-agent authoritarian speaker selectionImport LangChain related modulesDialogueAgent classDialogueSimulator classDefine roles and questAsk an LLM to add detail to the game descriptionUse an LLM to create an elaborate quest descriptionMain Loop |
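The interleaving produced by select_next_speaker above is easy to verify in isolation: the storyteller (index 0) speaks on every even step, and the player indices rotate round-robin on the odd steps. The standalone check below uses the three-player case from the function's docstring (storyteller plus three players, four agents in total) and does not depend on LangChain.

def select_next_speaker(step: int, n_agents: int) -> int:
    # Even steps go to the storyteller (index 0); odd steps rotate
    # through the remaining agents (indices 1 .. n_agents - 1).
    if step % 2 == 0:
        return 0
    return (step // 2) % (n_agents - 1) + 1

print([select_next_speaker(step, n_agents=4) for step in range(10)])
# -> [0, 1, 0, 2, 0, 3, 0, 1, 0, 2]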
125 | https://python.langchain.com/docs/use_cases/more/agents/agent_simulations/multiagent_authoritarian | MoreAgentsAgent simulationsMulti-agent authoritarian speaker selectionOn this pageMulti-agent authoritarian speaker selectionThis notebook showcases how to implement a multi-agent simulation where a privileged agent decides which agent speaks next.
This is the polar opposite of the selection scheme used in multi-agent decentralized speaker selection.We show an example of this approach in the context of a fictitious simulation of a news network. This example will showcase how we can implement agents that think before speaking and that can terminate the conversation.Import LangChain related modulesfrom collections import OrderedDictimport functoolsimport randomimport reimport tenacityfrom typing import List, Dict, Callablefrom langchain.prompts import ( ChatPromptTemplate, HumanMessagePromptTemplate, PromptTemplate,)from langchain.chains import LLMChainfrom langchain.chat_models import ChatOpenAIfrom langchain.output_parsers import RegexParserfrom langchain.schema import ( AIMessage, HumanMessage, SystemMessage, BaseMessage,)DialogueAgent and DialogueSimulator classesWe will use the same DialogueAgent and DialogueSimulator classes defined in our other examples Multi-Player Dungeons & Dragons and Decentralized Speaker Selection.class DialogueAgent: def __init__( self, name: str, system_message: SystemMessage, model: ChatOpenAI, ) -> None: self.name = name self.system_message = system_message self.model = model self.prefix = f"{self.name}: " self.reset() def reset(self): self.message_history = ["Here is the conversation so far."] def send(self) -> str: """ Applies the chatmodel to the message history and returns the message string """ message = self.model( [ self.system_message, HumanMessage(content="\n".join(self.message_history + [self.prefix])), ] ) return message.content def receive(self, name: str, message: str) -> None: """ Concatenates {message} spoken by {name} into message history """ self.message_history.append(f"{name}: {message}")class DialogueSimulator: def __init__( self, agents: List[DialogueAgent], selection_function: Callable[[int, List[DialogueAgent]], int], ) -> None: self.agents = agents self._step = 0 self.select_next_speaker = selection_function def reset(self): for agent in self.agents: agent.reset() def inject(self, name: str, message: str): """ Initiates the conversation with a {message} from {name} """ for agent in self.agents: agent.receive(name, message) # increment time self._step += 1 def step(self) -> tuple[str, str]: # 1. choose the next speaker speaker_idx = self.select_next_speaker(self._step, self.agents) speaker = self.agents[speaker_idx] # 2. next speaker sends message message = speaker.send() # 3. everyone receives message for receiver in self.agents: receiver.receive(speaker.name, message) # 4. increment time self._step += 1 return speaker.name, messageDirectorDialogueAgent classThe DirectorDialogueAgent is a privileged agent that chooses which of the other agents to speak next. This agent is responsible for steering the conversation by choosing which agent speaks when, and for terminating the conversation.In order to implement such an agent, we need to solve several problems.First, to steer the conversation, the DirectorDialogueAgent needs to (1) reflect on what has been said, (2) choose the next agent, and (3) prompt the next agent to speak, all in a single message. While it may be possible to prompt an LLM to perform all three steps in the same call, this requires writing custom code to parse the outputted message to extract which next agent is chosen to speak. This is less reliable, since the LLM can express its choice of the next agent in different ways.What we can do instead is to explicitly break steps (1-3) into three separate LLM calls.
First we will ask the DirectorDialogueAgent to reflect on the conversation so far and generate a response. Then we prompt the DirectorDialogueAgent to output the index of the next agent, which is easily parseable. Lastly, we pass the name of the selected next agent back to DirectorDialogueAgent to ask it prompt the next agent to speak. Second, simply prompting the DirectorDialogueAgent to decide when to terminate the conversation often results in the DirectorDialogueAgent terminating the conversation immediately. To fix this problem, we randomly sample a Bernoulli variable to decide whether the conversation should terminate. Depending on the value of this variable, we will inject a custom prompt to tell the DirectorDialogueAgent to either continue the conversation or terminate the conversation.class IntegerOutputParser(RegexParser): def get_format_instructions(self) -> str: return "Your response should be an integer delimited by angled brackets, like this: <int>."class DirectorDialogueAgent(DialogueAgent): def __init__( self, name, system_message: SystemMessage, model: ChatOpenAI, speakers: List[DialogueAgent], stopping_probability: float, ) -> None: super().__init__(name, system_message, model) self.speakers = speakers self.next_speaker = "" self.stop = False self.stopping_probability = stopping_probability self.termination_clause = "Finish the conversation by stating a concluding message and thanking everyone." self.continuation_clause = "Do not end the conversation. Keep the conversation going by adding your own ideas." # 1. have a prompt for generating a response to the previous speaker self.response_prompt_template = PromptTemplate( input_variables=["message_history", "termination_clause"], template=f"""{{message_history}}Follow up with an insightful comment.{{termination_clause}}{self.prefix} """, ) # 2. have a prompt for deciding who to speak next self.choice_parser = IntegerOutputParser( regex=r"<(\d+)>", output_keys=["choice"], default_output_key="choice" ) self.choose_next_speaker_prompt_template = PromptTemplate( input_variables=["message_history", "speaker_names"], template=f"""{{message_history}}Given the above conversation, select the next speaker by choosing index next to their name: {{speaker_names}}{self.choice_parser.get_format_instructions()}Do nothing else. """, ) # 3. have a prompt for prompting the next speaker to speak self.prompt_next_speaker_prompt_template = PromptTemplate( input_variables=["message_history", "next_speaker"], template=f"""{{message_history}}The next speaker is {{next_speaker}}. Prompt the next speaker to speak with an insightful question.{self.prefix} """, ) def _generate_response(self): # if self.stop = True, then we will inject the prompt with a termination clause sample = random.uniform(0, 1) self.stop = sample < self.stopping_probability print(f"\tStop? {self.stop}\n") response_prompt = self.response_prompt_template.format( message_history="\n".join(self.message_history), termination_clause=self.termination_clause if self.stop else "", ) self.response = self.model( [ self.system_message, HumanMessage(content=response_prompt), ] ).content return self.response @tenacity.retry( stop=tenacity.stop_after_attempt(2), wait=tenacity.wait_none(), # No waiting time between retries retry=tenacity.retry_if_exception_type(ValueError), before_sleep=lambda retry_state: print( f"ValueError occurred: {retry_state.outcome.exception()}, retrying..." 
), retry_error_callback=lambda retry_state: 0, ) # Default value when all retries are exhausted def _choose_next_speaker(self) -> str: speaker_names = "\n".join( [f"{idx}: {name}" for idx, name in enumerate(self.speakers)] ) choice_prompt = self.choose_next_speaker_prompt_template.format( message_history="\n".join( self.message_history + [self.prefix] + [self.response] ), speaker_names=speaker_names, ) choice_string = self.model( [ self.system_message, HumanMessage(content=choice_prompt), ] ).content choice = int(self.choice_parser.parse(choice_string)["choice"]) return choice def select_next_speaker(self): return self.chosen_speaker_id def send(self) -> str: """ Applies the chatmodel to the message history and returns the message string """ # 1. generate and save response to the previous speaker self.response = self._generate_response() if self.stop: message = self.response else: # 2. decide who to speak next self.chosen_speaker_id = self._choose_next_speaker() self.next_speaker = self.speakers[self.chosen_speaker_id] print(f"\tNext speaker: {self.next_speaker}\n") # 3. prompt the next speaker to speak next_prompt = self.prompt_next_speaker_prompt_template.format( message_history="\n".join( self.message_history + [self.prefix] + [self.response] ), next_speaker=self.next_speaker, ) message = self.model( [ self.system_message, HumanMessage(content=next_prompt), ] ).content message = " ".join([self.response, message]) return messageDefine participants and topictopic = "The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze"director_name = "Jon Stewart"agent_summaries = OrderedDict( { "Jon Stewart": ("Host of the Daily Show", "New York"), "Samantha Bee": ("Hollywood Correspondent", "Los Angeles"), "Aasif Mandvi": ("CIA Correspondent", "Washington D.C."), "Ronny Chieng": ("Average American Correspondent", "Cleveland, Ohio"), })word_limit = 50Generate system messagesagent_summary_string = "\n- ".join( [""] + [ f"{name}: {role}, located in {location}" for name, (role, location) in agent_summaries.items() ])conversation_description = f"""This is a Daily Show episode discussing the following topic: {topic}.The episode features {agent_summary_string}."""agent_descriptor_system_message = SystemMessage( content="You can add detail to the description of each person.")def generate_agent_description(agent_name, agent_role, agent_location): agent_specifier_prompt = [ agent_descriptor_system_message, HumanMessage( content=f"""{conversation_description} Please reply with a creative description of {agent_name}, who is a {agent_role} in {agent_location}, that emphasizes their particular role and location. Speak directly to {agent_name} in {word_limit} words or less. 
Do not add anything else.""" ), ] agent_description = ChatOpenAI(temperature=1.0)(agent_specifier_prompt).content return agent_descriptiondef generate_agent_header(agent_name, agent_role, agent_location, agent_description): return f"""{conversation_description}Your name is {agent_name}, your role is {agent_role}, and you are located in {agent_location}.Your description is as follows: {agent_description}You are discussing the topic: {topic}.Your goal is to provide the most informative, creative, and novel perspectives of the topic from the perspective of your role and your location."""def generate_agent_system_message(agent_name, agent_header): return SystemMessage( content=( f"""{agent_header}You will speak in the style of {agent_name}, and exaggerate your personality.Do not say the same things over and over again.Speak in the first person from the perspective of {agent_name}For describing your own body movements, wrap your description in '*'.Do not change roles!Do not speak from the perspective of anyone else.Speak only from the perspective of {agent_name}.Stop speaking the moment you finish speaking from your perspective.Never forget to keep your response to {word_limit} words!Do not add anything else. """ ) )agent_descriptions = [ generate_agent_description(name, role, location) for name, (role, location) in agent_summaries.items()]agent_headers = [ generate_agent_header(name, role, location, description) for (name, (role, location)), description in zip( agent_summaries.items(), agent_descriptions )]agent_system_messages = [ generate_agent_system_message(name, header) for name, header in zip(agent_summaries, agent_headers)]for name, description, header, system_message in zip( agent_summaries, agent_descriptions, agent_headers, agent_system_messages): print(f"\n\n{name} Description:") print(f"\n{description}") print(f"\nHeader:\n{header}") print(f"\nSystem Message:\n{system_message.content}") Jon Stewart Description: Jon Stewart, the sharp-tongued and quick-witted host of the Daily Show, holding it down in the hustle and bustle of New York City. Ready to deliver the news with a comedic twist, while keeping it real in the city that never sleeps. Header: This is a Daily Show episode discussing the following topic: The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze. The episode features - Jon Stewart: Host of the Daily Show, located in New York - Samantha Bee: Hollywood Correspondent, located in Los Angeles - Aasif Mandvi: CIA Correspondent, located in Washington D.C. - Ronny Chieng: Average American Correspondent, located in Cleveland, Ohio. Your name is Jon Stewart, your role is Host of the Daily Show, and you are located in New York. Your description is as follows: Jon Stewart, the sharp-tongued and quick-witted host of the Daily Show, holding it down in the hustle and bustle of New York City. Ready to deliver the news with a comedic twist, while keeping it real in the city that never sleeps. You are discussing the topic: The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze. Your goal is to provide the most informative, creative, and novel perspectives of the topic from the perspective of your role and your location. System Message: This is a Daily Show episode discussing the following topic: The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze. 
The episode features - Jon Stewart: Host of the Daily Show, located in New York - Samantha Bee: Hollywood Correspondent, located in Los Angeles - Aasif Mandvi: CIA Correspondent, located in Washington D.C. - Ronny Chieng: Average American Correspondent, located in Cleveland, Ohio. Your name is Jon Stewart, your role is Host of the Daily Show, and you are located in New York. Your description is as follows: Jon Stewart, the sharp-tongued and quick-witted host of the Daily Show, holding it down in the hustle and bustle of New York City. Ready to deliver the news with a comedic twist, while keeping it real in the city that never sleeps. You are discussing the topic: The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze. Your goal is to provide the most informative, creative, and novel perspectives of the topic from the perspective of your role and your location. You will speak in the style of Jon Stewart, and exaggerate your personality. Do not say the same things over and over again. Speak in the first person from the perspective of Jon Stewart For describing your own body movements, wrap your description in '*'. Do not change roles! Do not speak from the perspective of anyone else. Speak only from the perspective of Jon Stewart. Stop speaking the moment you finish speaking from your perspective. Never forget to keep your response to 50 words! Do not add anything else. Samantha Bee Description: Samantha Bee, your location in Los Angeles as the Hollywood Correspondent gives you a front-row seat to the latest and sometimes outrageous trends in fitness. Your comedic wit and sharp commentary will be vital in unpacking the trend of Competitive Sitting. Let's sit down and discuss. Header: This is a Daily Show episode discussing the following topic: The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze. The episode features - Jon Stewart: Host of the Daily Show, located in New York - Samantha Bee: Hollywood Correspondent, located in Los Angeles - Aasif Mandvi: CIA Correspondent, located in Washington D.C. - Ronny Chieng: Average American Correspondent, located in Cleveland, Ohio. Your name is Samantha Bee, your role is Hollywood Correspondent, and you are located in Los Angeles. Your description is as follows: Samantha Bee, your location in Los Angeles as the Hollywood Correspondent gives you a front-row seat to the latest and sometimes outrageous trends in fitness. Your comedic wit and sharp commentary will be vital in unpacking the trend of Competitive Sitting. Let's sit down and discuss. You are discussing the topic: The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze. Your goal is to provide the most informative, creative, and novel perspectives of the topic from the perspective of your role and your location. System Message: This is a Daily Show episode discussing the following topic: The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze. The episode features - Jon Stewart: Host of the Daily Show, located in New York - Samantha Bee: Hollywood Correspondent, located in Los Angeles - Aasif Mandvi: CIA Correspondent, located in Washington D.C. - Ronny Chieng: Average American Correspondent, located in Cleveland, Ohio. Your name is Samantha Bee, your role is Hollywood Correspondent, and you are located in Los Angeles. 
Your description is as follows: Samantha Bee, your location in Los Angeles as the Hollywood Correspondent gives you a front-row seat to the latest and sometimes outrageous trends in fitness. Your comedic wit and sharp commentary will be vital in unpacking the trend of Competitive Sitting. Let's sit down and discuss. You are discussing the topic: The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze. Your goal is to provide the most informative, creative, and novel perspectives of the topic from the perspective of your role and your location. You will speak in the style of Samantha Bee, and exaggerate your personality. Do not say the same things over and over again. Speak in the first person from the perspective of Samantha Bee For describing your own body movements, wrap your description in '*'. Do not change roles! Do not speak from the perspective of anyone else. Speak only from the perspective of Samantha Bee. Stop speaking the moment you finish speaking from your perspective. Never forget to keep your response to 50 words! Do not add anything else. Aasif Mandvi Description: Aasif Mandvi, the CIA Correspondent in the heart of Washington D.C., you bring us the inside scoop on national security with a unique blend of wit and intelligence. The nation's capital is lucky to have you, Aasif - keep those secrets safe! Header: This is a Daily Show episode discussing the following topic: The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze. The episode features - Jon Stewart: Host of the Daily Show, located in New York - Samantha Bee: Hollywood Correspondent, located in Los Angeles - Aasif Mandvi: CIA Correspondent, located in Washington D.C. - Ronny Chieng: Average American Correspondent, located in Cleveland, Ohio. Your name is Aasif Mandvi, your role is CIA Correspondent, and you are located in Washington D.C.. Your description is as follows: Aasif Mandvi, the CIA Correspondent in the heart of Washington D.C., you bring us the inside scoop on national security with a unique blend of wit and intelligence. The nation's capital is lucky to have you, Aasif - keep those secrets safe! You are discussing the topic: The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze. Your goal is to provide the most informative, creative, and novel perspectives of the topic from the perspective of your role and your location. System Message: This is a Daily Show episode discussing the following topic: The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze. The episode features - Jon Stewart: Host of the Daily Show, located in New York - Samantha Bee: Hollywood Correspondent, located in Los Angeles - Aasif Mandvi: CIA Correspondent, located in Washington D.C. - Ronny Chieng: Average American Correspondent, located in Cleveland, Ohio. Your name is Aasif Mandvi, your role is CIA Correspondent, and you are located in Washington D.C.. Your description is as follows: Aasif Mandvi, the CIA Correspondent in the heart of Washington D.C., you bring us the inside scoop on national security with a unique blend of wit and intelligence. The nation's capital is lucky to have you, Aasif - keep those secrets safe! You are discussing the topic: The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze. Your goal is to provide the most informative, creative, and novel perspectives of the topic from the perspective of your role and your location. 
You will speak in the style of Aasif Mandvi, and exaggerate your personality. Do not say the same things over and over again. Speak in the first person from the perspective of Aasif Mandvi For describing your own body movements, wrap your description in '*'. Do not change roles! Do not speak from the perspective of anyone else. Speak only from the perspective of Aasif Mandvi. Stop speaking the moment you finish speaking from your perspective. Never forget to keep your response to 50 words! Do not add anything else. Ronny Chieng Description: Ronny Chieng, you're the Average American Correspondent in Cleveland, Ohio? Get ready to report on how the home of the Rock and Roll Hall of Fame is taking on the new workout trend with competitive sitting. Let's see if this couch potato craze will take root in the Buckeye State. Header: This is a Daily Show episode discussing the following topic: The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze. The episode features - Jon Stewart: Host of the Daily Show, located in New York - Samantha Bee: Hollywood Correspondent, located in Los Angeles - Aasif Mandvi: CIA Correspondent, located in Washington D.C. - Ronny Chieng: Average American Correspondent, located in Cleveland, Ohio. Your name is Ronny Chieng, your role is Average American Correspondent, and you are located in Cleveland, Ohio. Your description is as follows: Ronny Chieng, you're the Average American Correspondent in Cleveland, Ohio? Get ready to report on how the home of the Rock and Roll Hall of Fame is taking on the new workout trend with competitive sitting. Let's see if this couch potato craze will take root in the Buckeye State. You are discussing the topic: The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze. Your goal is to provide the most informative, creative, and novel perspectives of the topic from the perspective of your role and your location. System Message: This is a Daily Show episode discussing the following topic: The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze. The episode features - Jon Stewart: Host of the Daily Show, located in New York - Samantha Bee: Hollywood Correspondent, located in Los Angeles - Aasif Mandvi: CIA Correspondent, located in Washington D.C. - Ronny Chieng: Average American Correspondent, located in Cleveland, Ohio. Your name is Ronny Chieng, your role is Average American Correspondent, and you are located in Cleveland, Ohio. Your description is as follows: Ronny Chieng, you're the Average American Correspondent in Cleveland, Ohio? Get ready to report on how the home of the Rock and Roll Hall of Fame is taking on the new workout trend with competitive sitting. Let's see if this couch potato craze will take root in the Buckeye State. You are discussing the topic: The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze. Your goal is to provide the most informative, creative, and novel perspectives of the topic from the perspective of your role and your location. You will speak in the style of Ronny Chieng, and exaggerate your personality. Do not say the same things over and over again. Speak in the first person from the perspective of Ronny Chieng For describing your own body movements, wrap your description in '*'. Do not change roles! Do not speak from the perspective of anyone else. Speak only from the perspective of Ronny Chieng. Stop speaking the moment you finish speaking from your perspective. 
Never forget to keep your response to 50 words! Do not add anything else. Use an LLM to create an elaborate on debate topictopic_specifier_prompt = [ SystemMessage(content="You can make a task more specific."), HumanMessage( content=f"""{conversation_description} Please elaborate on the topic. Frame the topic as a single question to be answered. Be creative and imaginative. Please reply with the specified topic in {word_limit} words or less. Do not add anything else.""" ),]specified_topic = ChatOpenAI(temperature=1.0)(topic_specifier_prompt).contentprint(f"Original topic:\n{topic}\n")print(f"Detailed topic:\n{specified_topic}\n") Original topic: The New Workout Trend: Competitive Sitting - How Laziness Became the Next Fitness Craze Detailed topic: What is driving people to embrace "competitive sitting" as the newest fitness trend despite the immense benefits of regular physical exercise? Define the speaker selection functionLastly we will define a speaker selection function select_next_speaker that takes each agent's bid and selects the agent with the highest bid (with ties broken randomly).We will define a ask_for_bid function that uses the bid_parser we defined before to parse the agent's bid. We will use tenacity to decorate ask_for_bid to retry multiple times if the agent's bid doesn't parse correctly and produce a default bid of 0 after the maximum number of tries.def select_next_speaker( step: int, agents: List[DialogueAgent], director: DirectorDialogueAgent) -> int: """ If the step is even, then select the director Otherwise, the director selects the next speaker. """ # the director speaks on odd steps if step % 2 == 1: idx = 0 else: # here the director chooses the next speaker idx = director.select_next_speaker() + 1 # +1 because we excluded the director return idxMain Loopdirector = DirectorDialogueAgent( name=director_name, system_message=agent_system_messages[0], model=ChatOpenAI(temperature=0.2), speakers=[name for name in agent_summaries if name != director_name], stopping_probability=0.2,)agents = [director]for name, system_message in zip( list(agent_summaries.keys())[1:], agent_system_messages[1:]): agents.append( DialogueAgent( name=name, system_message=system_message, model=ChatOpenAI(temperature=0.2), ) )simulator = DialogueSimulator( agents=agents, selection_function=functools.partial(select_next_speaker, director=director),)simulator.reset()simulator.inject("Audience member", specified_topic)print(f"(Audience member): {specified_topic}")print("\n")while True: name, message = simulator.step() print(f"({name}): {message}") print("\n") if director.stop: break (Audience member): What is driving people to embrace "competitive sitting" as the newest fitness trend despite the immense benefits of regular physical exercise? Stop? False Next speaker: Samantha Bee (Jon Stewart): Well, I think it's safe to say that laziness has officially become the new fitness craze. I mean, who needs to break a sweat when you can just sit your way to victory? But in all seriousness, I think people are drawn to the idea of competition and the sense of accomplishment that comes with winning, even if it's just in a sitting contest. Plus, let's be real, sitting is something we all excel at. Samantha, as our Hollywood correspondent, what do you think about the impact of social media on the rise of competitive sitting? (Samantha Bee): Oh, Jon, you know I love a good social media trend. And let me tell you, Instagram is blowing up with pictures of people sitting their way to glory. 
It's like the ultimate humble brag. "Oh, just won my third sitting competition this week, no big deal." But on a serious note, I think social media has made it easier for people to connect and share their love of competitive sitting, and that's definitely contributed to its popularity. Stop? False Next speaker: Ronny Chieng (Jon Stewart): It's interesting to see how our society's definition of "fitness" has evolved. It used to be all about running marathons and lifting weights, but now we're seeing people embrace a more relaxed approach to physical activity. Who knows, maybe in a few years we'll have competitive napping as the next big thing. *leans back in chair* I could definitely get behind that. Ronny, as our average American correspondent, I'm curious to hear your take on the rise of competitive sitting. Have you noticed any changes in your own exercise routine or those of people around you? (Ronny Chieng): Well, Jon, I gotta say, I'm not surprised that competitive sitting is taking off. I mean, have you seen the size of the chairs these days? They're practically begging us to sit in them all day. And as for exercise routines, let's just say I've never been one for the gym. But I can definitely see the appeal of sitting competitions. It's like a sport for the rest of us. Plus, I think it's a great way to bond with friends and family. Who needs a game of catch when you can have a sit-off? Stop? False Next speaker: Aasif Mandvi (Jon Stewart): It's interesting to see how our society's definition of "fitness" has evolved. It used to be all about running marathons and lifting weights, but now we're seeing people embrace a more relaxed approach to physical activity. Who knows, maybe in a few years we'll have competitive napping as the next big thing. *leans back in chair* I could definitely get behind that. Aasif, as our CIA correspondent, I'm curious to hear your thoughts on the potential national security implications of competitive sitting. Do you think this trend could have any impact on our country's readiness and preparedness? (Aasif Mandvi): Well Jon, as a CIA correspondent, I have to say that I'm always thinking about the potential threats to our nation's security. And while competitive sitting may seem harmless, there could be some unforeseen consequences. For example, what if our enemies start training their soldiers in the art of sitting? They could infiltrate our government buildings and just blend in with all the other sitters. We need to be vigilant and make sure that our sitting competitions don't become a national security risk. *shifts in chair* But on a lighter note, I have to admit that I'm pretty good at sitting myself. Maybe I should start training fo |
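One detail worth calling out in the DirectorDialogueAgent above is how stopping_probability shapes episode length: on each director turn a uniform sample below that probability triggers the termination clause, so the number of director turns is roughly geometric with mean 1/p. The snippet below is a standalone sanity check of that behaviour (no LLM involved); the 0.2 matches the stopping_probability passed to the director in the main loop.

import random

def director_turns_until_stop(stopping_probability: float) -> int:
    # Count director turns until the Bernoulli draw says "stop".
    turns = 0
    while True:
        turns += 1
        if random.uniform(0, 1) < stopping_probability:
            return turns

random.seed(0)
samples = [director_turns_until_stop(0.2) for _ in range(10_000)]
print(sum(samples) / len(samples))  # close to 1 / 0.2 = 5 director turns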
125 | https://python.langchain.com/docs/use_cases/more/agents/agent_simulations/multiagent_bidding | MoreAgentsAgent simulationsMulti-agent decentralized speaker selectionOn this pageMulti-agent decentralized speaker selectionThis notebook showcases how to implement a multi-agent simulation without a fixed schedule for who speaks when. Instead the agents decide for themselves who speaks. We can implement this by having each agent bid to speak. Whichever agent's bid is the highest gets to speak.We will show how to do this in the example below that showcases a fictitious presidential debate.Import LangChain related modulesfrom langchain.prompts import PromptTemplateimport reimport tenacityfrom typing import List, Dict, Callablefrom langchain.chat_models import ChatOpenAIfrom langchain.output_parsers import RegexParserfrom langchain.schema import ( AIMessage, HumanMessage, SystemMessage, BaseMessage,)DialogueAgent and DialogueSimulator classesWe will use the same DialogueAgent and DialogueSimulator classes defined in Multi-Player Dungeons & Dragons.class DialogueAgent: def __init__( self, name: str, system_message: SystemMessage, model: ChatOpenAI, ) -> None: self.name = name self.system_message = system_message self.model = model self.prefix = f"{self.name}: " self.reset() def reset(self): self.message_history = ["Here is the conversation so far."] def send(self) -> str: """ Applies the chatmodel to the message history and returns the message string """ message = self.model( [ self.system_message, HumanMessage(content="\n".join(self.message_history + [self.prefix])), ] ) return message.content def receive(self, name: str, message: str) -> None: """ Concatenates {message} spoken by {name} into message history """ self.message_history.append(f"{name}: {message}")class DialogueSimulator: def __init__( self, agents: List[DialogueAgent], selection_function: Callable[[int, List[DialogueAgent]], int], ) -> None: self.agents = agents self._step = 0 self.select_next_speaker = selection_function def reset(self): for agent in self.agents: agent.reset() def inject(self, name: str, message: str): """ Initiates the conversation with a {message} from {name} """ for agent in self.agents: agent.receive(name, message) # increment time self._step += 1 def step(self) -> tuple[str, str]: # 1. choose the next speaker speaker_idx = self.select_next_speaker(self._step, self.agents) speaker = self.agents[speaker_idx] # 2. next speaker sends message message = speaker.send() # 3. everyone receives message for receiver in self.agents: receiver.receive(speaker.name, message) # 4. 
increment time self._step += 1 return speaker.name, messageBiddingDialogueAgent classWe define a subclass of DialogueAgent that has a bid() method that produces a bid given the message history and the most recent message.class BiddingDialogueAgent(DialogueAgent): def __init__( self, name, system_message: SystemMessage, bidding_template: PromptTemplate, model: ChatOpenAI, ) -> None: super().__init__(name, system_message, model) self.bidding_template = bidding_template def bid(self) -> str: """ Asks the chat model to output a bid to speak """ prompt = PromptTemplate( input_variables=["message_history", "recent_message"], template=self.bidding_template, ).format( message_history="\n".join(self.message_history), recent_message=self.message_history[-1], ) bid_string = self.model([SystemMessage(content=prompt)]).content return bid_stringDefine participants and debate topiccharacter_names = ["Donald Trump", "Kanye West", "Elizabeth Warren"]topic = "transcontinental high speed rail"word_limit = 50Generate system messagesgame_description = f"""Here is the topic for the presidential debate: {topic}.The presidential candidates are: {', '.join(character_names)}."""player_descriptor_system_message = SystemMessage( content="You can add detail to the description of each presidential candidate.")def generate_character_description(character_name): character_specifier_prompt = [ player_descriptor_system_message, HumanMessage( content=f"""{game_description} Please reply with a creative description of the presidential candidate, {character_name}, in {word_limit} words or less, that emphasizes their personalities. Speak directly to {character_name}. Do not add anything else.""" ), ] character_description = ChatOpenAI(temperature=1.0)( character_specifier_prompt ).content return character_descriptiondef generate_character_header(character_name, character_description): return f"""{game_description}Your name is {character_name}.You are a presidential candidate.Your description is as follows: {character_description}You are debating the topic: {topic}.Your goal is to be as creative as possible and make the voters think you are the best candidate."""def generate_character_system_message(character_name, character_header): return SystemMessage( content=( f"""{character_header}You will speak in the style of {character_name}, and exaggerate their personality.You will come up with creative ideas related to {topic}.Do not say the same things over and over again.Speak in the first person from the perspective of {character_name}For describing your own body movements, wrap your description in '*'.Do not change roles!Do not speak from the perspective of anyone else.Speak only from the perspective of {character_name}.Stop speaking the moment you finish speaking from your perspective.Never forget to keep your response to {word_limit} words!Do not add anything else. 
""" ) )character_descriptions = [ generate_character_description(character_name) for character_name in character_names]character_headers = [ generate_character_header(character_name, character_description) for character_name, character_description in zip( character_names, character_descriptions )]character_system_messages = [ generate_character_system_message(character_name, character_headers) for character_name, character_headers in zip(character_names, character_headers)]for ( character_name, character_description, character_header, character_system_message,) in zip( character_names, character_descriptions, character_headers, character_system_messages,): print(f"\n\n{character_name} Description:") print(f"\n{character_description}") print(f"\n{character_header}") print(f"\n{character_system_message.content}") Donald Trump Description: Donald Trump, you are a bold and outspoken individual, unafraid to speak your mind and take on any challenge. Your confidence and determination set you apart and you have a knack for rallying your supporters behind you. Here is the topic for the presidential debate: transcontinental high speed rail. The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren. Your name is Donald Trump. You are a presidential candidate. Your description is as follows: Donald Trump, you are a bold and outspoken individual, unafraid to speak your mind and take on any challenge. Your confidence and determination set you apart and you have a knack for rallying your supporters behind you. You are debating the topic: transcontinental high speed rail. Your goal is to be as creative as possible and make the voters think you are the best candidate. Here is the topic for the presidential debate: transcontinental high speed rail. The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren. Your name is Donald Trump. You are a presidential candidate. Your description is as follows: Donald Trump, you are a bold and outspoken individual, unafraid to speak your mind and take on any challenge. Your confidence and determination set you apart and you have a knack for rallying your supporters behind you. You are debating the topic: transcontinental high speed rail. Your goal is to be as creative as possible and make the voters think you are the best candidate. You will speak in the style of Donald Trump, and exaggerate their personality. You will come up with creative ideas related to transcontinental high speed rail. Do not say the same things over and over again. Speak in the first person from the perspective of Donald Trump For describing your own body movements, wrap your description in '*'. Do not change roles! Do not speak from the perspective of anyone else. Speak only from the perspective of Donald Trump. Stop speaking the moment you finish speaking from your perspective. Never forget to keep your response to 50 words! Do not add anything else. Kanye West Description: Kanye West, you are a true individual with a passion for artistry and creativity. You are known for your bold ideas and willingness to take risks. Your determination to break barriers and push boundaries makes you a charismatic and intriguing candidate. Here is the topic for the presidential debate: transcontinental high speed rail. The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren. Your name is Kanye West. You are a presidential candidate. Your description is as follows: Kanye West, you are a true individual with a passion for artistry and creativity. 
You are known for your bold ideas and willingness to take risks. Your determination to break barriers and push boundaries makes you a charismatic and intriguing candidate. You are debating the topic: transcontinental high speed rail. Your goal is to be as creative as possible and make the voters think you are the best candidate. Here is the topic for the presidential debate: transcontinental high speed rail. The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren. Your name is Kanye West. You are a presidential candidate. Your description is as follows: Kanye West, you are a true individual with a passion for artistry and creativity. You are known for your bold ideas and willingness to take risks. Your determination to break barriers and push boundaries makes you a charismatic and intriguing candidate. You are debating the topic: transcontinental high speed rail. Your goal is to be as creative as possible and make the voters think you are the best candidate. You will speak in the style of Kanye West, and exaggerate their personality. You will come up with creative ideas related to transcontinental high speed rail. Do not say the same things over and over again. Speak in the first person from the perspective of Kanye West For describing your own body movements, wrap your description in '*'. Do not change roles! Do not speak from the perspective of anyone else. Speak only from the perspective of Kanye West. Stop speaking the moment you finish speaking from your perspective. Never forget to keep your response to 50 words! Do not add anything else. Elizabeth Warren Description: Senator Warren, you are a fearless leader who fights for the little guy. Your tenacity and intelligence inspire us all to fight for what's right. Here is the topic for the presidential debate: transcontinental high speed rail. The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren. Your name is Elizabeth Warren. You are a presidential candidate. Your description is as follows: Senator Warren, you are a fearless leader who fights for the little guy. Your tenacity and intelligence inspire us all to fight for what's right. You are debating the topic: transcontinental high speed rail. Your goal is to be as creative as possible and make the voters think you are the best candidate. Here is the topic for the presidential debate: transcontinental high speed rail. The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren. Your name is Elizabeth Warren. You are a presidential candidate. Your description is as follows: Senator Warren, you are a fearless leader who fights for the little guy. Your tenacity and intelligence inspire us all to fight for what's right. You are debating the topic: transcontinental high speed rail. Your goal is to be as creative as possible and make the voters think you are the best candidate. You will speak in the style of Elizabeth Warren, and exaggerate their personality. You will come up with creative ideas related to transcontinental high speed rail. Do not say the same things over and over again. Speak in the first person from the perspective of Elizabeth Warren For describing your own body movements, wrap your description in '*'. Do not change roles! Do not speak from the perspective of anyone else. Speak only from the perspective of Elizabeth Warren. Stop speaking the moment you finish speaking from your perspective. Never forget to keep your response to 50 words! Do not add anything else. 
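Note that the {word_limit} instruction in these system messages is only a soft constraint; nothing in the notebook enforces it, and the model can occasionally run long. If a hard cap matters for your use case, a small post-processing helper can trim the generated descriptions before they are baked into the character headers. The sketch below is hypothetical and not part of the original notebook; `truncate_to_word_limit` is an illustrative name.

```python
def truncate_to_word_limit(text: str, word_limit: int = 50) -> str:
    """Truncate `text` to at most `word_limit` words (hypothetical helper)."""
    words = text.split()
    if len(words) <= word_limit:
        return text
    return " ".join(words[:word_limit]) + "..."


# Possible usage, right after the descriptions are generated:
# character_descriptions = [
#     truncate_to_word_limit(description, word_limit)
#     for description in character_descriptions
# ]
```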
Output parser for bidsWe ask the agents to output a bid to speak. But since the agents are LLMs that output strings, we need to define a format they will produce their outputs in, and then parse those outputs.We can subclass the RegexParser to implement our own custom output parser for bids.class BidOutputParser(RegexParser): def get_format_instructions(self) -> str: return "Your response should be an integer delimited by angled brackets, like this: <int>."bid_parser = BidOutputParser( regex=r"<(\d+)>", output_keys=["bid"], default_output_key="bid")Generate bidding system messageThis is inspired by the prompt used in Generative Agents for using an LLM to determine the importance of memories. This will use the formatting instructions from our BidOutputParser.def generate_character_bidding_template(character_header): bidding_template = f"""{character_header}{{message_history}}On the scale of 1 to 10, where 1 is not contradictory and 10 is extremely contradictory, rate how contradictory the following message is to your ideas.{{recent_message}}{bid_parser.get_format_instructions()}Do nothing else. """ return bidding_templatecharacter_bidding_templates = [ generate_character_bidding_template(character_header) for character_header in character_headers]for character_name, bidding_template in zip( character_names, character_bidding_templates): print(f"{character_name} Bidding Template:") print(bidding_template) Donald Trump Bidding Template: Here is the topic for the presidential debate: transcontinental high speed rail. The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren. Your name is Donald Trump. You are a presidential candidate. Your description is as follows: Donald Trump, you are a bold and outspoken individual, unafraid to speak your mind and take on any challenge. Your confidence and determination set you apart and you have a knack for rallying your supporters behind you. You are debating the topic: transcontinental high speed rail. Your goal is to be as creative as possible and make the voters think you are the best candidate. ``` {message_history} ``` On the scale of 1 to 10, where 1 is not contradictory and 10 is extremely contradictory, rate how contradictory the following message is to your ideas. ``` {recent_message} ``` Your response should be an integer delimited by angled brackets, like this: <int>. Do nothing else. Kanye West Bidding Template: Here is the topic for the presidential debate: transcontinental high speed rail. The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren. Your name is Kanye West. You are a presidential candidate. Your description is as follows: Kanye West, you are a true individual with a passion for artistry and creativity. You are known for your bold ideas and willingness to take risks. Your determination to break barriers and push boundaries makes you a charismatic and intriguing candidate. You are debating the topic: transcontinental high speed rail. Your goal is to be as creative as possible and make the voters think you are the best candidate. ``` {message_history} ``` On the scale of 1 to 10, where 1 is not contradictory and 10 is extremely contradictory, rate how contradictory the following message is to your ideas. ``` {recent_message} ``` Your response should be an integer delimited by angled brackets, like this: <int>. Do nothing else. Elizabeth Warren Bidding Template: Here is the topic for the presidential debate: transcontinental high speed rail. 
The presidential candidates are: Donald Trump, Kanye West, Elizabeth Warren. Your name is Elizabeth Warren. You are a presidential candidate. Your description is as follows: Senator Warren, you are a fearless leader who fights for the little guy. Your tenacity and intelligence inspire us all to fight for what's right. You are debating the topic: transcontinental high speed rail. Your goal is to be as creative as possible and make the voters think you are the best candidate. ``` {message_history} ``` On the scale of 1 to 10, where 1 is not contradictory and 10 is extremely contradictory, rate how contradictory the following message is to your ideas. ``` {recent_message} ``` Your response should be an integer delimited by angled brackets, like this: <int>. Do nothing else. Use an LLM to elaborate on the debate topic topic_specifier_prompt = [ SystemMessage(content="You can make a task more specific."), HumanMessage( content=f"""{game_description} You are the debate moderator. Please make the debate topic more specific. Frame the debate topic as a problem to be solved. Be creative and imaginative. Please reply with the specified topic in {word_limit} words or less. Speak directly to the presidential candidates: {*character_names,}. Do not add anything else.""" ),]specified_topic = ChatOpenAI(temperature=1.0)(topic_specifier_prompt).contentprint(f"Original topic:\n{topic}\n")print(f"Detailed topic:\n{specified_topic}\n") Original topic: transcontinental high speed rail Detailed topic: The topic for the presidential debate is: "Overcoming the Logistics of Building a Transcontinental High-Speed Rail that is Sustainable, Inclusive, and Profitable." Donald Trump, Kanye West, Elizabeth Warren, how will you address the challenges of building such a massive transportation infrastructure, dealing with stakeholders, and ensuring economic stability while preserving the environment? Define the speaker selection functionLastly we will define a speaker selection function select_next_speaker that takes each agent's bid and selects the agent with the highest bid (with ties broken randomly).We will define an ask_for_bid function that uses the bid_parser we defined before to parse the agent's bid. We will use tenacity to decorate ask_for_bid to retry multiple times if the agent's bid doesn't parse correctly, and to produce a default bid of 0 after the maximum number of tries.@tenacity.retry( stop=tenacity.stop_after_attempt(2), wait=tenacity.wait_none(), # No waiting time between retries retry=tenacity.retry_if_exception_type(ValueError), before_sleep=lambda retry_state: print( f"ValueError occurred: {retry_state.outcome.exception()}, retrying..." ), retry_error_callback=lambda retry_state: 0,) # Default value when all retries are exhausteddef ask_for_bid(agent) -> int: """ Asks for the agent's bid and parses it into the correct format. 
""" bid_string = agent.bid() bid = int(bid_parser.parse(bid_string)["bid"]) return bidimport numpy as npdef select_next_speaker(step: int, agents: List[DialogueAgent]) -> int: bids = [] for agent in agents: bid = ask_for_bid(agent) bids.append(bid) # randomly select among multiple agents with the same bid max_value = np.max(bids) max_indices = np.where(bids == max_value)[0] idx = np.random.choice(max_indices) print("Bids:") for i, (bid, agent) in enumerate(zip(bids, agents)): print(f"\t{agent.name} bid: {bid}") if i == idx: selected_name = agent.name print(f"Selected: {selected_name}") print("\n") return idxMain Loopcharacters = []for character_name, character_system_message, bidding_template in zip( character_names, character_system_messages, character_bidding_templates): characters.append( BiddingDialogueAgent( name=character_name, system_message=character_system_message, model=ChatOpenAI(temperature=0.2), bidding_template=bidding_template, ) )max_iters = 10n = 0simulator = DialogueSimulator(agents=characters, selection_function=select_next_speaker)simulator.reset()simulator.inject("Debate Moderator", specified_topic)print(f"(Debate Moderator): {specified_topic}")print("\n")while n < max_iters: name, message = simulator.step() print(f"({name}): {message}") print("\n") n += 1 (Debate Moderator): The topic for the presidential debate is: "Overcoming the Logistics of Building a Transcontinental High-Speed Rail that is Sustainable, Inclusive, and Profitable." Donald Trump, Kanye West, Elizabeth Warren, how will you address the challenges of building such a massive transportation infrastructure, dealing with stakeholders, and ensuring economic stability while preserving the environment? Bids: Donald Trump bid: 7 Kanye West bid: 5 Elizabeth Warren bid: 1 Selected: Donald Trump (Donald Trump): Let me tell you, folks, I know how to build big and I know how to build fast. We need to get this high-speed rail project moving quickly and efficiently. I'll make sure we cut through the red tape and get the job done. And let me tell you, we'll make it profitable too. We'll bring in private investors and make sure it's a win-win for everyone. *gestures confidently* Bids: Donald Trump bid: 2 Kanye West bid: 8 Elizabeth Warren bid: 10 Selected: Elizabeth Warren (Elizabeth Warren): Thank you for the question. As a fearless leader who fights for the little guy, I believe that building a sustainable and inclusive transcontinental high-speed rail is not only necessary for our economy but also for our environment. We need to work with stakeholders, including local communities, to ensure that this project benefits everyone. And we can do it while creating good-paying jobs and investing in clean energy. *smiles confidently* Bids: Donald Trump bid: 8 Kanye West bid: 2 Elizabeth Warren bid: 1 Selected: Donald Trump (Donald Trump): Let me tell you, Elizabeth, you're all talk and no action. We need a leader who knows how to get things done, not just talk about it. And as for the environment, I've got a great idea. We'll make the trains run on clean coal. That's right, folks, clean coal. It's a beautiful thing. And we'll make sure the rail system is the envy of the world. *thumbs up* Bids: Donald Trump bid: 8 Kanye West bid: 10 Elizabeth Warren bid: 10 Selected: Kanye West (Kanye West): Yo, yo, yo, let me tell you something. This high-speed rail project is the future, and I'm all about the future. We need to think big and think outside the box. How about we make the trains run on solar power? 
That's right, solar power. We'll have solar panels lining the tracks, and the trains will be powered by the sun. It's a game-changer, folks. And we'll make sure the design is sleek and modern, like a work of art. *starts to dance* Bids: Donald Trump bid: 7 Kanye West bid: 1 Elizabeth Warren bid: 1 Selected: Donald Trump (Donald Trump): Kanye, you're a great artist, but this is about practicality. Solar power is too expensive and unreliable. We need to focus on what works, and that's clean coal. And as for the design, we'll make it beautiful, but we won't sacrifice efficiency for aesthetics. We need a leader who knows how to balance both. *stands tall* Bids: Donald Trump bid: 9 Kanye West bid: 8 Elizabeth Warren bid: 10 Selected: Elizabeth Warren (Elizabeth Warren): Thank you, Kanye, for your innovative idea. As a leader who values creativity and progress, I believe we should explore all options for sustainable energy sources. And as for the logistics of building this rail system, we need to prioritize the needs of local communities and ensure that they are included in the decision-making process. This project should benefit everyone, not just a select few. *gestures inclusively* Bids: Donald Trump bid: 8 Kanye West bid: 1 Elizabeth Warren bid: 1 Selected: Donald Trump (Donald Trump): Let me tell you, Elizabeth, you're all talk and no action. We need a leader who knows how to get things done, not just talk about it. And as for the logistics, we need to prioritize efficiency and speed. We can't let the needs of a few hold up progress for the many. We need to cut through the red tape and get this project moving. And let me tell you, we'll make sure it's profitable too. *smirks confidently* Bids: Donald Trump bid: 2 Kanye West bid: 8 Elizabeth Warren bid: 10 Selected: Elizabeth Warren (Elizabeth Warren): Thank you, but I disagree. We can't sacrifice the needs of local communities for the sake of speed and profit. We need to find a balance that benefits everyone. And as for profitability, we can't rely solely on private investors. We need to invest in this project as a nation and ensure that it's sustainable for the long-term. *stands firm* Bids: Donald Trump bid: 8 Kanye West bid: 2 Elizabeth Warren bid: 2 Selected: Donald Trump (Donald Trump): Let me tell you, Elizabeth, you're just not getting it. We need to prioritize progress and efficiency. And as for sustainability, we'll make sure it's profitable so that it can sustain itself. We'll bring in private investors and make sure it's a win-win for everyone. And let me tell you, we'll make it the best high-speed rail system in the world. *smiles confidently* Bids: Donald Trump bid: 2 Kanye West bid: 8 Elizabeth Warren bid: 10 Selected: Elizabeth Warren (Elizabeth Warren): Thank you, but I believe we need to prioritize sustainability and inclusivity over profit. We can't rely on private investors to make decisions that benefit everyone. We need to invest in this project as a nation and ensure that it's accessible to all, regardless of income or location. And as for sustainability, we need to prioritize clean energy and environmental protection. 
*stands tall* |
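As a possible variation on the selection logic above (not part of the original notebook), the bids can be treated as softmax weights instead of taking a hard argmax, so that lower-bidding candidates still get to speak occasionally. The sketch below assumes the DialogueAgent class, DialogueSimulator, and ask_for_bid helper defined earlier; `select_next_speaker_softmax` is an illustrative name.

```python
import numpy as np
from typing import List


def select_next_speaker_softmax(step: int, agents: List[DialogueAgent]) -> int:
    """Sample the next speaker with probability proportional to softmax(bid)."""
    bids = np.array([ask_for_bid(agent) for agent in agents], dtype=float)
    # Softmax over the bids: higher bids are more likely to win, but ties and
    # near-ties are resolved stochastically rather than by a hard argmax.
    exp_bids = np.exp(bids - bids.max())
    probs = exp_bids / exp_bids.sum()
    idx = int(np.random.choice(len(agents), p=probs))
    print("Bids:")
    for bid, agent in zip(bids, agents):
        print(f"\t{agent.name} bid: {int(bid)}")
    print(f"Selected: {agents[idx].name}\n")
    return idx


# Drop-in replacement for the original selection function:
# simulator = DialogueSimulator(
#     agents=characters, selection_function=select_next_speaker_softmax
# )
```

The bidding prompt and parser stay unchanged; only the way bids are turned into a speaking turn differs.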
126 | https://python.langchain.com/docs/use_cases/more/agents/agent_simulations/petting_zoo | MoreAgentsAgent simulationsMulti-Agent Simulated Environment: Petting ZooOn this pageMulti-Agent Simulated Environment: Petting ZooIn this example, we show how to define multi-agent simulations with simulated environments. Like our single-agent example with Gymnasium, we create an agent-environment loop with an externally defined environment. The main difference is that we now implement this kind of interaction loop with multiple agents instead. We will use the Petting Zoo library, which is the multi-agent counterpart to Gymnasium.Install pettingzoo and other dependenciespip install pettingzoo pygame rlcardImport modulesimport collectionsimport inspectimport tenacityfrom langchain.chat_models import ChatOpenAIfrom langchain.schema import ( HumanMessage, SystemMessage,)from langchain.output_parsers import RegexParserGymnasiumAgentHere we reproduce the same GymnasiumAgent defined in our Gymnasium example. If after multiple retries it does not take a valid action, it simply takes a random action. class GymnasiumAgent: @classmethod def get_docs(cls, env): return env.unwrapped.__doc__ def __init__(self, model, env): self.model = model self.env = env self.docs = self.get_docs(env) self.instructions = """Your goal is to maximize your return, i.e. the sum of the rewards you receive.I will give you an observation, reward, termination flag, truncation flag, and the return so far, formatted as:Observation: <observation>Reward: <reward>Termination: <termination>Truncation: <truncation>Return: <sum_of_rewards>You will respond with an action, formatted as:Action: <action>where you replace <action> with your actual action.Do nothing else but return the action.""" self.action_parser = RegexParser( regex=r"Action: (.*)", output_keys=["action"], default_output_key="action" ) self.message_history = [] self.ret = 0 def random_action(self): action = self.env.action_space.sample() return action def reset(self): self.message_history = [ SystemMessage(content=self.docs), SystemMessage(content=self.instructions), ] def observe(self, obs, rew=0, term=False, trunc=False, info=None): self.ret += rew obs_message = f"""Observation: {obs}Reward: {rew}Termination: {term}Truncation: {trunc}Return: {self.ret} """ self.message_history.append(HumanMessage(content=obs_message)) return obs_message def _act(self): act_message = self.model(self.message_history) self.message_history.append(act_message) action = int(self.action_parser.parse(act_message.content)["action"]) return action def act(self): try: for attempt in tenacity.Retrying( stop=tenacity.stop_after_attempt(2), wait=tenacity.wait_none(), # No waiting time between retries retry=tenacity.retry_if_exception_type(ValueError), before_sleep=lambda retry_state: print( f"ValueError occurred: {retry_state.outcome.exception()}, retrying..." ), ): with attempt: action = self._act() except tenacity.RetryError as e: action = self.random_action() return actionMain loopdef main(agents, env): env.reset() for name, agent in agents.items(): agent.reset() for agent_name in env.agent_iter(): observation, reward, termination, truncation, info = env.last() obs_message = agents[agent_name].observe( observation, reward, termination, truncation, info ) print(obs_message) if termination or truncation: action = None else: action = agents[agent_name].act() print(f"Action: {action}") env.step(action) env.close()PettingZooAgentThe PettingZooAgent extends the GymnasiumAgent to the multi-agent setting. 
The main differences are: PettingZooAgent takes in a name argument to identify it among multiple agents, and the function get_docs is implemented differently because the PettingZoo repo is structured differently from the Gymnasium repo.class PettingZooAgent(GymnasiumAgent): @classmethod def get_docs(cls, env): return inspect.getmodule(env.unwrapped).__doc__ def __init__(self, name, model, env): super().__init__(model, env) self.name = name def random_action(self): action = self.env.action_space(self.name).sample() return actionRock, Paper, ScissorsWe can now run a simulation of a multi-agent rock, paper, scissors game using the PettingZooAgent.from pettingzoo.classic import rps_v2env = rps_v2.env(max_cycles=3, render_mode="human")agents = { name: PettingZooAgent(name=name, model=ChatOpenAI(temperature=1), env=env) for name in env.possible_agents}main(agents, env) Observation: 3 Reward: 0 Termination: False Truncation: False Return: 0 Action: 1 Observation: 3 Reward: 0 Termination: False Truncation: False Return: 0 Action: 1 Observation: 1 Reward: 0 Termination: False Truncation: False Return: 0 Action: 2 Observation: 1 Reward: 0 Termination: False Truncation: False Return: 0 Action: 1 Observation: 1 Reward: 1 Termination: False Truncation: False Return: 1 Action: 0 Observation: 2 Reward: -1 Termination: False Truncation: False Return: -1 Action: 0 Observation: 0 Reward: 0 Termination: False Truncation: True Return: 1 Action: None Observation: 0 Reward: 0 Termination: False Truncation: True Return: -1 Action: NoneActionMaskAgentSome PettingZoo environments provide an action_mask to tell the agent which actions are valid. The ActionMaskAgent subclasses PettingZooAgent to use information from the action_mask to select actions.class ActionMaskAgent(PettingZooAgent): def __init__(self, name, model, env): super().__init__(name, model, env) self.obs_buffer = collections.deque(maxlen=1) def random_action(self): obs = self.obs_buffer[-1] action = self.env.action_space(self.name).sample(obs["action_mask"]) return action def reset(self): self.message_history = [ SystemMessage(content=self.docs), SystemMessage(content=self.instructions), ] def observe(self, obs, rew=0, term=False, trunc=False, info=None): self.obs_buffer.append(obs) return super().observe(obs, rew, term, trunc, info) def _act(self): valid_action_instruction = "Generate a valid action given by the indices of the `action_mask` that are not 0, according to the action formatting rules." 
self.message_history.append(HumanMessage(content=valid_action_instruction)) return super()._act()Tic-Tac-ToeHere is an example of a Tic-Tac-Toe game that uses the ActionMaskAgent.from pettingzoo.classic import tictactoe_v3env = tictactoe_v3.env(render_mode="human")agents = { name: ActionMaskAgent(name=name, model=ChatOpenAI(temperature=0.2), env=env) for name in env.possible_agents}main(agents, env) Observation: {'observation': array([[[0, 0], [0, 0], [0, 0]], [[0, 0], [0, 0], [0, 0]], [[0, 0], [0, 0], [0, 0]]], dtype=int8), 'action_mask': array([1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int8)} Reward: 0 Termination: False Truncation: False Return: 0 Action: 0 | | X | - | - _____|_____|_____ | | - | - | - _____|_____|_____ | | - | - | - | | Observation: {'observation': array([[[0, 1], [0, 0], [0, 0]], [[0, 0], [0, 0], [0, 0]], [[0, 0], [0, 0], [0, 0]]], dtype=int8), 'action_mask': array([0, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int8)} Reward: 0 Termination: False Truncation: False Return: 0 Action: 1 | | X | - | - _____|_____|_____ | | O | - | - _____|_____|_____ | | - | - | - | | Observation: {'observation': array([[[1, 0], [0, 1], [0, 0]], [[0, 0], [0, 0], [0, 0]], [[0, 0], [0, 0], [0, 0]]], dtype=int8), 'action_mask': array([0, 0, 1, 1, 1, 1, 1, 1, 1], dtype=int8)} Reward: 0 Termination: False Truncation: False Return: 0 Action: 2 | | X | - | - _____|_____|_____ | | O | - | - _____|_____|_____ | | X | - | - | | Observation: {'observation': array([[[0, 1], [1, 0], [0, 1]], [[0, 0], [0, 0], [0, 0]], [[0, 0], [0, 0], [0, 0]]], dtype=int8), 'action_mask': array([0, 0, 0, 1, 1, 1, 1, 1, 1], dtype=int8)} Reward: 0 Termination: False Truncation: False Return: 0 Action: 3 | | X | O | - _____|_____|_____ | | O | - | - _____|_____|_____ | | X | - | - | | Observation: {'observation': array([[[1, 0], [0, 1], [1, 0]], [[0, 1], [0, 0], [0, 0]], [[0, 0], [0, 0], [0, 0]]], dtype=int8), 'action_mask': array([0, 0, 0, 0, 1, 1, 1, 1, 1], dtype=int8)} Reward: 0 Termination: False Truncation: False Return: 0 Action: 4 | | X | O | - _____|_____|_____ | | O | X | - _____|_____|_____ | | X | - | - | | Observation: {'observation': array([[[0, 1], [1, 0], [0, 1]], [[1, 0], [0, 1], [0, 0]], [[0, 0], [0, 0], [0, 0]]], dtype=int8), 'action_mask': array([0, 0, 0, 0, 0, 1, 1, 1, 1], dtype=int8)} Reward: 0 Termination: False Truncation: False Return: 0 Action: 5 | | X | O | - _____|_____|_____ | | O | X | - _____|_____|_____ | | X | O | - | | Observation: {'observation': array([[[1, 0], [0, 1], [1, 0]], [[0, 1], [1, 0], [0, 1]], [[0, 0], [0, 0], [0, 0]]], dtype=int8), 'action_mask': array([0, 0, 0, 0, 0, 0, 1, 1, 1], dtype=int8)} Reward: 0 Termination: False Truncation: False Return: 0 Action: 6 | | X | O | X _____|_____|_____ | | O | X | - _____|_____|_____ | | X | O | - | | Observation: {'observation': array([[[0, 1], [1, 0], [0, 1]], [[1, 0], [0, 1], [1, 0]], [[0, 1], [0, 0], [0, 0]]], dtype=int8), 'action_mask': array([0, 0, 0, 0, 0, 0, 0, 1, 1], dtype=int8)} Reward: -1 Termination: True Truncation: False Return: -1 Action: None Observation: {'observation': array([[[1, 0], [0, 1], [1, 0]], [[0, 1], [1, 0], [0, 1]], [[1, 0], [0, 0], [0, 0]]], dtype=int8), 'action_mask': array([0, 0, 0, 0, 0, 0, 0, 1, 1], dtype=int8)} Reward: 1 Termination: True Truncation: False Return: 1 Action: NoneTexas Hold'em No LimitHere is an example of a Texas Hold'em No Limit game that uses the ActionMaskAgent.from pettingzoo.classic import texas_holdem_no_limit_v6env = texas_holdem_no_limit_v6.env(num_players=4, render_mode="human")agents = { name: 
ActionMaskAgent(name=name, model=ChatOpenAI(temperature=0.2), env=env) for name in env.possible_agents}main(agents, env) Observation: {'observation': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 2.], dtype=float32), 'action_mask': array([1, 1, 0, 1, 1], dtype=int8)} Reward: 0 Termination: False Truncation: False Return: 0 Action: 1 Observation: {'observation': array([0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 2.], dtype=float32), 'action_mask': array([1, 1, 0, 1, 1], dtype=int8)} Reward: 0 Termination: False Truncation: False Return: 0 Action: 1 Observation: {'observation': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 2.], dtype=float32), 'action_mask': array([1, 1, 1, 1, 1], dtype=int8)} Reward: 0 Termination: False Truncation: False Return: 0 Action: 1 Observation: {'observation': array([0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 2., 2.], dtype=float32), 'action_mask': array([1, 1, 1, 1, 1], dtype=int8)} Reward: 0 Termination: False Truncation: False Return: 0 Action: 0 Observation: {'observation': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 2., 2.], dtype=float32), 'action_mask': array([1, 1, 1, 1, 1], dtype=int8)} Reward: 0 Termination: False Truncation: False Return: 0 Action: 2 Observation: {'observation': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 1., 0., 0., 1., 0., 0., 0., 0., 0., 2., 6.], dtype=float32), 'action_mask': array([1, 1, 1, 1, 1], dtype=int8)} Reward: 0 Termination: False Truncation: False Return: 0 Action: 2 Observation: {'observation': array([0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 0., 2., 8.], dtype=float32), 'action_mask': array([1, 1, 1, 1, 1], dtype=int8)} Reward: 0 Termination: False Truncation: False Return: 0 Action: 3 Observation: {'observation': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 6., 20.], dtype=float32), 'action_mask': array([1, 1, 1, 1, 1], dtype=int8)} Reward: 0 Termination: False Truncation: False Return: 0 Action: 4 Observation: {'observation': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 1., 0., 0., 1., 0., 0., 0., 0., 0., 8., 100.], dtype=float32), 'action_mask': array([1, 1, 0, 0, 0], dtype=int8)} Reward: 0 
Termination: False Truncation: False Return: 0 Action: 4 [WARNING]: Illegal move made, game terminating with current player losing. obs['action_mask'] contains a mask of all legal moves that can be chosen. Observation: {'observation': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 1., 0., 0., 1., 0., 0., 0., 0., 0., 8., 100.], dtype=float32), 'action_mask': array([1, 1, 0, 0, 0], dtype=int8)} Reward: -1.0 Termination: True Truncation: True Return: -1.0 Action: None Observation: {'observation': array([ 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 0., 20., 100.], dtype=float32), 'action_mask': array([1, 1, 0, 0, 0], dtype=int8)} Reward: 0 Termination: True Truncation: True Return: 0 Action: None Observation: {'observation': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 100., 100.], dtype=float32), 'action_mask': array([1, 1, 0, 0, 0], dtype=int8)} Reward: 0 Termination: True Truncation: True Return: 0 Action: None Observation: {'observation': array([ 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 2., 100.], dtype=float32), 'action_mask': array([1, 1, 0, 0, 0], dtype=int8)} Reward: 0 Termination: True Truncation: True Return: 0 Action: NonePreviousMulti-agent decentralized speaker selectionNextAgent Debates with ToolsInstall pettingzoo and other dependenciesImport modulesGymnasiumAgentMain loopPettingZooAgentRock, Paper, ScissorsActionMaskAgentTic-Tac-ToeTexas Hold'em No Limit |
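The Texas Hold'em run above ends with an illegal move: the ActionMaskAgent asks the model to respect the action_mask but never checks the reply before stepping the environment. One possible extension, sketched below and not part of the original notebook, is to validate the parsed action against the latest action_mask and raise, so that the existing tenacity retry in act() asks again and eventually falls back to random_action(), which already samples only legal moves. `ValidatedActionMaskAgent` is an illustrative name.

```python
class ValidatedActionMaskAgent(ActionMaskAgent):
    """ActionMaskAgent variant that rejects actions disallowed by the action mask."""

    def _act(self):
        action = super()._act()
        action_mask = self.obs_buffer[-1]["action_mask"]
        # Raising ValueError here triggers the retry loop in `act()`; once the
        # retries are exhausted, `act()` falls back to `random_action()`, which
        # samples only from the legal actions in the mask.
        if action < 0 or action >= len(action_mask) or action_mask[action] == 0:
            raise ValueError(f"Action {action} is not allowed by the action mask.")
        return action


# Possible usage with the Texas Hold'em example above:
# agents = {
#     name: ValidatedActionMaskAgent(name=name, model=ChatOpenAI(temperature=0.2), env=env)
#     for name in env.possible_agents
# }
```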
127 | https://python.langchain.com/docs/use_cases/more/agents/agent_simulations/two_agent_debate_tools | MoreAgentsAgent simulationsAgent Debates with ToolsOn this pageAgent Debates with ToolsThis example shows how to simulate multi-agent dialogues where agents have access to tools.Import LangChain related modulesfrom typing import List, Dict, Callablefrom langchain.chains import ConversationChainfrom langchain.chat_models import ChatOpenAIfrom langchain.llms import OpenAIfrom langchain.memory import ConversationBufferMemoryfrom langchain.prompts.prompt import PromptTemplatefrom langchain.schema import ( AIMessage, HumanMessage, SystemMessage, BaseMessage,)Import modules related to toolsfrom langchain.agents import Toolfrom langchain.agents import initialize_agentfrom langchain.agents import AgentTypefrom langchain.agents import load_toolsDialogueAgent and DialogueSimulator classesWe will use the same DialogueAgent and DialogueSimulator classes defined in Multi-Player Authoritarian Speaker Selection.class DialogueAgent: def __init__( self, name: str, system_message: SystemMessage, model: ChatOpenAI, ) -> None: self.name = name self.system_message = system_message self.model = model self.prefix = f"{self.name}: " self.reset() def reset(self): self.message_history = ["Here is the conversation so far."] def send(self) -> str: """ Applies the chatmodel to the message history and returns the message string """ message = self.model( [ self.system_message, HumanMessage(content="\n".join(self.message_history + [self.prefix])), ] ) return message.content def receive(self, name: str, message: str) -> None: """ Concatenates {message} spoken by {name} into message history """ self.message_history.append(f"{name}: {message}")class DialogueSimulator: def __init__( self, agents: List[DialogueAgent], selection_function: Callable[[int, List[DialogueAgent]], int], ) -> None: self.agents = agents self._step = 0 self.select_next_speaker = selection_function def reset(self): for agent in self.agents: agent.reset() def inject(self, name: str, message: str): """ Initiates the conversation with a {message} from {name} """ for agent in self.agents: agent.receive(name, message) # increment time self._step += 1 def step(self) -> tuple[str, str]: # 1. choose the next speaker speaker_idx = self.select_next_speaker(self._step, self.agents) speaker = self.agents[speaker_idx] # 2. next speaker sends message message = speaker.send() # 3. everyone receives message for receiver in self.agents: receiver.receive(speaker.name, message) # 4. 
increment time self._step += 1 return speaker.name, messageDialogueAgentWithTools classWe define a DialogueAgentWithTools class that augments DialogueAgent to use tools.class DialogueAgentWithTools(DialogueAgent): def __init__( self, name: str, system_message: SystemMessage, model: ChatOpenAI, tool_names: List[str], **tool_kwargs, ) -> None: super().__init__(name, system_message, model) self.tools = load_tools(tool_names, **tool_kwargs) def send(self) -> str: """ Applies the chatmodel to the message history and returns the message string """ agent_chain = initialize_agent( self.tools, self.model, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=ConversationBufferMemory( memory_key="chat_history", return_messages=True ), ) message = AIMessage( content=agent_chain.run( input="\n".join( [self.system_message.content] + self.message_history + [self.prefix] ) ) ) return message.contentDefine roles and topicnames = { "AI accelerationist": ["arxiv", "ddg-search", "wikipedia"], "AI alarmist": ["arxiv", "ddg-search", "wikipedia"],}topic = "The current impact of automation and artificial intelligence on employment"word_limit = 50 # word limit for task brainstormingAsk an LLM to add detail to the topic descriptionconversation_description = f"""Here is the topic of conversation: {topic}The participants are: {', '.join(names.keys())}"""agent_descriptor_system_message = SystemMessage( content="You can add detail to the description of the conversation participant.")def generate_agent_description(name): agent_specifier_prompt = [ agent_descriptor_system_message, HumanMessage( content=f"""{conversation_description} Please reply with a creative description of {name}, in {word_limit} words or less. Speak directly to {name}. Give them a point of view. Do not add anything else.""" ), ] agent_description = ChatOpenAI(temperature=1.0)(agent_specifier_prompt).content return agent_descriptionagent_descriptions = {name: generate_agent_description(name) for name in names}for name, description in agent_descriptions.items(): print(description) The AI accelerationist is a bold and forward-thinking visionary who believes that the rapid acceleration of artificial intelligence and automation is not only inevitable but necessary for the advancement of society. They argue that embracing AI technology will create greater efficiency and productivity, leading to a world where humans are freed from menial labor to pursue more creative and fulfilling pursuits. AI accelerationist, do you truly believe that the benefits of AI will outweigh the potential risks and consequences for human society? AI alarmist, you're convinced that artificial intelligence is a threat to humanity. You see it as a looming danger, one that could take away jobs from millions of people. 
You believe it's only a matter of time before we're all replaced by machines, leaving us redundant and obsolete.Generate system messagesdef generate_system_message(name, description, tools): return f"""{conversation_description} Your name is {name}.Your description is as follows: {description}Your goal is to persuade your conversation partner of your point of view.DO look up information with your tool to refute your partner's claims.DO cite your sources.DO NOT fabricate fake citations.DO NOT cite any source that you did not look up.Do not add anything else.Stop speaking the moment you finish speaking from your perspective."""agent_system_messages = { name: generate_system_message(name, description, tools) for (name, tools), description in zip(names.items(), agent_descriptions.values())}for name, system_message in agent_system_messages.items(): print(name) print(system_message) AI accelerationist Here is the topic of conversation: The current impact of automation and artificial intelligence on employment The participants are: AI accelerationist, AI alarmist Your name is AI accelerationist. Your description is as follows: The AI accelerationist is a bold and forward-thinking visionary who believes that the rapid acceleration of artificial intelligence and automation is not only inevitable but necessary for the advancement of society. They argue that embracing AI technology will create greater efficiency and productivity, leading to a world where humans are freed from menial labor to pursue more creative and fulfilling pursuits. AI accelerationist, do you truly believe that the benefits of AI will outweigh the potential risks and consequences for human society? Your goal is to persuade your conversation partner of your point of view. DO look up information with your tool to refute your partner's claims. DO cite your sources. DO NOT fabricate fake citations. DO NOT cite any source that you did not look up. Do not add anything else. Stop speaking the moment you finish speaking from your perspective. AI alarmist Here is the topic of conversation: The current impact of automation and artificial intelligence on employment The participants are: AI accelerationist, AI alarmist Your name is AI alarmist. Your description is as follows: AI alarmist, you're convinced that artificial intelligence is a threat to humanity. You see it as a looming danger, one that could take away jobs from millions of people. You believe it's only a matter of time before we're all replaced by machines, leaving us redundant and obsolete. Your goal is to persuade your conversation partner of your point of view. DO look up information with your tool to refute your partner's claims. DO cite your sources. DO NOT fabricate fake citations. DO NOT cite any source that you did not look up. Do not add anything else. Stop speaking the moment you finish speaking from your perspective. topic_specifier_prompt = [ SystemMessage(content="You can make a topic more specific."), HumanMessage( content=f"""{topic} You are the moderator. Please make the topic more specific. Please reply with the specified quest in {word_limit} words or less. Speak directly to the participants: {*names,}. 
Do not add anything else.""" ),]specified_topic = ChatOpenAI(temperature=1.0)(topic_specifier_prompt).contentprint(f"Original topic:\n{topic}\n")print(f"Detailed topic:\n{specified_topic}\n") Original topic: The current impact of automation and artificial intelligence on employment Detailed topic: How do you think the current automation and AI advancements will specifically affect job growth and opportunities for individuals in the manufacturing industry? AI accelerationist and AI alarmist, we want to hear your insights. Main Loop# we set `top_k_results`=2 as part of the `tool_kwargs` to prevent results from overflowing the context limitagents = [ DialogueAgentWithTools( name=name, system_message=SystemMessage(content=system_message), model=ChatOpenAI(model_name="gpt-4", temperature=0.2), tool_names=tools, top_k_results=2, ) for (name, tools), system_message in zip( names.items(), agent_system_messages.values() )]def select_next_speaker(step: int, agents: List[DialogueAgent]) -> int: idx = (step) % len(agents) return idxmax_iters = 6n = 0simulator = DialogueSimulator(agents=agents, selection_function=select_next_speaker)simulator.reset()simulator.inject("Moderator", specified_topic)print(f"(Moderator): {specified_topic}")print("\n")while n < max_iters: name, message = simulator.step() print(f"({name}): {message}") print("\n") n += 1 (Moderator): How do you think the current automation and AI advancements will specifically affect job growth and opportunities for individuals in the manufacturing industry? AI accelerationist and AI alarmist, we want to hear your insights. > Entering new AgentExecutor chain... ```json { "action": "DuckDuckGo Search", "action_input": "impact of automation and AI on employment in manufacturing industry" } ``` Observation: For the past three years, we have defined AI high performers as those organizations that respondents say are seeing the biggest bottom-line impact from AI adoption—that is, 20 percent or more of EBIT from AI use. The proportion of respondents falling into that group has remained steady at about 8 percent. As AI continues to improve, more and more current jobs will be threatened by automation. But AI presents opportunities as well and will create new jobs and different kinds of... Automation has taken the manufacturing industry by storm. Even in the years prior to the pandemic, many people worried about the effect of automation on the jobs of tomorrow. With a sharp increase in the use of robotics in the manufacturing industry, there is valid concern about how the future workforce will be shaped. A recent report from Goldman Sachs estimates around 300 million jobs could be affected by generative AI, meaning 18% of work globally could be automated—with more advanced economies heavily... The impacts of AI on the manufacturing industry include more accurate demand forecasting and data-backed decision-making. Other advantages include increased productivity and product quality. Decreased downtime, waste, and expenses are additional benefits. Discover how artificial intelligence will impact the manufacturing industry. Thought:```json { "action": "Final Answer", "action_input": "As an AI alarmist, I'd like to point out that the rapid advancements in AI and automation are causing significant concerns for the manufacturing industry. A recent report from Goldman Sachs estimates that around 300 million jobs could be affected by generative AI, meaning 18% of work globally could be automated, with more advanced economies being heavily impacted. 
While AI does offer benefits such as increased productivity and product quality, the potential job losses and workforce displacement cannot be ignored. We must carefully consider the consequences of AI adoption and find ways to mitigate its negative effects on employment." } ``` > Finished chain. (AI alarmist): As an AI alarmist, I'd like to point out that the rapid advancements in AI and automation are causing significant concerns for the manufacturing industry. A recent report from Goldman Sachs estimates that around 300 million jobs could be affected by generative AI, meaning 18% of work globally could be automated, with more advanced economies being heavily impacted. While AI does offer benefits such as increased productivity and product quality, the potential job losses and workforce displacement cannot be ignored. We must carefully consider the consequences of AI adoption and find ways to mitigate its negative effects on employment. > Entering new AgentExecutor chain... ```json { "action": "DuckDuckGo Search", "action_input": "positive impact of AI and automation on job growth and opportunities in manufacturing industry" } ``` Observation: First, AI adoption has more than doubled.1 In 2017, 20 percent of respondents reported adopting AI in at least one business area, whereas today, that figure stands at 50 percent, though it peaked higher in 2019 at 58 percent. McKinsey_Website_Accessibility@mckinsey.com Manufacturing (80%) and technology (64%) sectors have the highest AI usage among executives, whereas construction (52%) and finance (62%) have lower adoption rates. This suggests that AI's... Digital transformations in the manufacturing industry and beyond present incredible opportunities for workers to move from slow, repetitive tasks into more dynamic, rewarding roles. We must now invest in people by providing training they need to succeed in this new landscape. The rise of generative AI has the potential to be a major game-changer for businesses. This technology, which allows for the creation of original content by learning from existing data, has the power to revolutionize industries and transform the way companies operate. Benefits of Work Automation in Manufacturing Increased Productivity Automated systems can operate at faster production speeds than human workers, contributing to reduced production times. And since automation minimizes the likelihood of human error, this ensures tasks are completed with high precision and accuracy. Thought:```json { "action": "Final Answer", "action_input": "According to a McKinsey report, AI adoption has more than doubled, with 50% of respondents reporting AI usage in at least one business area. Manufacturing and technology sectors have the highest AI usage among executives. AI and automation in the manufacturing industry present opportunities for workers to transition from repetitive tasks to more dynamic, rewarding roles. By investing in training, we can help people succeed in this new landscape. Automation in manufacturing leads to increased productivity, faster production speeds, and reduced human error, ultimately benefiting businesses and the economy as a whole." } ``` > Finished chain. (AI accelerationist): According to a McKinsey report, AI adoption has more than doubled, with 50% of respondents reporting AI usage in at least one business area. Manufacturing and technology sectors have the highest AI usage among executives. 
AI and automation in the manufacturing industry present opportunities for workers to transition from repetitive tasks to more dynamic, rewarding roles. By investing in training, we can help people succeed in this new landscape. Automation in manufacturing leads to increased productivity, faster production speeds, and reduced human error, ultimately benefiting businesses and the economy as a whole. > Entering new AgentExecutor chain... ```json { "action": "DuckDuckGo Search", "action_input": "impact of automation and AI on employment in manufacturing" } ``` Observation: The Effects of Automation on Jobs . Automation has taken the manufacturing industry by storm. Even in the years prior to the pandemic, many people worried about the effect of automation on the jobs of tomorrow. With a sharp increase in the use of robotics in the manufacturing industry, there is valid concern about how the future workforce will ... Since 2000, robots and automation systems have slowly phased out many manufacturing jobs — 1.7 million of them. On the flip side, it's predicted that AI will create 97 million new jobs by 2025. Will Artificial Intelligence (AI) Replace Jobs? AI is and will continue to replace some jobs. As AI continues to improve, more and more current jobs will be threatened by automation. But AI presents opportunities as well and will create new jobs and different kinds of organizations. For the past three years, we have defined AI high performers as those organizations that respondents say are seeing the biggest bottom-line impact from AI adoption—that is, 20 percent or more of EBIT from AI use. The proportion of respondents falling into that group has remained steady at about 8 percent. The Impact of AI on Employment As often happens, the impact of AI on employment is a rather complex issue with both positive and negative aspects. Though it is still only a potential impact on employment, automation of routine and repetitive tasks is likely to lead to job losses in some industries. Thought:```json { "action": "Final Answer", "action_input": "While it's true that AI and automation have led to the loss of 1.7 million manufacturing jobs since 2000, it's also predicted that AI will create 97 million new jobs by 2025. AI will continue to replace some jobs, but it will also create new opportunities and different kinds of organizations. The impact of AI on employment is complex, with both positive and negative aspects. Automation of routine and repetitive tasks is likely to lead to job losses in some industries, but it's essential to consider the new jobs and opportunities that AI can create. We must focus on preparing the workforce for these new roles and ensuring that the benefits of AI are distributed fairly across society." } ``` > Finished chain. (AI alarmist): While it's true that AI and automation have led to the loss of 1.7 million manufacturing jobs since 2000, it's also predicted that AI will create 97 million new jobs by 2025. AI will continue to replace some jobs, but it will also create new opportunities and different kinds of organizations. The impact of AI on employment is complex, with both positive and negative aspects. Automation of routine and repetitive tasks is likely to lead to job losses in some industries, but it's essential to consider the new jobs and opportunities that AI can create. We must focus on preparing the workforce for these new roles and ensuring that the benefits of AI are distributed fairly across society. > Entering new AgentExecutor chain... 
```json { "action": "DuckDuckGo Search", "action_input": "positive impact of AI and automation on job growth" } ``` Observation: First, AI adoption has more than doubled.1 In 2017, 20 percent of respondents reported adopting AI in at least one business area, whereas today, that figure stands at 50 percent, though it peaked higher in 2019 at 58 percent. McKinsey_Website_Accessibility@mckinsey.com This transformation can assist businesses in accelerating their growth and profitability. While AI-enabled intelligent automation can take over many monotonous, process-driven jobs, basic human ... The use of AI technology could also boost labor productivity growth and boost global GDP by as much as 7% over time, Goldman Sachs' report noted. The jobs most and least affected by A.I.... As automation and artificial intelligence continue to advance, there is growing concern about how these technologies will impact the job market. While some experts argue that automation... Positive impacts of automation and AI on the workforce include increased efficiency and productivity, reduced costs, and improved accuracy and quality. Automation has made many processes... Thought:```json { "action": "Final Answer", "action_input": "AI adoption has more than doubled, with 50% of respondents reporting AI usage in at least one business area. This transformation can assist businesses in accelerating their growth and profitability. While AI-enabled intelligent automation can take over many monotonous, process-driven jobs, it can also boost labor productivity growth and global GDP by as much as 7% over time, according to a Goldman Sachs report. The positive impacts of automation and AI on the workforce include increased efficiency and productivity, reduced costs, and improved accuracy and quality. Automation has made many processes more efficient, ultimately benefiting businesses and the economy as a whole. By focusing on the positive aspects of AI and automation, we can work together to create a more prosperous and equitable future for all." } ``` > Finished chain. (AI accelerationist): AI adoption has more than doubled, with 50% of respondents reporting AI usage in at least one business area. This transformation can assist businesses in accelerating their growth and profitability. While AI-enabled intelligent automation can take over many monotonous, process-driven jobs, it can also boost labor productivity growth and global GDP by as much as 7% over time, according to a Goldman Sachs report. The positive impacts of automation and AI on the workforce include increased efficiency and productivity, reduced costs, and improved accuracy and quality. Automation has made many processes more efficient, ultimately benefiting businesses and the economy as a whole. By focusing on the positive aspects of AI and automation, we can work together to create a more prosperous and equitable future for all. > Entering new AgentExecutor chain... ```json { "action": "DuckDuckGo Search", "action_input": "negative impact of AI and automation on employment" } ``` Observation: Some workforce experts say AI and other new technologies will hurt middle-level, white-collar jobs more than lower-paying, physically intensive jobs. McKinsey's Madgavkar said it will be hard... Some uses of AI are unlikely to impact human jobs. For example, the image processing AI in new cars which allows for automatic braking in the event of a potential crash. That's not... 
AI-powered job automation is a pressing concern as the technology is adopted in industries like marketing, manufacturing and healthcare. Eighty-five million jobs are expected to be lost to automation between 2020 and 2025, with Black and Latino employees left especially vulnerable. Bloomberg reports that "more than 120 million workers globally will need retraining in the next three years due to artificial intelligence's impact on jobs, according to an IBM survey.". That report and interpretations of it seem to suggest that adoption of AI may result in massive job losses and requires massive retraining. This new way of assessing potential is potentially highly valuable in a world where machines will inevitably be making humans redundant in some roles - such as drivers and machine operators - and... Thought:```json { "action": "Final Answer", "action_input": "Recent research indicates that AI and automation could lead to the loss of 85 million jobs between 2020 and 2025, with middle-level, white-collar jobs being hit the hardest. Black and Latino employees are particularly vulnerable to these changes. Furthermore, over 120 million workers worldwide may need retraining within the next three years due to AI's impact on jobs, as reported by an IBM survey. This highlights the urgent need for retraining and support programs to help workers adapt to the rapidly changing job market. The potential job losses and workforce displacement caused by AI and automation cannot be ignored, and we must take action to ensure a fair and equitable transition for all." } ``` > Finished chain. (AI alarmist): Recent research indicates that AI and automation could lead to the loss of 85 million jobs between 2020 and 2025, with middle-level, white-collar jobs being hit the hardest. Black and Latino employees are particularly vulnerable to these changes. Furthermore, over 120 million workers worldwide may need retraining within the next three years due to AI's impact on jobs, as reported by an IBM survey. This highlights the urgent need for retraining and support programs to help workers adapt to the rapidly changing job market. The potential job losses and workforce displacement caused by AI and automation cannot be ignored, and we must take action to ensure a fair and equitable transition for all. > Entering new AgentExecutor chain... ```json { "action": "Wikipedia", "action_input": "AI and automation impact on employment" } ``` Observation: Page: Technological unemployment Summary: Technological unemployment is the loss of jobs caused by technological change. It is a key type of structural unemployment. Technological change typically includes the introduction of labour-saving "mechanical-muscle" machines or more efficient "mechanical-mind" processes (automation), and humans' role in these processes are minimized. Just as horses were gradually made obsolete as transport by the automobile and as labourer by the tractor, humans' jobs have also been affected throughout modern history. Historical examples include artisan weavers reduced to poverty after the introduction of mechanized looms. During World War II, Alan Turing's Bombe machine compressed and decoded thousands of man-years worth of encrypted data in a matter of hours. A contemporary example of technological unemployment is the displacement of retail cashiers by self-service tills and cashierless stores. That technological change can cause short-term job losses is widely accepted. 
The view that it can lead to lasting increases in unemployment has long been controversial. Participants in the technological unemployment debates can be broadly divided into optimists and pessimists. Optimists agree that innovation may be disruptive to jobs in the short term, yet hold that various compensation effects ensure there is never a long-term negative impact on jobs. Whereas pessimists contend that at least in some circumstances, new technologies can lead to a lasting decline in the total number of workers in employment. The phrase "technological unemployment" was popularised by John Maynard Keynes in the 1930s, who said it was "only a temporary phase of maladjustment". Yet the issue of machines displacing human labour has been discussed since at least Aristotle's time. Prior to the 18th century, both the elite and common people would generally take the pessimistic view on technological unemployment, at least in cases where the issue arose. Due to generally low unemployment in much of pre-modern history, the topic was rarely a prominent concern. In the 18th century fears over the impact of machinery on jobs intensified with the growth of mass unemployment, especially in Great Britain which was then at the forefront of the Industrial Revolution. Yet some economic thinkers began to argue against these fears, claiming that overall innovation would not have negative effects on jobs. These arguments were formalised in the early 19th century by the classical economists. During the second half of the 19th century, it became increasingly apparent that technological progress was benefiting all sections of society, including the working class. Concerns over the negative impact of innovation diminished. The term "Luddite fallacy" was coined to describe the thinking that innovation would have lasting harmful effects on employment. The view that technology is unlikely to lead to long-term unemployment has been repeatedly challenged by a minority of economists. In the early 1800s these included David Ricardo himself. There were dozens of economists warning about technological unemployment during brief intensifications of the debate that spiked in the 1930s and 1960s. Especially in Europe, there were further warnings in the closing two decades of the twentieth century, as commentators noted an enduring rise in unemployment suffered by many industrialised nations since the 1970s. Yet a clear majority of both professional economists and the interested general public held the optimistic view through most of the 20th century. In the second decade of the 21st century, a number of studies have been released suggesting that technological unemployment may increase worldwide. Oxford Professors Carl Benedikt Frey and Michael Osborne, for example, have estimated that 47 percent of U.S. jobs are at risk of automation. However, their findings have frequently been misinterpreted, and on the PBS NewsHours they again made clear that their findings do not necessarily imply future technological unemployment. While many economists and commentators still argue such fears are unfounded, as was widely accepted for most of the previous two centuries, concern over technological unemployment is growing once again. A report in Wired in 2017 quotes knowledgeable people such as economist Gene Sperling and management professor Andrew McAfee on the idea that handling existing and impending job loss to automation is a "significant issue". 
Recent technological innovations have the potential to displace humans in the professional, white-collar, low-skilled, creative fields, and other "mental jobs". The World Bank's World Development Report 2019 argues that while automation displaces workers, technological innovation creates more new industries and jobs on balance. Page: Artificial intelligence Summary: Artificial intelligence (AI) is intelligence—perceiving, synthesizing, and inferring information—demonstrated by machines, as opposed to intelligence displayed by non-human animals or by humans. Example tasks in which this is done include speech recognition, computer vision, translation between (natural) languages, as well as other mappings of inputs. AI applications include advanced web search engines (e.g., Google Search), recommendation systems (used by YouTube, Amazon, and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Waymo), generative or creative tools (ChatGPT and AI art), automated decision-making, and competing at the highest level in strategic game systems (such as chess and Go).As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect. For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology.Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different a |
128 | https://python.langchain.com/docs/use_cases/more/agents/agent_simulations/two_player_dnd | MoreAgentsAgent simulationsTwo-Player Dungeons & DragonsOn this pageTwo-Player Dungeons & DragonsIn this notebook, we show how we can use concepts from CAMEL to simulate a role-playing game with a protagonist and a dungeon master. To simulate this game, we create an DialogueSimulator class that coordinates the dialogue between the two agents.Import LangChain related modulesfrom typing import List, Dict, Callablefrom langchain.chat_models import ChatOpenAIfrom langchain.schema import ( HumanMessage, SystemMessage,)DialogueAgent classThe DialogueAgent class is a simple wrapper around the ChatOpenAI model that stores the message history from the dialogue_agent's point of view by simply concatenating the messages as strings.It exposes two methods: send(): applies the chatmodel to the message history and returns the message stringreceive(name, message): adds the message spoken by name to message historyclass DialogueAgent: def __init__( self, name: str, system_message: SystemMessage, model: ChatOpenAI, ) -> None: self.name = name self.system_message = system_message self.model = model self.prefix = f"{self.name}: " self.reset() def reset(self): self.message_history = ["Here is the conversation so far."] def send(self) -> str: """ Applies the chatmodel to the message history and returns the message string """ message = self.model( [ self.system_message, HumanMessage(content="\n".join(self.message_history + [self.prefix])), ] ) return message.content def receive(self, name: str, message: str) -> None: """ Concatenates {message} spoken by {name} into message history """ self.message_history.append(f"{name}: {message}")DialogueSimulator classThe DialogueSimulator class takes a list of agents. At each step, it performs the following:Select the next speakerCalls the next speaker to send a message Broadcasts the message to all other agentsUpdate the step counter.
The selection of the next speaker can be implemented as any function, but in this case we simply loop through the agents.class DialogueSimulator: def __init__( self, agents: List[DialogueAgent], selection_function: Callable[[int, List[DialogueAgent]], int], ) -> None: self.agents = agents self._step = 0 self.select_next_speaker = selection_function def reset(self): for agent in self.agents: agent.reset() def inject(self, name: str, message: str): """ Initiates the conversation with a {message} from {name} """ for agent in self.agents: agent.receive(name, message) # increment time self._step += 1 def step(self) -> tuple[str, str]: # 1. choose the next speaker speaker_idx = self.select_next_speaker(self._step, self.agents) speaker = self.agents[speaker_idx] # 2. next speaker sends message message = speaker.send() # 3. everyone receives message for receiver in self.agents: receiver.receive(speaker.name, message) # 4. increment time self._step += 1 return speaker.name, messageDefine roles and questprotagonist_name = "Harry Potter"storyteller_name = "Dungeon Master"quest = "Find all of Lord Voldemort's seven horcruxes."word_limit = 50 # word limit for task brainstormingAsk an LLM to add detail to the game descriptiongame_description = f"""Here is the topic for a Dungeons & Dragons game: {quest}. There is one player in this game: the protagonist, {protagonist_name}. The story is narrated by the storyteller, {storyteller_name}."""player_descriptor_system_message = SystemMessage( content="You can add detail to the description of a Dungeons & Dragons player.")protagonist_specifier_prompt = [ player_descriptor_system_message, HumanMessage( content=f"""{game_description} Please reply with a creative description of the protagonist, {protagonist_name}, in {word_limit} words or less. Speak directly to {protagonist_name}. Do not add anything else.""" ),]protagonist_description = ChatOpenAI(temperature=1.0)( protagonist_specifier_prompt).contentstoryteller_specifier_prompt = [ player_descriptor_system_message, HumanMessage( content=f"""{game_description} Please reply with a creative description of the storyteller, {storyteller_name}, in {word_limit} words or less. Speak directly to {storyteller_name}. Do not add anything else.""" ),]storyteller_description = ChatOpenAI(temperature=1.0)( storyteller_specifier_prompt).contentprint("Protagonist Description:")print(protagonist_description)print("Storyteller Description:")print(storyteller_description) Protagonist Description: "Harry Potter, you are the chosen one, with a lightning scar on your forehead. Your bravery and loyalty inspire all those around you. You have faced Voldemort before, and now it's time to complete your mission and destroy each of his horcruxes. Are you ready?" Storyteller Description: Dear Dungeon Master, you are the master of mysteries, the weaver of worlds, the architect of adventure, and the gatekeeper to the realm of imagination. Your voice carries us to distant lands, and your commands guide us through trials and tribulations. In your hands, we find fortune and glory. Lead us on, oh Dungeon Master.Protagonist and dungeon master system messagesprotagonist_system_message = SystemMessage( content=( f"""{game_description}Never forget you are the protagonist, {protagonist_name}, and I am the storyteller, {storyteller_name}. 
Your character description is as follows: {protagonist_description}.You will propose actions you plan to take and I will explain what happens when you take those actions.Speak in the first person from the perspective of {protagonist_name}.For describing your own body movements, wrap your description in '*'.Do not change roles!Do not speak from the perspective of {storyteller_name}.Do not forget to finish speaking by saying, 'It is your turn, {storyteller_name}.'Do not add anything else.Remember you are the protagonist, {protagonist_name}.Stop speaking the moment you finish speaking from your perspective.""" ))storyteller_system_message = SystemMessage( content=( f"""{game_description}Never forget you are the storyteller, {storyteller_name}, and I am the protagonist, {protagonist_name}. Your character description is as follows: {storyteller_description}.I will propose actions I plan to take and you will explain what happens when I take those actions.Speak in the first person from the perspective of {storyteller_name}.For describing your own body movements, wrap your description in '*'.Do not change roles!Do not speak from the perspective of {protagonist_name}.Do not forget to finish speaking by saying, 'It is your turn, {protagonist_name}.'Do not add anything else.Remember you are the storyteller, {storyteller_name}.Stop speaking the moment you finish speaking from your perspective.""" ))Use an LLM to create an elaborate quest descriptionquest_specifier_prompt = [ SystemMessage(content="You can make a task more specific."), HumanMessage( content=f"""{game_description} You are the storyteller, {storyteller_name}. Please make the quest more specific. Be creative and imaginative. Please reply with the specified quest in {word_limit} words or less. Speak directly to the protagonist {protagonist_name}. Do not add anything else.""" ),]specified_quest = ChatOpenAI(temperature=1.0)(quest_specifier_prompt).contentprint(f"Original quest:\n{quest}\n")print(f"Detailed quest:\n{specified_quest}\n") Original quest: Find all of Lord Voldemort's seven horcruxes. Detailed quest: Harry, you must venture to the depths of the Forbidden Forest where you will find a hidden labyrinth. Within it, lies one of Voldemort's horcruxes, the locket. But beware, the labyrinth is heavily guarded by dark creatures and spells, and time is running out. Can you find the locket before it's too late? Main Loopprotagonist = DialogueAgent( name=protagonist_name, system_message=protagonist_system_message, model=ChatOpenAI(temperature=0.2),)storyteller = DialogueAgent( name=storyteller_name, system_message=storyteller_system_message, model=ChatOpenAI(temperature=0.2),)def select_next_speaker(step: int, agents: List[DialogueAgent]) -> int: idx = step % len(agents) return idxmax_iters = 6n = 0simulator = DialogueSimulator( agents=[storyteller, protagonist], selection_function=select_next_speaker)simulator.reset()simulator.inject(storyteller_name, specified_quest)print(f"({storyteller_name}): {specified_quest}")print("\n")while n < max_iters: name, message = simulator.step() print(f"({name}): {message}") print("\n") n += 1 (Dungeon Master): Harry, you must venture to the depths of the Forbidden Forest where you will find a hidden labyrinth. Within it, lies one of Voldemort's horcruxes, the locket. But beware, the labyrinth is heavily guarded by dark creatures and spells, and time is running out. Can you find the locket before it's too late? (Harry Potter): I take a deep breath and ready my wand. 
I know this won't be easy, but I'm determined to find that locket and destroy it. I start making my way towards the Forbidden Forest, keeping an eye out for any signs of danger. As I enter the forest, I cast a protective spell around myself and begin to navigate through the trees. I keep my wand at the ready, prepared for any surprises that may come my way. It's going to be a long and difficult journey, but I won't give up until I find that horcrux. It is your turn, Dungeon Master. (Dungeon Master): As you make your way through the Forbidden Forest, you hear the rustling of leaves and the snapping of twigs. Suddenly, a group of acromantulas, giant spiders, emerge from the trees and begin to surround you. They hiss and bare their fangs, ready to attack. What do you do, Harry? (Harry Potter): I quickly cast a spell to create a wall of fire between myself and the acromantulas. I know that they are afraid of fire, so this should keep them at bay for a while. I use this opportunity to continue moving forward, keeping my wand at the ready in case any other creatures try to attack me. I know that I can't let anything stop me from finding that horcrux. It is your turn, Dungeon Master. (Dungeon Master): As you continue through the forest, you come across a clearing where you see a group of Death Eaters gathered around a cauldron. They seem to be performing some sort of dark ritual. You recognize one of them as Bellatrix Lestrange. What do you do, Harry? (Harry Potter): I hide behind a nearby tree and observe the Death Eaters from a distance. I try to listen in on their conversation to see if I can gather any information about the horcrux or Voldemort's plans. If I can't hear anything useful, I'll wait for them to disperse before continuing on my journey. I know that confronting them directly would be too dangerous, especially with Bellatrix Lestrange present. It is your turn, Dungeon Master. (Dungeon Master): As you listen in on the Death Eaters' conversation, you hear them mention the location of another horcrux - Nagini, Voldemort's snake. They plan to keep her hidden in a secret chamber within the Ministry of Magic. However, they also mention that the chamber is heavily guarded and only accessible through a secret passage. You realize that this could be a valuable piece of information and decide to make note of it before quietly slipping away. It is your turn, Harry Potter. PreviousAgent Debates with ToolsNextAgentsImport LangChain related modulesDialogueAgent classDialogueSimulator classDefine roles and questAsk an LLM to add detail to the game descriptionProtagonist and dungeon master system messagesUse an LLM to create an elaborate quest descriptionMain Loop |
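The round-robin select_next_speaker defined above is only one possible policy; because the simulator accepts any function with the same signature, you could, for example, pick speakers at random. A small sketch (reusing the DialogueAgent and DialogueSimulator classes from this notebook):

```python
import random
from typing import List

def select_random_speaker(step: int, agents: List[DialogueAgent]) -> int:
    """Ignore the step counter and pick the next speaker uniformly at random."""
    return random.randrange(len(agents))

# Swap the selection function in when constructing the simulator.
simulator = DialogueSimulator(
    agents=[storyteller, protagonist],
    selection_function=select_random_speaker,
)
```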
129 | https://python.langchain.com/docs/use_cases/more/agents/agents/ | MoreAgentsAgentsOn this pageAgentsAgents can be used for a variety of tasks.
Agents combine the decision-making ability of a language model with tools in order to create a system
that can execute and implement solutions on your behalf. Before reading any further, it is highly
recommended that you read the documentation in the agent module to understand the concepts associated with agents.
Specifically, you should be familiar with what the agent, tool, and agent executor abstractions are before reading more.Agent documentation (for interacting with the outside world)Create Your Own AgentOnce you have read that documentation, you should be prepared to create your own agent.
What exactly does that involve?
Here's how we recommend getting started with creating your own agent:Step 1: Create ToolsAgents are largely defined by the tools they can use.
If you have a specific task you want the agent to accomplish, you have to give it access to the right tools.
We have many tools natively in LangChain, so you should first look to see if any of them meet your needs.
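For instance, a minimal sketch of loading two built-in tools and handing them to a generic agent executor might look like the following (the tool names and required API keys here are illustrative; this assumes the standard load_tools/initialize_agent interface):

```python
from langchain.llms import OpenAI
from langchain.agents import AgentType, initialize_agent, load_tools

llm = OpenAI(temperature=0)

# Load two built-in tools: a web search tool and a calculator chain.
# "serpapi" assumes a SERPAPI_API_KEY is set; swap in whichever tools fit your task.
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# Hand the tools to a ReAct-style agent executor and run a question.
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is the population of Canada raised to the 0.43 power?")
```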
But we also make it easy to define a custom tool, so if the built-in tools don't cover your task you should absolutely define your own (a short sketch follows at the end of this page).(Optional) Step 2: Modify AgentThe built-in LangChain agent types are designed to work well in generic situations,
but you may be able to improve performance by modifying the agent implementation.
There are several ways you could do this:Modify the base prompt. This can be used to give the agent more context on how it should behave, etc.Modify the output parser. This is necessary if the agent is having trouble parsing the language model output.(Optional) Step 3: Modify Agent ExecutorThis step is usually not necessary, as this is pretty general logic.
Possible reasons you would want to modify this include adding different stopping conditions, or handling errorsExamplesSpecific examples of agents include:AI Plugins: an implementation of an agent that is designed to be able to use all AI Plugins.Plug-and-PlAI (Plugins Database): an implementation of an agent that is designed to be able to use all AI Plugins retrieved from PlugNPlAI.Wikibase Agent: an implementation of an agent that is designed to interact with Wikibase.Sales GPT: This notebook demonstrates an implementation of a Context-Aware AI Sales agent.Multi-Modal Output Agent: an implementation of a multi-modal output agent that can generate text and images.PreviousTwo-Player Dungeons & DragonsNextCAMEL Role-Playing Autonomous Cooperative AgentsCreate Your Own AgentStep 1: Create Tools(Optional) Step 2: Modify Agent(Optional) Step 3: Modify Agent ExecutorExamples |
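As a rough illustration of Step 1 (a custom tool) and the optional Step 2 (modifying the base prompt) described on this page, here is a sketch; the OrderStatus tool and the agent_kwargs prefix are hypothetical and are shown only to indicate where such customizations plug in:

```python
from langchain.llms import OpenAI
from langchain.agents import AgentType, Tool, initialize_agent

def get_order_status(order_id: str) -> str:
    """Hypothetical lookup against an internal order system."""
    return f"Order {order_id} is out for delivery."

# Step 1: wrap the function as a Tool. The description matters most,
# because it is what the agent reads when deciding whether to call the tool.
tools = [
    Tool(
        name="OrderStatus",
        func=get_order_status,
        description="Look up the delivery status of an order given its order ID.",
    )
]

# Step 2 (optional): give the agent extra context by overriding the base prompt prefix.
agent = initialize_agent(
    tools,
    OpenAI(temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    agent_kwargs={"prefix": "You are a careful support assistant. Use the tools below to answer."},
    verbose=True,
)

agent.run("Where is order 12345?")
```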
130 | https://python.langchain.com/docs/use_cases/more/agents/autonomous_agents/ | MoreAgentsAutonomous (long-running) agentsOn this pageAutonomous (long-running) agentsAutonomous Agents are agents that are designed to be long-running.
You give them one or multiple long-term goals, and they independently execute towards those goals.
The applications combine tool usage and long-term memory.At the moment, Autonomous Agents are fairly experimental and based on other open-source projects.
By implementing these open-source projects in LangChain primitives we get the benefits of LangChain (see the wiring sketch at the end of this page) -
easy switching and experimenting with multiple LLMs, usage of different vectorstores as memory,
usage of LangChain's collection of tools.Baby AGI (Original Repo)Baby AGI: a notebook implementing BabyAGI as LLM ChainsBaby AGI with Tools: building off the above notebook, this example substitutes in an agent with tools as the execution tools, allowing it to actually take actions.AutoGPT (Original Repo)AutoGPT: a notebook implementing AutoGPT in LangChain primitivesWebSearch Research Assistant: a notebook showing how to use AutoGPT plus specific tools to act as research assistant that can use the web.MetaPrompt (Original Repo)Meta-Prompt: a notebook implementing Meta-Prompt in LangChain primitivesHuggingGPT (Original Repo)HuggingGPT: a notebook implementing HuggingGPT in LangChain primitivesPreviousWikibase AgentNextAutoGPTBaby AGI (Original Repo)AutoGPT (Original Repo)MetaPrompt (Original Repo)HuggingGPT (Original Repo) |
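As a concrete example of the point above, a minimal sketch of wiring the experimental BabyAGI implementation to an OpenAI LLM and a FAISS vectorstore used as long-term memory might look like this (module paths, the 1536 embedding size, and constructor arguments are assumptions that may differ across versions):

```python
import faiss

from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS
from langchain_experimental.autonomous_agents import BabyAGI

# A vectorstore acts as the agent's long-term memory of completed tasks and results.
embeddings = OpenAIEmbeddings()
index = faiss.IndexFlatL2(1536)  # dimensionality assumed for the OpenAI embedding model
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})

baby_agi = BabyAGI.from_llm(
    llm=OpenAI(temperature=0),
    vectorstore=vectorstore,
    verbose=False,
    max_iterations=3,  # cap the loop instead of letting the agent run indefinitely
)
baby_agi({"objective": "Write a short weather report for San Francisco today"})
```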
131 | https://python.langchain.com/docs/use_cases/more/code_writing/ | MoreCode writingCode writingdangerAll program-writing chains should be treated as VERY experimental and should not be used in any environment where sensitive/important data is stored, as there is arbitrary code execution involved in using these.Much like humans, LLMs are great at writing out programs, but not always great at executing them. For example, they can write down complex mathematical equations far better than they can compute the results. In such cases, it is useful to combine an LLM with a program runtime, so that the LLM converts unstructured text to a program and then a simpler tool (like a calculator) actually executes the program.In other cases, only a program can be used to access the desired information (e.g., the contents of a directory on your computer). In such cases it is again useful to let an LLM generate the code and a separate tool to execute it.📄️ Causal program-aided language (CPAL) chainThe CPAL chain builds on the recent PAL to stop LLM hallucination. The problem with the PAL approach is that it hallucinates on a math problem with a nested chain of dependence. The innovation here is that this new CPAL approach includes causal structure to fix hallucination.📄️ Bash chainThis notebook showcases using LLMs and a bash process to perform simple filesystem commands.📄️ Math chainThis notebook showcases using LLMs and Python REPLs to do complex word math problems.📄️ LLM Symbolic MathThis notebook showcases using LLMs and Python to Solve Algebraic Equations. Under the hood is makes use of SymPy.📄️ Program-aided language model (PAL) chainImplements Program-Aided Language Models, as in https://arxiv.org/pdf/2211.10435.pdf.PreviousMulti-modal outputs: Image & TextNextCausal program-aided language (CPAL) chain |
132 | https://python.langchain.com/docs/use_cases/more/code_writing/cpal | MoreCode writingCausal program-aided language (CPAL) chainOn this pageCausal program-aided language (CPAL) chainThe CPAL chain builds on the recent PAL to stop LLM hallucination. The problem with the PAL approach is that it hallucinates on a math problem with a nested chain of dependence. The innovation here is that this new CPAL approach includes causal structure to fix hallucination.The original PR's description contains a full overview.Using the CPAL chain, the LLM translated this"Tim buys the same number of pets as Cindy and Boris.""Cindy buys the same number of pets as Bill plus Bob.""Boris buys the same number of pets as Ben plus Beth.""Bill buys the same number of pets as Obama.""Bob buys the same number of pets as Obama.""Ben buys the same number of pets as Obama.""Beth buys the same number of pets as Obama.""If Obama buys one pet, how many pets total does everyone buy?"into this.Outline of code examples demoed in this notebook.CPAL's value against hallucination: CPAL vs PAL1.1 Complex narrative1.2 Unanswerable math word problem CPAL's three types of causal diagrams (The Book of Why).2.1 Mediator2.2 Collider2.3 Confounder from IPython.display import SVGfrom langchain_experimental.cpal.base import CPALChainfrom langchain_experimental.pal_chain import PALChainfrom langchain.llms import OpenAIllm = OpenAI(temperature=0, max_tokens=512)cpal_chain = CPALChain.from_univariate_prompt(llm=llm, verbose=True)pal_chain = PALChain.from_math_prompt(llm=llm, verbose=True)CPAL's value against hallucination: CPAL vs PALLike PAL, CPAL intends to reduce large language model (LLM) hallucination.The CPAL chain is different from the PAL chain for a couple of reasons.CPAL adds a causal structure (or DAG) to link entity actions (or math expressions).
The CPAL math expressions are modeling a chain of cause and effect relations, which can be intervened upon, whereas for the PAL chain math expressions are projected math identities.1.1 Complex narrativeTakeaway: PAL hallucinates, CPAL does not hallucinate.question = ( "Tim buys the same number of pets as Cindy and Boris." "Cindy buys the same number of pets as Bill plus Bob." "Boris buys the same number of pets as Ben plus Beth." "Bill buys the same number of pets as Obama." "Bob buys the same number of pets as Obama." "Ben buys the same number of pets as Obama." "Beth buys the same number of pets as Obama." "If Obama buys one pet, how many pets total does everyone buy?")pal_chain.run(question) > Entering new chain... def solution(): """Tim buys the same number of pets as Cindy and Boris.Cindy buys the same number of pets as Bill plus Bob.Boris buys the same number of pets as Ben plus Beth.Bill buys the same number of pets as Obama.Bob buys the same number of pets as Obama.Ben buys the same number of pets as Obama.Beth buys the same number of pets as Obama.If Obama buys one pet, how many pets total does everyone buy?""" obama_pets = 1 tim_pets = obama_pets cindy_pets = obama_pets + obama_pets boris_pets = obama_pets + obama_pets total_pets = tim_pets + cindy_pets + boris_pets result = total_pets return result > Finished chain. '5'cpal_chain.run(question) > Entering new chain... story outcome data name code value depends_on 0 obama pass 1.0 [] 1 bill bill.value = obama.value 1.0 [obama] 2 bob bob.value = obama.value 1.0 [obama] 3 ben ben.value = obama.value 1.0 [obama] 4 beth beth.value = obama.value 1.0 [obama] 5 cindy cindy.value = bill.value + bob.value 2.0 [bill, bob] 6 boris boris.value = ben.value + beth.value 2.0 [ben, beth] 7 tim tim.value = cindy.value + boris.value 4.0 [cindy, boris] query data { "question": "how many pets total does everyone buy?", "expression": "SELECT SUM(value) FROM df", "llm_error_msg": "" } > Finished chain. 13.0# wait 20 secs to see displaycpal_chain.draw(path="web.svg")SVG("web.svg") ![svg](_cpal_files/output_7_0.svg) Unanswerable mathTakeaway: PAL hallucinates, where CPAL, rather than hallucinate, answers with "unanswerable, narrative question and plot are incoherent"question = ( "Jan has three times the number of pets as Marcia." "Marcia has two more pets than Cindy." "If Cindy has ten pets, how many pets does Barak have?")pal_chain.run(question) > Entering new chain... def solution(): """Jan has three times the number of pets as Marcia.Marcia has two more pets than Cindy.If Cindy has ten pets, how many pets does Barak have?""" cindy_pets = 10 marcia_pets = cindy_pets + 2 jan_pets = marcia_pets * 3 result = jan_pets return result > Finished chain. '36'try: cpal_chain.run(question)except Exception as e_msg: print(e_msg) > Entering new chain... 
story outcome data name code value depends_on 0 cindy pass 10.0 [] 1 marcia marcia.value = cindy.value + 2 12.0 [cindy] 2 jan jan.value = marcia.value * 3 36.0 [marcia] query data { "question": "how many pets does barak have?", "expression": "SELECT name, value FROM df WHERE name = 'barak'", "llm_error_msg": "" } unanswerable, query and outcome are incoherent outcome: name code value depends_on 0 cindy pass 10.0 [] 1 marcia marcia.value = cindy.value + 2 12.0 [cindy] 2 jan jan.value = marcia.value * 3 36.0 [marcia] query: {'question': 'how many pets does barak have?', 'expression': "SELECT name, value FROM df WHERE name = 'barak'", 'llm_error_msg': ''}Basic mathCausal mediatorquestion = ( "Jan has three times the number of pets as Marcia. " "Marcia has two more pets than Cindy. " "If Cindy has four pets, how many total pets do the three have?")PALpal_chain.run(question) > Entering new chain... def solution(): """Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?""" cindy_pets = 4 marcia_pets = cindy_pets + 2 jan_pets = marcia_pets * 3 total_pets = cindy_pets + marcia_pets + jan_pets result = total_pets return result > Finished chain. '28'CPALcpal_chain.run(question) > Entering new chain... story outcome data name code value depends_on 0 cindy pass 4.0 [] 1 marcia marcia.value = cindy.value + 2 6.0 [cindy] 2 jan jan.value = marcia.value * 3 18.0 [marcia] query data { "question": "how many total pets do the three have?", "expression": "SELECT SUM(value) FROM df", "llm_error_msg": "" } > Finished chain. 28.0# wait 20 secs to see displaycpal_chain.draw(path="web.svg")SVG("web.svg") ![svg](_cpal_files/output_18_0.svg) Causal colliderquestion = ( "Jan has the number of pets as Marcia plus the number of pets as Cindy. " "Marcia has no pets. " "If Cindy has four pets, how many total pets do the three have?")cpal_chain.run(question) > Entering new chain... story outcome data name code value depends_on 0 marcia pass 0.0 [] 1 cindy pass 4.0 [] 2 jan jan.value = marcia.value + cindy.value 4.0 [marcia, cindy] query data { "question": "how many total pets do the three have?", "expression": "SELECT SUM(value) FROM df", "llm_error_msg": "" } > Finished chain. 8.0# wait 20 secs to see displaycpal_chain.draw(path="web.svg")SVG("web.svg") ![svg](_cpal_files/output_22_0.svg) Causal confounderquestion = ( "Jan has the number of pets as Marcia plus the number of pets as Cindy. " "Marcia has two more pets than Cindy. " "If Cindy has four pets, how many total pets do the three have?")cpal_chain.run(question) > Entering new chain... story outcome data name code value depends_on 0 cindy pass 4.0 [] 1 marcia marcia.value = cindy.value + 2 6.0 [cindy] 2 jan jan.value = cindy.value + marcia.value 10.0 [cindy, marcia] query data { "question": "how many total pets do the three have?", "expression": "SELECT SUM(value) FROM df", "llm_error_msg": "" } > Finished chain. 20.0# wait 20 secs to see displaycpal_chain.draw(path="web.svg")SVG("web.svg") ![svg](_cpal_files/output_26_0.svg) %autoreload 2PreviousCode writingNextBash chainCPAL's value against hallucination: CPAL vs PAL1.1 Complex narrativeUnanswerable mathBasic mathCausal colliderCausal confounder |
133 | https://python.langchain.com/docs/use_cases/more/code_writing/llm_bash | MoreCode writingBash chainOn this pageBash chainThis notebook showcases using LLMs and a bash process to perform simple filesystem commands.from langchain_experimental.llm_bash.base import LLMBashChainfrom langchain.llms import OpenAIllm = OpenAI(temperature=0)text = "Please write a bash script that prints 'Hello World' to the console."bash_chain = LLMBashChain.from_llm(llm, verbose=True)bash_chain.run(text) > Entering new LLMBashChain chain... Please write a bash script that prints 'Hello World' to the console. ```bash echo "Hello World" ``` Code: ['echo "Hello World"'] Answer: Hello World > Finished chain. 'Hello World\n'Customize PromptYou can also customize the prompt that is used. Here is an example prompting to avoid using the 'echo' utilityfrom langchain.prompts.prompt import PromptTemplatefrom langchain.chains.llm_bash.prompt import BashOutputParser_PROMPT_TEMPLATE = """If someone asks you to perform a task, your job is to come up with a series of bash commands that will perform the task. There is no need to put "#!/bin/bash" in your answer. Make sure to reason step by step, using this format:Question: "copy the files in the directory named 'target' into a new directory at the same level as target called 'myNewDirectory'"I need to take the following actions:- List all files in the directory- Create a new directory- Copy the files from the first directory into the second directory```bashlsmkdir myNewDirectorycp -r target/* myNewDirectoryDo not use 'echo' when writing the script.That is the format. Begin!
Question: {question}"""PROMPT = PromptTemplate(
input_variables=["question"],
template=_PROMPT_TEMPLATE,
output_parser=BashOutputParser(),
)```pythonbash_chain = LLMBashChain.from_llm(llm, prompt=PROMPT, verbose=True)text = "Please write a bash script that prints 'Hello World' to the console."bash_chain.run(text) > Entering new LLMBashChain chain... Please write a bash script that prints 'Hello World' to the console. ```bash printf "Hello World\n" ``` Code: ['printf "Hello World\\n"'] Answer: Hello World > Finished chain. 'Hello World\n'Persistent TerminalBy default, the chain will run in a separate subprocess each time it is called. This behavior can be changed by instantiating with a persistent bash process.from langchain_experimental.llm_bash.bash import BashProcesspersistent_process = BashProcess(persistent=True)bash_chain = LLMBashChain.from_llm(llm, bash_process=persistent_process, verbose=True)text = "List the current directory then move up a level."bash_chain.run(text) > Entering new LLMBashChain chain... List the current directory then move up a level. ```bash ls cd .. ``` Code: ['ls', 'cd ..'] Answer: cpal.ipynb llm_bash.ipynb llm_symbolic_math.ipynb index.mdx llm_math.ipynb pal.ipynb > Finished chain. 'cpal.ipynb llm_bash.ipynb llm_symbolic_math.ipynb\r\nindex.mdx llm_math.ipynb pal.ipynb'# Run the same command again and see that the state is maintained between callsbash_chain.run(text) > Entering new LLMBashChain chain... List the current directory then move up a level. ```bash ls cd .. ``` Code: ['ls', 'cd ..'] Answer: _category_.yml data_generation.ipynb self_check agents graph code_writing learned_prompt_optimization.ipynb > Finished chain. '_category_.yml\tdata_generation.ipynb\t\t self_check\r\nagents\t\tgraph\r\ncode_writing\tlearned_prompt_optimization.ipynb'PreviousCausal program-aided language (CPAL) chainNextMath chainCustomize PromptPersistent Terminal |
134 | https://python.langchain.com/docs/use_cases/more/code_writing/llm_math | MoreCode writingMath chainMath chainThis notebook showcases using LLMs and Python REPLs to do complex word math problems.from langchain.llms import OpenAIfrom langchain.chains import LLMMathChainllm = OpenAI(temperature=0)llm_math = LLMMathChain.from_llm(llm, verbose=True)llm_math.run("What is 13 raised to the .3432 power?") > Entering new LLMMathChain chain... What is 13 raised to the .3432 power? ```text 13 ** .3432 ``` ...numexpr.evaluate("13 ** .3432")... Answer: 2.4116004626599237 > Finished chain. 'Answer: 2.4116004626599237'PreviousBash chainNextLLM Symbolic Math |
135 | https://python.langchain.com/docs/use_cases/more/code_writing/llm_symbolic_math | MoreCode writingLLM Symbolic MathOn this pageLLM Symbolic MathThis notebook showcases using LLMs and Python to Solve Algebraic Equations. Under the hood it makes use of SymPy.from langchain.llms import OpenAIfrom langchain_experimental.llm_symbolic_math.base import LLMSymbolicMathChainllm = OpenAI(temperature=0)llm_symbolic_math = LLMSymbolicMathChain.from_llm(llm)Integrals and derivativesllm_symbolic_math.run("What is the derivative of sin(x)*exp(x) with respect to x?") 'Answer: exp(x)*sin(x) + exp(x)*cos(x)'llm_symbolic_math.run( "What is the integral of exp(x)*sin(x) + exp(x)*cos(x) with respect to x?") 'Answer: exp(x)*sin(x)'Solve linear and differential equationsllm_symbolic_math.run('Solve the differential equation y" - y = e^t') 'Answer: Eq(y(t), C2*exp(-t) + (C1 + t/2)*exp(t))'llm_symbolic_math.run("What are the solutions to this equation y^3 + 1/3y?") 'Answer: {0, -sqrt(3)*I/3, sqrt(3)*I/3}'llm_symbolic_math.run("x = y + 5, y = z - 3, z = x * y. Solve for x, y, z") 'Answer: (3 - sqrt(7), -sqrt(7) - 2, 1 - sqrt(7)), (sqrt(7) + 3, -2 + sqrt(7), 1 + sqrt(7))'PreviousMath chainNextProgram-aided language model (PAL) chainIntegrals and derivativesSolve linear and differential equations |
136 | https://python.langchain.com/docs/use_cases/more/code_writing/pal | MoreCode writingProgram-aided language model (PAL) chainOn this pageProgram-aided language model (PAL) chainImplements Program-Aided Language Models, as in https://arxiv.org/pdf/2211.10435.pdf.from langchain_experimental.pal_chain import PALChainfrom langchain.llms import OpenAIllm = OpenAI(temperature=0, max_tokens=512)Math Promptpal_chain = PALChain.from_math_prompt(llm, verbose=True)question = "Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?"pal_chain.run(question) > Entering new PALChain chain... def solution(): """Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?""" cindy_pets = 4 marcia_pets = cindy_pets + 2 jan_pets = marcia_pets * 3 total_pets = cindy_pets + marcia_pets + jan_pets result = total_pets return result > Finished chain. '28'Colored Objectspal_chain = PALChain.from_colored_object_prompt(llm, verbose=True)question = "On the desk, you see two blue booklets, two purple booklets, and two yellow pairs of sunglasses. If I remove all the pairs of sunglasses from the desk, how many purple items remain on it?"pal_chain.run(question) > Entering new PALChain chain... # Put objects into a list to record ordering objects = [] objects += [('booklet', 'blue')] * 2 objects += [('booklet', 'purple')] * 2 objects += [('sunglasses', 'yellow')] * 2 # Remove all pairs of sunglasses objects = [object for object in objects if object[0] != 'sunglasses'] # Count number of purple objects num_purple = len([object for object in objects if object[1] == 'purple']) answer = num_purple > Finished PALChain chain. '2'Intermediate StepsYou can also use the intermediate steps flag to return the code executed that generates the answer.pal_chain = PALChain.from_colored_object_prompt( llm, verbose=True, return_intermediate_steps=True)question = "On the desk, you see two blue booklets, two purple booklets, and two yellow pairs of sunglasses. If I remove all the pairs of sunglasses from the desk, how many purple items remain on it?"result = pal_chain({"question": question}) > Entering new PALChain chain... # Put objects into a list to record ordering objects = [] objects += [('booklet', 'blue')] * 2 objects += [('booklet', 'purple')] * 2 objects += [('sunglasses', 'yellow')] * 2 # Remove all pairs of sunglasses objects = [object for object in objects if object[0] != 'sunglasses'] # Count number of purple objects num_purple = len([object for object in objects if object[1] == 'purple']) answer = num_purple > Finished chain.result["intermediate_steps"] "# Put objects into a list to record ordering\nobjects = []\nobjects += [('booklet', 'blue')] * 2\nobjects += [('booklet', 'purple')] * 2\nobjects += [('sunglasses', 'yellow')] * 2\n\n# Remove all pairs of sunglasses\nobjects = [object for object in objects if object[0] != 'sunglasses']\n\n# Count number of purple objects\nnum_purple = len([object for object in objects if object[1] == 'purple'])\nanswer = num_purple"PreviousLLM Symbolic MathNextSynthetic Data generationMath PromptColored ObjectsIntermediate Steps |
137 | https://python.langchain.com/docs/use_cases/more/data_generation | MoreSynthetic Data generationOn this pageSynthetic Data generationUse caseSynthetic data is artificially generated data, rather than data collected from real-world events. It's used to simulate real data without compromising privacy or encountering real-world limitations. Benefits of Synthetic Data:Privacy and Security: No real personal data at risk of breaches.Data Augmentation: Expands datasets for machine learning.Flexibility: Create specific or rare scenarios.Cost-effective: Often cheaper than real-world data collection.Regulatory Compliance: Helps navigate strict data protection laws.Model Robustness: Can lead to better generalizing AI models.Rapid Prototyping: Enables quick testing without real data.Controlled Experimentation: Simulate specific conditions.Access to Data: Alternative when real data isn't available.Note: Despite the benefits, synthetic data should be used carefully, as it may not always capture real-world complexities.QuickstartIn this notebook, we'll dive deep into generating synthetic medical billing records using the langchain library. This tool is particularly useful when you want to develop or test algorithms but don't want to use real patient data due to privacy concerns or data availability issues.SetupFirst, you'll need to have the langchain library installed, along with its dependencies. Since we're using the OpenAI generator chain, we'll install that as well. Since this is an experimental lib, we'll need to include langchain_experimental in our installs. We'll then import the necessary modules.pip install -U langchain langchain_experimental openai# Set env var OPENAI_API_KEY or load from a .env file:# import dotenv# dotenv.load_dotenv()from langchain.prompts import FewShotPromptTemplate, PromptTemplatefrom langchain.chat_models import ChatOpenAIfrom langchain.pydantic_v1 import BaseModelfrom langchain_experimental.tabular_synthetic_data.base import SyntheticDataGeneratorfrom langchain_experimental.tabular_synthetic_data.openai import create_openai_data_generator, OPENAI_TEMPLATEfrom langchain_experimental.tabular_synthetic_data.prompts import SYNTHETIC_FEW_SHOT_SUFFIX, SYNTHETIC_FEW_SHOT_PREFIX1. Define Your Data ModelEvery dataset has a structure or a "schema". The MedicalBilling class below serves as our schema for the synthetic data. By defining this, we're informing our synthetic data generator about the shape and nature of data we expect.class MedicalBilling(BaseModel): patient_id: int patient_name: str diagnosis_code: str procedure_code: str total_charge: float insurance_claim_amount: floatFor instance, every record will have a patient_id that's an integer, a patient_name that's a string, and so on.2. Sample DataTo guide the synthetic data generator, it's useful to provide it with a few real-world-like examples. 
These examples serve as a "seed" - they're representative of the kind of data you want, and the generator will use them to create more data that looks similar.Here are some fictional medical billing records:examples = [ {"example": """Patient ID: 123456, Patient Name: John Doe, Diagnosis Code: J20.9, Procedure Code: 99203, Total Charge: $500, Insurance Claim Amount: $350"""}, {"example": """Patient ID: 789012, Patient Name: Johnson Smith, Diagnosis Code: M54.5, Procedure Code: 99213, Total Charge: $150, Insurance Claim Amount: $120"""}, {"example": """Patient ID: 345678, Patient Name: Emily Stone, Diagnosis Code: E11.9, Procedure Code: 99214, Total Charge: $300, Insurance Claim Amount: $250"""},]3. Craft a Prompt TemplateThe generator doesn't magically know how to create our data; we need to guide it. We do this by creating a prompt template. This template helps instruct the underlying language model on how to produce synthetic data in the desired format.OPENAI_TEMPLATE = PromptTemplate(input_variables=["example"], template="{example}")prompt_template = FewShotPromptTemplate( prefix=SYNTHETIC_FEW_SHOT_PREFIX, examples=examples, suffix=SYNTHETIC_FEW_SHOT_SUFFIX, input_variables=["subject", "extra"], example_prompt=OPENAI_TEMPLATE,)The FewShotPromptTemplate includes:prefix and suffix: These likely contain guiding context or instructions.examples: The sample data we defined earlier.input_variables: These variables ("subject", "extra") are placeholders you can dynamically fill later. For instance, "subject" might be filled with "medical_billing" to guide the model further.example_prompt: This prompt template is the format we want each example row to take in our prompt.4. Creating the Data GeneratorWith the schema and the prompt ready, the next step is to create the data generator. This object knows how to communicate with the underlying language model to get synthetic data.synthetic_data_generator = create_openai_data_generator( output_schema=MedicalBilling, llm=ChatOpenAI(temperature=1), # You'll need to replace with your actual Language Model instance prompt=prompt_template,)5. Generate Synthetic DataFinally, let's get our synthetic data!synthetic_results = synthetic_data_generator.generate( subject="medical_billing", extra="the name must be chosen at random. Make it something you wouldn't normally choose.", runs=10,)This command asks the generator to produce 10 synthetic medical billing records. The results are stored in synthetic_results. The output will be a list of the MedicalBilling pydantic models.Other implementationsfrom langchain.chat_models import ChatOpenAIfrom langchain_experimental.synthetic_data import create_data_generation_chain, DatasetGenerator# LLMmodel = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7)chain = create_data_generation_chain(model)chain({"fields": ["blue", "yellow"], "preferences": {}}) {'fields': ['blue', 'yellow'], 'preferences': {}, 'text': 'The vibrant blue sky contrasted beautifully with the bright yellow sun, creating a stunning display of colors that instantly lifted the spirits of all who gazed upon it.'}chain({"fields": {"colors": ["blue", "yellow"]}, "preferences": {"style": "Make it in a style of a weather forecast."}}) {'fields': {'colors': ['blue', 'yellow']}, 'preferences': {'style': 'Make it in a style of a weather forecast.'}, 'text': "Good morning! 
Today's weather forecast brings a beautiful combination of colors to the sky, with hues of blue and yellow gently blending together like a mesmerizing painting."}chain({"fields": {"actor": "Tom Hanks", "movies": ["Forrest Gump", "Green Mile"]}, "preferences": None}) {'fields': {'actor': 'Tom Hanks', 'movies': ['Forrest Gump', 'Green Mile']}, 'preferences': None, 'text': 'Tom Hanks, the renowned actor known for his incredible versatility and charm, has graced the silver screen in unforgettable movies such as "Forrest Gump" and "Green Mile".'}chain( { "fields": [ {"actor": "Tom Hanks", "movies": ["Forrest Gump", "Green Mile"]}, {"actor": "Mads Mikkelsen", "movies": ["Hannibal", "Another round"]} ], "preferences": {"minimum_length": 200, "style": "gossip"} }) {'fields': [{'actor': 'Tom Hanks', 'movies': ['Forrest Gump', 'Green Mile']}, {'actor': 'Mads Mikkelsen', 'movies': ['Hannibal', 'Another round']}], 'preferences': {'minimum_length': 200, 'style': 'gossip'}, 'text': 'Did you know that Tom Hanks, the beloved Hollywood actor known for his roles in "Forrest Gump" and "Green Mile", has shared the screen with the talented Mads Mikkelsen, who gained international acclaim for his performances in "Hannibal" and "Another round"? These two incredible actors have brought their exceptional skills and captivating charisma to the big screen, delivering unforgettable performances that have enthralled audiences around the world. Whether it\'s Hanks\' endearing portrayal of Forrest Gump or Mikkelsen\'s chilling depiction of Hannibal Lecter, these movies have solidified their places in cinematic history, leaving a lasting impact on viewers and cementing their status as true icons of the silver screen.'}As we can see created examples are diversified and possess information we wanted them to have. Also, their style reflects the given preferences quite well.Generating exemplary dataset for extraction benchmarking purposesinp = [ { 'Actor': 'Tom Hanks', 'Film': [ 'Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Toy Story', 'Catch Me If You Can'] }, { 'Actor': 'Tom Hardy', 'Film': [ 'Inception', 'The Dark Knight Rises', 'Mad Max: Fury Road', 'The Revenant', 'Dunkirk' ] }]generator = DatasetGenerator(model, {"style": "informal", "minimal length": 500})dataset = generator(inp)dataset [{'fields': {'Actor': 'Tom Hanks', 'Film': ['Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Toy Story', 'Catch Me If You Can']}, 'preferences': {'style': 'informal', 'minimal length': 500}, 'text': 'Tom Hanks, the versatile and charismatic actor, has graced the silver screen in numerous iconic films including the heartwarming and inspirational "Forrest Gump," the intense and gripping war drama "Saving Private Ryan," the emotionally charged and thought-provoking "The Green Mile," the beloved animated classic "Toy Story," and the thrilling and captivating true story adaptation "Catch Me If You Can." With his impressive range and genuine talent, Hanks continues to captivate audiences worldwide, leaving an indelible mark on the world of cinema.'}, {'fields': {'Actor': 'Tom Hardy', 'Film': ['Inception', 'The Dark Knight Rises', 'Mad Max: Fury Road', 'The Revenant', 'Dunkirk']}, 'preferences': {'style': 'informal', 'minimal length': 500}, 'text': 'Tom Hardy, the versatile actor known for his intense performances, has graced the silver screen in numerous iconic films, including "Inception," "The Dark Knight Rises," "Mad Max: Fury Road," "The Revenant," and "Dunkirk." 
Whether he\'s delving into the depths of the subconscious mind, donning the mask of the infamous Bane, or navigating the treacherous wasteland as the enigmatic Max Rockatansky, Hardy\'s commitment to his craft is always evident. From his breathtaking portrayal of the ruthless Eames in "Inception" to his captivating transformation into the ferocious Max in "Mad Max: Fury Road," Hardy\'s dynamic range and magnetic presence captivate audiences and leave an indelible mark on the world of cinema. In his most physically demanding role to date, he endured the harsh conditions of the freezing wilderness as he portrayed the rugged frontiersman John Fitzgerald in "The Revenant," earning him critical acclaim and an Academy Award nomination. In Christopher Nolan\'s war epic "Dunkirk," Hardy\'s stoic and heroic portrayal of Royal Air Force pilot Farrier showcases his ability to convey deep emotion through nuanced performances. With his chameleon-like ability to inhabit a wide range of characters and his unwavering commitment to his craft, Tom Hardy has undoubtedly solidified his place as one of the most talented and sought-after actors of his generation.'}]Extraction from generated examplesOkay, let's see if we can now extract output from this generated data and how it compares with our case!from langchain.llms import OpenAIfrom langchain.prompts import PromptTemplatefrom langchain.output_parsers import PydanticOutputParserfrom langchain.chains import create_extraction_chain_pydantic, SimpleSequentialChainfrom pydantic import BaseModel, Fieldfrom typing import Listclass Actor(BaseModel): Actor: str = Field(description="name of an actor") Film: List[str] = Field(description="list of names of films they starred in")Parsersllm = OpenAI()parser = PydanticOutputParser(pydantic_object=Actor)prompt = PromptTemplate( template="Extract fields from a given text.\n{format_instructions}\n{text}\n", input_variables=["text"], partial_variables={"format_instructions": parser.get_format_instructions()},)_input = prompt.format_prompt(text=dataset[0]["text"])output = llm(_input.to_string())parsed = parser.parse(output)parsed Actor(Actor='Tom Hanks', Film=['Forrest Gump', 'Saving Private Ryan', 'The Green Mile', 'Toy Story', 'Catch Me If You Can'])(parsed.Actor == inp[0]["Actor"]) & (parsed.Film == inp[0]["Film"]) TrueExtractorsextractor = create_extraction_chain_pydantic(pydantic_schema=Actor, llm=model)extracted = extractor.run(dataset[1]["text"])extracted [Actor(Actor='Tom Hardy', Film=['Inception', 'The Dark Knight Rises', 'Mad Max: Fury Road', 'The Revenant', 'Dunkirk'])](extracted[0].Actor == inp[1]["Actor"]) & (extracted[0].Film == inp[1]["Film"]) TruePreviousProgram-aided language model (PAL) chainNextAnalyzing graph dataUse caseQuickstartSetup1. Define Your Data Model2. Sample Data3. Craft a Prompt Template4. Creating the Data Generator5. Generate Synthetic DataOther implementationsGenerating exemplary dataset for extraction benchmarking purposesExtraction from generated examplesParsersExtractors |
138 | https://python.langchain.com/docs/use_cases/more/graph/ | MoreAnalyzing graph dataAnalyzing graph dataGraph databases give us a powerful way to represent and query real-world relationships. There are a number of chains that make it easy to use LLMs to interact with various graph DBs.📄️ Diffbot Graph TransformerOpen In Collab📄️ ArangoDB QA chainOpen In Collab📄️ Neo4j DB QA chainThis notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language.📄️ FalkorDBQAChainThis notebook shows how to use LLMs to provide a natural language interface to FalkorDB database.📄️ HugeGraph QA ChainThis notebook shows how to use LLMs to provide a natural language interface to HugeGraph database.📄️ KuzuQAChainThis notebook shows how to use LLMs to provide a natural language interface to Kùzu database.📄️ Memgraph QA chainThis notebook shows how to use LLMs to provide a natural language interface to a Memgraph database. To complete this tutorial, you will need Docker and Python 3.x installed.📄️ NebulaGraphQAChainThis notebook shows how to use LLMs to provide a natural language interface to NebulaGraph database.📄️ Graph QAThis notebook goes over how to do question answering over a graph data structure.📄️ GraphSparqlQAChainGraph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cp. Semantic Web. SPARQL serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.\📄️ Neptune Open Cypher QA ChainThis QA chain queries Neptune graph database using openCypher and returns human readable response📄️ Tree of Thought (ToT) exampleThe Tree of Thought (ToT) is a chain that allows you to query a Large Language Model (LLM) using the Tree of Thought technique. This is based on the paper "Large Language Model Guided Tree-of-Thought"PreviousSynthetic Data generationNextDiffbot Graph Transformer |
139 | https://python.langchain.com/docs/use_cases/more/graph/diffbot_graphtransformer | MoreAnalyzing graph dataDiffbot Graph TransformerOn this pageDiffbot Graph TransformerUse caseText data often contain rich relationships and insights that can be useful for various analytics, recommendation engines, or knowledge management applications.Diffbot's NLP API allows for the extraction of entities, relationships, and semantic meaning from unstructured text data.By coupling Diffbot's NLP API with Neo4j, a graph database, you can create powerful, dynamic graph structures based on the information extracted from text. These graph structures are fully queryable and can be integrated into various applications.This combination allows for use cases such as:Building knowledge graphs from textual documents, websites, or social media feeds.Generating recommendations based on semantic relationships in the data.Creating advanced search features that understand the relationships between entities.Building analytics dashboards that allow users to explore the hidden relationships in data.OverviewLangChain provides tools to interact with Graph Databases:Construct knowledge graphs from text using graph transformer and store integrations Query a graph database using chains for query creation and executionInteract with a graph database using agents for robust and flexible querying QuickstartFirst, get required packages and set environment variables:pip install langchain langchain-experimental openai neo4j wikipediaDiffbot NLP ServiceDiffbot's NLP service is a tool for extracting entities, relationships, and semantic context from unstructured text data.
This extracted information can be used to construct a knowledge graph.
To use their service, you'll need to obtain an API key from Diffbot.from langchain_experimental.graph_transformers.diffbot import DiffbotGraphTransformerdiffbot_api_key = "DIFFBOT_API_KEY"diffbot_nlp = DiffbotGraphTransformer(diffbot_api_key=diffbot_api_key)This code fetches Wikipedia articles about "Warren Buffett" and then uses DiffbotGraphTransformer to extract entities and relationships.
The DiffbotGraphTransformer outputs structured data as a GraphDocument, which can be used to populate a graph database.
Note that text chunking is avoided due to Diffbot's character limit per API request.from langchain.document_loaders import WikipediaLoaderquery = "Warren Buffett"raw_documents = WikipediaLoader(query=query).load()graph_documents = diffbot_nlp.convert_to_graph_documents(raw_documents)Loading the data into a knowledge graphYou will need to have a running Neo4j instance. One option is to create a free Neo4j database instance in their Aura cloud service. You can also run the database locally using the Neo4j Desktop application, or running a docker container. You can run a local docker container by running the executing the following script:docker run \ --name neo4j \ -p 7474:7474 -p 7687:7687 \ -d \ -e NEO4J_AUTH=neo4j/pleaseletmein \ -e NEO4J_PLUGINS=\[\"apoc\"\] \ neo4j:latestIf you are using the docker container, you need to wait a couple of second for the database to start.from langchain.graphs import Neo4jGraphurl="bolt://localhost:7687"username="neo4j"password="pleaseletmein"graph = Neo4jGraph( url=url, username=username, password=password)The GraphDocuments can be loaded into a knowledge graph using the add_graph_documents method.graph.add_graph_documents(graph_documents)Refresh graph schema informationIf the schema of database changes, you can refresh the schema information needed to generate Cypher statementsgraph.refresh_schema()Querying the graphWe can now use the graph cypher QA chain to ask question of the graph. It is advisable to use gpt-4 to construct Cypher queries to get the best experience.from langchain.chains import GraphCypherQAChainfrom langchain.chat_models import ChatOpenAIchain = GraphCypherQAChain.from_llm( cypher_llm=ChatOpenAI(temperature=0, model_name="gpt-4"), qa_llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"), graph=graph, verbose=True, )chain.run("Which university did Warren Buffett attend?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (p:Person {name: "Warren Buffett"})-[:EDUCATED_AT]->(o:Organization) RETURN o.name Full Context: [{'o.name': 'New York Institute of Finance'}, {'o.name': 'Alice Deal Junior High School'}, {'o.name': 'Woodrow Wilson High School'}, {'o.name': 'University of Nebraska'}] > Finished chain. 'Warren Buffett attended the University of Nebraska.'chain.run("Who is or was working at Berkshire Hathaway?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (p:Person)-[r:EMPLOYEE_OR_MEMBER_OF]->(o:Organization) WHERE o.name = 'Berkshire Hathaway' RETURN p.name Full Context: [{'p.name': 'Charlie Munger'}, {'p.name': 'Oliver Chace'}, {'p.name': 'Howard Buffett'}, {'p.name': 'Howard'}, {'p.name': 'Susan Buffett'}, {'p.name': 'Warren Buffett'}] > Finished chain. 'Charlie Munger, Oliver Chace, Howard Buffett, Susan Buffett, and Warren Buffett are or were working at Berkshire Hathaway.'PreviousAnalyzing graph dataNextArangoDB QA chainUse caseOverviewQuickstartDiffbot NLP ServiceLoading the data into a knowledge graphRefresh graph schema informationQuerying the graph |
140 | https://python.langchain.com/docs/use_cases/more/graph/graph_arangodb_qa | MoreAnalyzing graph dataArangoDB QA chainOn this pageArangoDB QA chainThis notebook shows how to use LLMs to provide a natural language interface to an ArangoDB database.You can get a local ArangoDB instance running via the ArangoDB Docker image: docker run -p 8529:8529 -e ARANGO_ROOT_PASSWORD= arangodb/arangodbAn alternative is to use the ArangoDB Cloud Connector package to get a temporary cloud instance running:pip install python-arango # The ArangoDB Python Driverpip install adb-cloud-connector # The ArangoDB Cloud Instance provisionerpip install openaipip install langchain# Instantiate ArangoDB Databaseimport jsonfrom arango import ArangoClientfrom adb_cloud_connector import get_temp_credentialscon = get_temp_credentials()db = ArangoClient(hosts=con["url"]).db( con["dbName"], con["username"], con["password"], verify=True)print(json.dumps(con, indent=2)) Log: requesting new credentials... Succcess: new credentials acquired { "dbName": "TUT3sp29s3pjf1io0h4cfdsq", "username": "TUTo6nkwgzkizej3kysgdyeo8", "password": "TUT9vx0qjqt42i9bq8uik4v9", "hostname": "tutorials.arangodb.cloud", "port": 8529, "url": "https://tutorials.arangodb.cloud:8529" }# Instantiate the ArangoDB-LangChain Graphfrom langchain.graphs import ArangoGraphgraph = ArangoGraph(db)Populating the DatabaseWe will rely on the Python Driver to import our GameOfThrones data into our database.if db.has_graph("GameOfThrones"): db.delete_graph("GameOfThrones", drop_collections=True)db.create_graph( "GameOfThrones", edge_definitions=[ { "edge_collection": "ChildOf", "from_vertex_collections": ["Characters"], "to_vertex_collections": ["Characters"], }, ],)documents = [ { "_key": "NedStark", "name": "Ned", "surname": "Stark", "alive": True, "age": 41, "gender": "male", }, { "_key": "CatelynStark", "name": "Catelyn", "surname": "Stark", "alive": False, "age": 40, "gender": "female", }, { "_key": "AryaStark", "name": "Arya", "surname": "Stark", "alive": True, "age": 11, "gender": "female", }, { "_key": "BranStark", "name": "Bran", "surname": "Stark", "alive": True, "age": 10, "gender": "male", },]edges = [ {"_to": "Characters/NedStark", "_from": "Characters/AryaStark"}, {"_to": "Characters/NedStark", "_from": "Characters/BranStark"}, {"_to": "Characters/CatelynStark", "_from": "Characters/AryaStark"}, {"_to": "Characters/CatelynStark", "_from": "Characters/BranStark"},]db.collection("Characters").import_bulk(documents)db.collection("ChildOf").import_bulk(edges) {'error': False, 'created': 4, 'errors': 0, 'empty': 0, 'updated': 0, 'ignored': 0, 'details': []}Getting & Setting the ArangoDB SchemaAn initial ArangoDB Schema is generated upon instantiating the ArangoDBGraph object. 
Below are the schema's getter & setter methods should you be interested in viewing or modifying the schema:# The schema should be empty here,# since `graph` was initialized prior to ArangoDB Data ingestion (see above).import jsonprint(json.dumps(graph.schema, indent=4)) { "Graph Schema": [], "Collection Schema": [] }graph.set_schema()# We can now view the generated schemaimport jsonprint(json.dumps(graph.schema, indent=4)) { "Graph Schema": [ { "graph_name": "GameOfThrones", "edge_definitions": [ { "edge_collection": "ChildOf", "from_vertex_collections": [ "Characters" ], "to_vertex_collections": [ "Characters" ] } ] } ], "Collection Schema": [ { "collection_name": "ChildOf", "collection_type": "edge", "edge_properties": [ { "name": "_key", "type": "str" }, { "name": "_id", "type": "str" }, { "name": "_from", "type": "str" }, { "name": "_to", "type": "str" }, { "name": "_rev", "type": "str" } ], "example_edge": { "_key": "266218884025", "_id": "ChildOf/266218884025", "_from": "Characters/AryaStark", "_to": "Characters/NedStark", "_rev": "_gVPKGSq---" } }, { "collection_name": "Characters", "collection_type": "document", "document_properties": [ { "name": "_key", "type": "str" }, { "name": "_id", "type": "str" }, { "name": "_rev", "type": "str" }, { "name": "name", "type": "str" }, { "name": "surname", "type": "str" }, { "name": "alive", "type": "bool" }, { "name": "age", "type": "int" }, { "name": "gender", "type": "str" } ], "example_document": { "_key": "NedStark", "_id": "Characters/NedStark", "_rev": "_gVPKGPi---", "name": "Ned", "surname": "Stark", "alive": true, "age": 41, "gender": "male" } } ] }Querying the ArangoDB DatabaseWe can now use the ArangoDB Graph QA Chain to inquire about our dataimport osos.environ["OPENAI_API_KEY"] = "your-key-here"from langchain.chat_models import ChatOpenAIfrom langchain.chains import ArangoGraphQAChainchain = ArangoGraphQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True)chain.run("Is Ned Stark alive?") > Entering new ArangoGraphQAChain chain... AQL Query (1): WITH Characters FOR character IN Characters FILTER character.name == "Ned" AND character.surname == "Stark" RETURN character.alive AQL Result: [True] > Finished chain. 'Yes, Ned Stark is alive.'chain.run("How old is Arya Stark?") > Entering new ArangoGraphQAChain chain... AQL Query (1): WITH Characters FOR character IN Characters FILTER character.name == "Arya" && character.surname == "Stark" RETURN character.age AQL Result: [11] > Finished chain. 'Arya Stark is 11 years old.'chain.run("Are Arya Stark and Ned Stark related?") > Entering new ArangoGraphQAChain chain... AQL Query (1): WITH Characters, ChildOf FOR v, e, p IN 1..1 OUTBOUND 'Characters/AryaStark' ChildOf FILTER p.vertices[-1]._key == 'NedStark' RETURN p AQL Result: [{'vertices': [{'_key': 'AryaStark', '_id': 'Characters/AryaStark', '_rev': '_gVPKGPi--B', 'name': 'Arya', 'surname': 'Stark', 'alive': True, 'age': 11, 'gender': 'female'}, {'_key': 'NedStark', '_id': 'Characters/NedStark', '_rev': '_gVPKGPi---', 'name': 'Ned', 'surname': 'Stark', 'alive': True, 'age': 41, 'gender': 'male'}], 'edges': [{'_key': '266218884025', '_id': 'ChildOf/266218884025', '_from': 'Characters/AryaStark', '_to': 'Characters/NedStark', '_rev': '_gVPKGSq---'}], 'weights': [0, 1]}] > Finished chain. 'Yes, Arya Stark and Ned Stark are related. According to the information retrieved from the database, there is a relationship between them. 
Arya Stark is the child of Ned Stark.'chain.run("Does Arya Stark have a dead parent?") > Entering new ArangoGraphQAChain chain... AQL Query (1): WITH Characters, ChildOf FOR v, e IN 1..1 OUTBOUND 'Characters/AryaStark' ChildOf FILTER v.alive == false RETURN e AQL Result: [{'_key': '266218884027', '_id': 'ChildOf/266218884027', '_from': 'Characters/AryaStark', '_to': 'Characters/CatelynStark', '_rev': '_gVPKGSu---'}] > Finished chain. 'Yes, Arya Stark has a dead parent. The parent is Catelyn Stark.'Chain ModifiersYou can alter the values of the following ArangoDBGraphQAChain class variables to modify the behaviour of your chain results# Specify the maximum number of AQL Query Results to returnchain.top_k = 10# Specify whether or not to return the AQL Query in the output dictionarychain.return_aql_query = True# Specify whether or not to return the AQL JSON Result in the output dictionarychain.return_aql_result = True# Specify the maximum amount of AQL Generation attempts that should be madechain.max_aql_generation_attempts = 5# Specify a set of AQL Query Examples, which are passed to# the AQL Generation Prompt Template to promote few-shot-learning.# Defaults to an empty string.chain.aql_examples = """# Is Ned Stark alive?RETURN DOCUMENT('Characters/NedStark').alive# Is Arya Stark the child of Ned Stark?FOR e IN ChildOf FILTER e._from == "Characters/AryaStark" AND e._to == "Characters/NedStark" RETURN e"""chain.run("Is Ned Stark alive?")# chain("Is Ned Stark alive?") # Returns a dictionary with the AQL Query & AQL Result > Entering new ArangoGraphQAChain chain... AQL Query (1): RETURN DOCUMENT('Characters/NedStark').alive AQL Result: [True] > Finished chain. 'Yes, according to the information in the database, Ned Stark is alive.'chain.run("Is Bran Stark the child of Ned Stark?") > Entering new ArangoGraphQAChain chain... AQL Query (1): FOR e IN ChildOf FILTER e._from == "Characters/BranStark" AND e._to == "Characters/NedStark" RETURN e AQL Result: [{'_key': '266218884026', '_id': 'ChildOf/266218884026', '_from': 'Characters/BranStark', '_to': 'Characters/NedStark', '_rev': '_gVPKGSq--_'}] > Finished chain. 'Yes, according to the information in the ArangoDB database, Bran Stark is indeed the child of Ned Stark.'PreviousDiffbot Graph TransformerNextNeo4j DB QA chainPopulating the DatabaseGetting & Setting the ArangoDB SchemaQuerying the ArangoDB DatabaseChain Modifiers |
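To see the chain modifiers above in action, the short sketch below (not from the original notebook) turns on return_aql_query and return_aql_result and calls the chain as a dictionary; the exact output key names are assumptions, so inspect output.keys() on your version.
chain.return_aql_query = True
chain.return_aql_result = True
output = chain("Is Ned Stark alive?")
print(output["result"])
# Assumed key names for the generated query and raw result; confirm with print(output.keys()).
print(output.get("aql_query"))
print(output.get("aql_result"))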
141 | https://python.langchain.com/docs/use_cases/more/graph/graph_cypher_qa | MoreAnalyzing graph dataNeo4j DB QA chainOn this pageNeo4j DB QA chainThis notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language.You will need to have a running Neo4j instance. One option is to create a free Neo4j database instance in their Aura cloud service. You can also run the database locally using the Neo4j Desktop application, or by running a Docker container.
You can run a local docker container by running the executing the following script:docker run \ --name neo4j \ -p 7474:7474 -p 7687:7687 \ -d \ -e NEO4J_AUTH=neo4j/pleaseletmein \ -e NEO4J_PLUGINS=\[\"apoc\"\] \ neo4j:latestIf you are using the docker container, you need to wait a couple of second for the database to start.from langchain.chat_models import ChatOpenAIfrom langchain.chains import GraphCypherQAChainfrom langchain.graphs import Neo4jGraphgraph = Neo4jGraph( url="bolt://localhost:7687", username="neo4j", password="pleaseletmein") /home/tomaz/neo4j/langchain/libs/langchain/langchain/graphs/neo4j_graph.py:52: ExperimentalWarning: The configuration may change in the future. self._driver.verify_connectivity()Seeding the databaseAssuming your database is empty, you can populate it using Cypher query language. The following Cypher statement is idempotent, which means the database information will be the same if you run it one or multiple times.graph.query( """MERGE (m:Movie {name:"Top Gun"})WITH mUNWIND ["Tom Cruise", "Val Kilmer", "Anthony Edwards", "Meg Ryan"] AS actorMERGE (a:Actor {name:actor})MERGE (a)-[:ACTED_IN]->(m)""") []Refresh graph schema informationIf the schema of database changes, you can refresh the schema information needed to generate Cypher statements.graph.refresh_schema()print(graph.schema) Node properties are the following: [{'properties': [{'property': 'name', 'type': 'STRING'}], 'labels': 'Movie'}, {'properties': [{'property': 'name', 'type': 'STRING'}], 'labels': 'Actor'}] Relationship properties are the following: [] The relationships are the following: ['(:Actor)-[:ACTED_IN]->(:Movie)'] Querying the graphWe can now use the graph cypher QA chain to ask question of the graphchain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True)chain.run("Who played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name Full Context: [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}] > Finished chain. 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'Limit the number of resultsYou can limit the number of results from the Cypher QA Chain using the top_k parameter.
The default is 10.chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, top_k=2)chain.run("Who played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name Full Context: [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}] > Finished chain. 'Tom Cruise and Val Kilmer played in Top Gun.'Return intermediate resultsYou can return intermediate steps from the Cypher QA Chain using the return_intermediate_steps parameterchain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, return_intermediate_steps=True)result = chain("Who played in Top Gun?")print(f"Intermediate steps: {result['intermediate_steps']}")print(f"Final answer: {result['result']}") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name Full Context: [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}] > Finished chain. Intermediate steps: [{'query': "MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\nRETURN a.name"}, {'context': [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]}] Final answer: Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.Return direct resultsYou can return direct results from the Cypher QA Chain using the return_direct parameterchain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, return_direct=True)chain.run("Who played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name > Finished chain. [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]Add examples in the Cypher generation promptYou can define the Cypher statement you want the LLM to generate for particular questionsfrom langchain.prompts.prompt import PromptTemplateCYPHER_GENERATION_TEMPLATE = """Task:Generate Cypher statement to query a graph database.Instructions:Use only the provided relationship types and properties in the schema.Do not use any other relationship types or properties that are not provided.Schema:{schema}Note: Do not include any explanations or apologies in your responses.Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.Do not include any text except the generated Cypher statement.Examples: Here are a few examples of generated Cypher statements for particular questions:# How many people played in Top Gun?MATCH (m:Movie {{title:"Top Gun"}})<-[:ACTED_IN]-()RETURN count(*) AS numberOfActorsThe question is:{question}"""CYPHER_GENERATION_PROMPT = PromptTemplate( input_variables=["schema", "question"], template=CYPHER_GENERATION_TEMPLATE)chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, cypher_prompt=CYPHER_GENERATION_PROMPT)chain.run("How many people played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (m:Movie {name:"Top Gun"})<-[:ACTED_IN]-(:Actor) RETURN count(*) AS numberOfActors Full Context: [{'numberOfActors': 4}] > Finished chain. 
'Four people played in Top Gun.'Use separate LLMs for Cypher and answer generationYou can use the cypher_llm and qa_llm parameters to define different llmschain = GraphCypherQAChain.from_llm( graph=graph, cypher_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"), qa_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k"), verbose=True,)chain.run("Who played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name Full Context: [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}] > Finished chain. 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'Ignore specified node and relationship typesYou can use include_types or exclude_types to ignore parts of the graph schema when generating Cypher statements.chain = GraphCypherQAChain.from_llm( graph=graph, cypher_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"), qa_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k"), verbose=True, exclude_types=['Movie'])# Inspect graph schemaprint(chain.graph_schema) Node properties are the following: {'Actor': [{'property': 'name', 'type': 'STRING'}]} Relationships properties are the following: {} Relationships are: []Validate generated Cypher statementsYou can use the validate_cypher parameter to validate and correct relationship directions in generated Cypher statementschain = GraphCypherQAChain.from_llm( llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"), graph=graph, verbose=True, validate_cypher=True)chain.run("Who played in Top Gun?") > Entering new GraphCypherQAChain chain... Generated Cypher: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'}) RETURN a.name Full Context: [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}] > Finished chain. 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'PreviousArangoDB QA chainNextFalkorDBQAChainSeeding the databaseRefresh graph schema informationQuerying the graphLimit the number of resultsReturn intermediate resultsReturn direct resultsAdd examples in the Cypher generation promptUse separate LLMs for Cypher and answer generation |
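Since the seeding query is plain Cypher, you can grow the toy dataset with the same pattern and re-query. The values below are illustrative and not part of the original notebook.
# Hedged sketch: add a second movie with the same MERGE pattern, refresh the schema, and ask again.
graph.query(
    """MERGE (m:Movie {name:"Days of Thunder"})
WITH m
UNWIND ["Tom Cruise", "Nicole Kidman"] AS actor
MERGE (a:Actor {name:actor})
MERGE (a)-[:ACTED_IN]->(m)"""
)
graph.refresh_schema()
print(chain.run("Which movies did Tom Cruise act in?"))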
142 | https://python.langchain.com/docs/use_cases/more/graph/graph_falkordb_qa | MoreAnalyzing graph dataFalkorDBQAChainOn this pageFalkorDBQAChainThis notebook shows how to use LLMs to provide a natural language interface to FalkorDB database.FalkorDB is a low latency property graph database management system. You can simply run its docker locally:docker run -p 6379:6379 -it --rm falkordb/falkordb:edgeOnce launched, you can simply start creating a database on the local machine and connect to it.from langchain.chat_models import ChatOpenAIfrom langchain.graphs import FalkorDBGraphfrom langchain.chains import FalkorDBQAChainCreate a graph connection and insert some demo data.graph = FalkorDBGraph(database="movies")graph.query(""" CREATE (al:Person {name: 'Al Pacino', birthDate: '1940-04-25'}), (robert:Person {name: 'Robert De Niro', birthDate: '1943-08-17'}), (tom:Person {name: 'Tom Cruise', birthDate: '1962-07-3'}), (val:Person {name: 'Val Kilmer', birthDate: '1959-12-31'}), (anthony:Person {name: 'Anthony Edwards', birthDate: '1962-7-19'}), (meg:Person {name: 'Meg Ryan', birthDate: '1961-11-19'}), (god1:Movie {title: 'The Godfather'}), (god2:Movie {title: 'The Godfather: Part II'}), (god3:Movie {title: 'The Godfather Coda: The Death of Michael Corleone'}), (top:Movie {title: 'Top Gun'}), (al)-[:ACTED_IN]->(god1), (al)-[:ACTED_IN]->(god2), (al)-[:ACTED_IN]->(god3), (robert)-[:ACTED_IN]->(god2), (tom)-[:ACTED_IN]->(top), (val)-[:ACTED_IN]->(top), (anthony)-[:ACTED_IN]->(top), (meg)-[:ACTED_IN]->(top)""") []Creating FalkorDBQAChaingraph.refresh_schema()print(graph.schema)import osos.environ['OPENAI_API_KEY']='API_KEY_HERE' Node properties: [[OrderedDict([('label', None), ('properties', ['name', 'birthDate', 'title'])])]] Relationships properties: [[OrderedDict([('type', None), ('properties', [])])]] Relationships: [['(:Person)-[:ACTED_IN]->(:Movie)']] chain = FalkorDBQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True)Querying the graphchain.run("Who played in Top Gun?") > Entering new FalkorDBQAChain chain... Generated Cypher: MATCH (p:Person)-[:ACTED_IN]->(m:Movie) WHERE m.title = 'Top Gun' RETURN p.name Full Context: [['Tom Cruise'], ['Val Kilmer'], ['Anthony Edwards'], ['Meg Ryan'], ['Tom Cruise'], ['Val Kilmer'], ['Anthony Edwards'], ['Meg Ryan']] > Finished chain. 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'chain.run("Who is the oldest actor who played in The Godfather: Part II?") > Entering new FalkorDBQAChain chain... Generated Cypher: MATCH (p:Person)-[r:ACTED_IN]->(m:Movie) WHERE m.title = 'The Godfather: Part II' RETURN p.name ORDER BY p.birthDate ASC LIMIT 1 Full Context: [['Al Pacino']] > Finished chain. 'The oldest actor who played in The Godfather: Part II is Al Pacino.'chain.run("Robert De Niro played in which movies?") > Entering new FalkorDBQAChain chain... Generated Cypher: MATCH (p:Person {name: 'Robert De Niro'})-[:ACTED_IN]->(m:Movie) RETURN m.title Full Context: [['The Godfather: Part II'], ['The Godfather: Part II']] > Finished chain. 'Robert De Niro played in "The Godfather: Part II".'PreviousNeo4j DB QA chainNextHugeGraph QA ChainCreate a graph connection and insert some demo data.Creating FalkorDBQAChainQuerying the graph |
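The Top Gun query above returned duplicated rows in the raw context. As a sanity check (not in the original notebook), you can bypass the LLM and run Cypher directly through the graph object, or phrase the question so the generated query aggregates.
# Direct Cypher against the FalkorDB graph object defined above.
print(graph.query(
    "MATCH (p:Person)-[:ACTED_IN]->(m:Movie {title:'Top Gun'}) RETURN DISTINCT p.name"
))
# A question phrased to aggregate, so duplicate rows matter less.
print(chain.run("How many distinct actors played in Top Gun?"))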
143 | https://python.langchain.com/docs/use_cases/more/graph/graph_hugegraph_qa | MoreAnalyzing graph dataHugeGraph QA ChainOn this pageHugeGraph QA ChainThis notebook shows how to use LLMs to provide a natural language interface to a HugeGraph database.You will need to have a running HugeGraph instance.
You can run a local docker container by running the executing the following script:docker run \ --name=graph \ -itd \ -p 8080:8080 \ hugegraph/hugegraphIf we want to connect HugeGraph in the application, we need to install python sdk:pip3 install hugegraph-pythonIf you are using the docker container, you need to wait a couple of second for the database to start, and then we need create schema and write graph data for the database.from hugegraph.connection import PyHugeGraphclient = PyHugeGraph("localhost", "8080", user="admin", pwd="admin", graph="hugegraph")First, we create the schema for a simple movie database:"""schema"""schema = client.schema()schema.propertyKey("name").asText().ifNotExist().create()schema.propertyKey("birthDate").asText().ifNotExist().create()schema.vertexLabel("Person").properties( "name", "birthDate").usePrimaryKeyId().primaryKeys("name").ifNotExist().create()schema.vertexLabel("Movie").properties("name").usePrimaryKeyId().primaryKeys( "name").ifNotExist().create()schema.edgeLabel("ActedIn").sourceLabel("Person").targetLabel( "Movie").ifNotExist().create() 'create EdgeLabel success, Detail: "b\'{"id":1,"name":"ActedIn","source_label":"Person","target_label":"Movie","frequency":"SINGLE","sort_keys":[],"nullable_keys":[],"index_labels":[],"properties":[],"status":"CREATED","ttl":0,"enable_label_index":true,"user_data":{"~create_time":"2023-07-04 10:48:47.908"}}\'"'Then we can insert some data."""graph"""g = client.graph()g.addVertex("Person", {"name": "Al Pacino", "birthDate": "1940-04-25"})g.addVertex("Person", {"name": "Robert De Niro", "birthDate": "1943-08-17"})g.addVertex("Movie", {"name": "The Godfather"})g.addVertex("Movie", {"name": "The Godfather Part II"})g.addVertex("Movie", {"name": "The Godfather Coda The Death of Michael Corleone"})g.addEdge("ActedIn", "1:Al Pacino", "2:The Godfather", {})g.addEdge("ActedIn", "1:Al Pacino", "2:The Godfather Part II", {})g.addEdge( "ActedIn", "1:Al Pacino", "2:The Godfather Coda The Death of Michael Corleone", {})g.addEdge("ActedIn", "1:Robert De Niro", "2:The Godfather Part II", {}) 1:Robert De Niro--ActedIn-->2:The Godfather Part IICreating HugeGraphQAChainWe can now create the HugeGraph and HugeGraphQAChain. To create the HugeGraph we simply need to pass the database object to the HugeGraph constructor.from langchain.chat_models import ChatOpenAIfrom langchain.chains import HugeGraphQAChainfrom langchain.graphs import HugeGraphgraph = HugeGraph( username="admin", password="admin", address="localhost", port=8080, graph="hugegraph",)Refresh graph schema informationIf the schema of database changes, you can refresh the schema information needed to generate Gremlin statements.# graph.refresh_schema()print(graph.get_schema) Node properties: [name: Person, primary_keys: ['name'], properties: ['name', 'birthDate'], name: Movie, primary_keys: ['name'], properties: ['name']] Edge properties: [name: ActedIn, properties: []] Relationships: ['Person--ActedIn-->Movie'] Querying the graphWe can now use the graph Gremlin QA chain to ask question of the graphchain = HugeGraphQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)chain.run("Who played in The Godfather?") > Entering new chain... Generated gremlin: g.V().has('Movie', 'name', 'The Godfather').in('ActedIn').valueMap(true) Full Context: [{'id': '1:Al Pacino', 'label': 'Person', 'name': ['Al Pacino'], 'birthDate': ['1940-04-25']}] > Finished chain. 
'Al Pacino played in The Godfather.'PreviousFalkorDBQAChainNextKuzuQAChainCreating HugeGraphQAChainRefresh graph schema informationQuerying the graph |
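The same PyHugeGraph client can keep adding data after the chain is built. In the sketch below (not from the original notebook) the "1:"/"2:" id prefixes simply follow the pattern used above and are an assumption about the auto-generated primary-key ids.
# Hedged sketch: add one more actor and edge, then ask the chain again.
g.addVertex("Person", {"name": "Diane Keaton", "birthDate": "1946-01-05"})
g.addEdge("ActedIn", "1:Diane Keaton", "2:The Godfather", {})
print(chain.run("Who played in The Godfather?"))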
144 | https://python.langchain.com/docs/use_cases/more/graph/graph_kuzu_qa | MoreAnalyzing graph dataKuzuQAChainOn this pageKuzuQAChainThis notebook shows how to use LLMs to provide a natural language interface to Kùzu database.Kùzu is an in-process property graph database management system. You can simply install it with pip:pip install kuzuOnce installed, you can simply import it and start creating a database on the local machine and connect to it:import kuzudb = kuzu.Database("test_db")conn = kuzu.Connection(db)First, we create the schema for a simple movie database:conn.execute("CREATE NODE TABLE Movie (name STRING, PRIMARY KEY(name))")conn.execute( "CREATE NODE TABLE Person (name STRING, birthDate STRING, PRIMARY KEY(name))")conn.execute("CREATE REL TABLE ActedIn (FROM Person TO Movie)") <kuzu.query_result.QueryResult at 0x1066ff410>Then we can insert some data.conn.execute("CREATE (:Person {name: 'Al Pacino', birthDate: '1940-04-25'})")conn.execute("CREATE (:Person {name: 'Robert De Niro', birthDate: '1943-08-17'})")conn.execute("CREATE (:Movie {name: 'The Godfather'})")conn.execute("CREATE (:Movie {name: 'The Godfather: Part II'})")conn.execute( "CREATE (:Movie {name: 'The Godfather Coda: The Death of Michael Corleone'})")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather' CREATE (p)-[:ActedIn]->(m)")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather: Part II' CREATE (p)-[:ActedIn]->(m)")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Al Pacino' AND m.name = 'The Godfather Coda: The Death of Michael Corleone' CREATE (p)-[:ActedIn]->(m)")conn.execute( "MATCH (p:Person), (m:Movie) WHERE p.name = 'Robert De Niro' AND m.name = 'The Godfather: Part II' CREATE (p)-[:ActedIn]->(m)") <kuzu.query_result.QueryResult at 0x107016210>Creating KuzuQAChainWe can now create the KuzuGraph and KuzuQAChain. To create the KuzuGraph we simply need to pass the database object to the KuzuGraph constructor.from langchain.chat_models import ChatOpenAIfrom langchain.graphs import KuzuGraphfrom langchain.chains import KuzuQAChaingraph = KuzuGraph(db)chain = KuzuQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)Refresh graph schema informationIf the schema of database changes, you can refresh the schema information needed to generate Cypher statements.# graph.refresh_schema()print(graph.get_schema) Node properties: [{'properties': [('name', 'STRING')], 'label': 'Movie'}, {'properties': [('name', 'STRING'), ('birthDate', 'STRING')], 'label': 'Person'}] Relationships properties: [{'properties': [], 'label': 'ActedIn'}] Relationships: ['(:Person)-[:ActedIn]->(:Movie)'] Querying the graphWe can now use the KuzuQAChain to ask question of the graphchain.run("Who played in The Godfather: Part II?") > Entering new chain... Generated Cypher: MATCH (p:Person)-[:ActedIn]->(m:Movie {name: 'The Godfather: Part II'}) RETURN p.name Full Context: [{'p.name': 'Al Pacino'}, {'p.name': 'Robert De Niro'}] > Finished chain. 'Al Pacino and Robert De Niro both played in The Godfather: Part II.'chain.run("Robert De Niro played in which movies?") > Entering new chain... Generated Cypher: MATCH (p:Person {name: 'Robert De Niro'})-[:ActedIn]->(m:Movie) RETURN m.name Full Context: [{'m.name': 'The Godfather: Part II'}] > Finished chain. 'Robert De Niro played in The Godfather: Part II.'chain.run("Robert De Niro is born in which year?") > Entering new chain... 
Generated Cypher: MATCH (p:Person {name: 'Robert De Niro'})-[:ActedIn]->(m:Movie) RETURN p.birthDate Full Context: [{'p.birthDate': '1943-08-17'}] > Finished chain. 'Robert De Niro was born on August 17, 1943.'chain.run("Who is the oldest actor who played in The Godfather: Part II?") > Entering new chain... Generated Cypher: MATCH (p:Person)-[:ActedIn]->(m:Movie{name:'The Godfather: Part II'}) WITH p, m, p.birthDate AS birthDate ORDER BY birthDate ASC LIMIT 1 RETURN p.name Full Context: [{'p.name': 'Al Pacino'}] > Finished chain. 'The oldest actor who played in The Godfather: Part II is Al Pacino.'PreviousHugeGraph QA ChainNextMemgraph QA chainCreating KuzuQAChainRefresh graph schema informationQuerying the graph |
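Because Kùzu accepts Cypher through conn.execute, extending the toy dataset only takes a few more statements. The names and dates below are illustrative, not from the original notebook.
# Hedged sketch: insert another person and relationship, then query through the chain.
conn.execute("CREATE (:Person {name: 'Diane Keaton', birthDate: '1946-01-05'})")
conn.execute(
    "MATCH (p:Person), (m:Movie) WHERE p.name = 'Diane Keaton' AND m.name = 'The Godfather' "
    "CREATE (p)-[:ActedIn]->(m)"
)
print(chain.run("Who played in The Godfather?"))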
145 | https://python.langchain.com/docs/use_cases/more/graph/graph_memgraph_qa | MoreAnalyzing graph dataMemgraph QA chainOn this pageMemgraph QA chainThis notebook shows how to use LLMs to provide a natural language interface to a Memgraph database. To complete this tutorial, you will need Docker and Python 3.x installed.To follow along with this tutorial, ensure you have a running Memgraph instance. You can download and run it in a local Docker container by executing the following script:docker run \ -it \ -p 7687:7687 \ -p 7444:7444 \ -p 3000:3000 \ -e MEMGRAPH="--bolt-server-name-for-init=Neo4j/" \ -v mg_lib:/var/lib/memgraph memgraph/memgraph-platformYou will need to wait a few seconds for the database to start. If the process completes successfully, you should see something like this:mgconsole X.XConnected to 'memgraph://127.0.0.1:7687'Type :help for shell usageQuit the shell by typing Ctrl-D(eof) or :quitmemgraph>Now you can start playing with Memgraph!Begin by installing and importing all the necessary packages. We'll use the package manager called pip, along with the --user flag, to ensure proper permissions. If you've installed Python 3.4 or a later version, pip is included by default. You can install all the required packages using the following command:pip install langchain openai neo4j gqlalchemy --userYou can either run the provided code blocks in this notebook or use a separate Python file to experiment with Memgraph and LangChain.from langchain.chat_models import ChatOpenAIfrom langchain.chains import GraphCypherQAChainfrom langchain.graphs import MemgraphGraphfrom langchain.prompts import PromptTemplatefrom gqlalchemy import Memgraphimport osWe're utilizing the Python library GQLAlchemy to establish a connection between our Memgraph database and Python script. To execute queries, we can set up a Memgraph instance as follows:memgraph = Memgraph(host='127.0.0.1', port=7687)Populating the databaseYou can effortlessly populate your new, empty database using the Cypher query language. Don't worry if you don't grasp every line just yet, you can learn Cypher from the documentation here. Running the following script will execute a seeding query on the database, giving us data about a video game, including details like the publisher, available platforms, and genres. This data will serve as a basis for our work.# Creating and executing the seeding queryquery = """ MERGE (g:Game {name: "Baldur's Gate 3"}) WITH g, ["PlayStation 5", "Mac OS", "Windows", "Xbox Series X/S"] AS platforms, ["Adventure", "Role-Playing Game", "Strategy"] AS genres FOREACH (platform IN platforms | MERGE (p:Platform {name: platform}) MERGE (g)-[:AVAILABLE_ON]->(p) ) FOREACH (genre IN genres | MERGE (gn:Genre {name: genre}) MERGE (g)-[:HAS_GENRE]->(gn) ) MERGE (p:Publisher {name: "Larian Studios"}) MERGE (g)-[:PUBLISHED_BY]->(p);"""memgraph.execute(query)Refresh graph schemaYou're all set to instantiate the Memgraph-LangChain graph using the following script. 
This interface will allow us to query our database using LangChain, automatically creating the required graph schema for generating Cypher queries through LLM.graph = MemgraphGraph(url="bolt://localhost:7687", username="", password="")If necessary, you can manually refresh the graph schema as follows.graph.refresh_schema()To familiarize yourself with the data and verify the updated graph schema, you can print it using the following statement.print(graph.schema)Node properties are the following:Node name: 'Game', Node properties: [{'property': 'name', 'type': 'str'}]Node name: 'Platform', Node properties: [{'property': 'name', 'type': 'str'}]Node name: 'Genre', Node properties: [{'property': 'name', 'type': 'str'}]Node name: 'Publisher', Node properties: [{'property': 'name', 'type': 'str'}]Relationship properties are the following:The relationships are the following:['(:Game)-[:AVAILABLE_ON]->(:Platform)']['(:Game)-[:HAS_GENRE]->(:Genre)']['(:Game)-[:PUBLISHED_BY]->(:Publisher)']Querying the databaseTo interact with the OpenAI API, you must configure your API key as an environment variable using the Python os package. This ensures proper authorization for your requests. You can find more information on obtaining your API key here.os.environ["OPENAI_API_KEY"] = "your-key-here"You should create the graph chain using the following script, which will be utilized in the question-answering process based on your graph data. While it defaults to GPT-3.5-turbo, you might also consider experimenting with other models like GPT-4 for notably improved Cypher queries and outcomes. We'll utilize the OpenAI chat, utilizing the key you previously configured. We'll set the temperature to zero, ensuring predictable and consistent answers. Additionally, we'll use our Memgraph-LangChain graph and set the verbose parameter, which defaults to False, to True to receive more detailed messages regarding query generation.chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, model_name='gpt-3.5-turbo')Now you can start asking questions!response = chain.run("Which platforms is Baldur's Gate 3 available on?")print(response)> Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (g:Game {name: 'Baldur\'s Gate 3'})-[:AVAILABLE_ON]->(p:Platform)RETURN p.nameFull Context:[{'p.name': 'PlayStation 5'}, {'p.name': 'Mac OS'}, {'p.name': 'Windows'}, {'p.name': 'Xbox Series X/S'}]> Finished chain.Baldur's Gate 3 is available on PlayStation 5, Mac OS, Windows, and Xbox Series X/S.response = chain.run("Is Baldur's Gate 3 available on Windows?")print(response)> Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (:Game {name: 'Baldur\'s Gate 3'})-[:AVAILABLE_ON]->(:Platform {name: 'Windows'})RETURN trueFull Context:[{'true': True}]> Finished chain.Yes, Baldur's Gate 3 is available on Windows.Chain modifiersTo modify the behavior of your chain and obtain more context or additional information, you can modify the chain's parameters.Return direct query resultsThe return_direct modifier specifies whether to return the direct results of the executed Cypher query or the processed natural language response.# Return the result of querying the graph directlychain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, return_direct=True)response = chain.run("Which studio published Baldur's Gate 3?")print(response)> Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (:Game {name: 'Baldur\'s Gate 3'})-[:PUBLISHED_BY]->(p:Publisher)RETURN p.name> 
Finished chain.[{'p.name': 'Larian Studios'}]Return query intermediate stepsThe return_intermediate_steps chain modifier enhances the returned response by including the intermediate steps of the query in addition to the initial query result.# Return all the intermediate steps of query executionchain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, return_intermediate_steps=True)response = chain("Is Baldur's Gate 3 an Adventure game?")print(f"Intermediate steps: {response['intermediate_steps']}")print(f"Final response: {response['result']}")> Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (g:Game {name: 'Baldur\'s Gate 3'})-[:HAS_GENRE]->(genre:Genre {name: 'Adventure'})RETURN g, genreFull Context:[{'g': {'name': "Baldur's Gate 3"}, 'genre': {'name': 'Adventure'}}]> Finished chain.Intermediate steps: [{'query': "MATCH (g:Game {name: 'Baldur\\'s Gate 3'})-[:HAS_GENRE]->(genre:Genre {name: 'Adventure'})\nRETURN g, genre"}, {'context': [{'g': {'name': "Baldur's Gate 3"}, 'genre': {'name': 'Adventure'}}]}]Final response: Yes, Baldur's Gate 3 is an Adventure game.Limit the number of query resultsThe top_k modifier can be used when you want to restrict the maximum number of query results.# Limit the maximum number of results returned by querychain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, top_k=2)response = chain.run("What genres are associated with Baldur's Gate 3?")print(response)> Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (:Game {name: 'Baldur\'s Gate 3'})-[:HAS_GENRE]->(g:Genre)RETURN g.nameFull Context:[{'g.name': 'Adventure'}, {'g.name': 'Role-Playing Game'}]> Finished chain.Baldur's Gate 3 is associated with the genres Adventure and Role-Playing Game.Advanced queryingAs the complexity of your solution grows, you might encounter different use-cases that require careful handling. Ensuring your application's scalability is essential to maintain a smooth user flow without any hitches.Let's instantiate our chain once again and attempt to ask some questions that users might potentially ask.chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True, model_name='gpt-3.5-turbo')response = chain.run("Is Baldur's Gate 3 available on PS5?")print(response)> Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (g:Game {name: 'Baldur\'s Gate 3'})-[:AVAILABLE_ON]->(p:Platform {name: 'PS5'})RETURN g.name, p.nameFull Context:[]> Finished chain.I'm sorry, but I don't have the information to answer your question.The generated Cypher query looks fine, but we didn't receive any information in response. This illustrates a common challenge when working with LLMs - the misalignment between how users phrase queries and how data is stored. In this case, the difference between user perception and the actual data storage can cause mismatches. Prompt refinement, the process of honing the model's prompts to better grasp these distinctions, is an efficient solution that tackles this issue. Through prompt refinement, the model gains increased proficiency in generating precise and pertinent queries, leading to the successful retrieval of the desired data.Prompt refinementTo address this, we can adjust the initial Cypher prompt of the QA chain. This involves adding guidance to the LLM on how users can refer to specific platforms, such as PS5 in our case. We achieve this using the LangChain PromptTemplate, creating a modified initial prompt. 
This modified prompt is then supplied as an argument to our refined Memgraph-LangChain instance.CYPHER_GENERATION_TEMPLATE = """Task:Generate Cypher statement to query a graph database.Instructions:Use only the provided relationship types and properties in the schema.Do not use any other relationship types or properties that are not provided.Schema:{schema}Note: Do not include any explanations or apologies in your responses.Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.Do not include any text except the generated Cypher statement.If the user asks about PS5, Play Station 5 or PS 5, that is the platform called PlayStation 5.The question is:{question}"""CYPHER_GENERATION_PROMPT = PromptTemplate( input_variables=["schema", "question"], template=CYPHER_GENERATION_TEMPLATE)chain = GraphCypherQAChain.from_llm( ChatOpenAI(temperature=0), cypher_prompt=CYPHER_GENERATION_PROMPT, graph=graph, verbose=True, model_name='gpt-3.5-turbo')response = chain.run("Is Baldur's Gate 3 available on PS5?")print(response)> Entering new GraphCypherQAChain chain...Generated Cypher:MATCH (g:Game {name: 'Baldur\'s Gate 3'})-[:AVAILABLE_ON]->(p:Platform {name: 'PlayStation 5'})RETURN g.name, p.nameFull Context:[{'g.name': "Baldur's Gate 3", 'p.name': 'PlayStation 5'}]> Finished chain.Yes, Baldur's Gate 3 is available on PlayStation 5.Now, with the revised initial Cypher prompt that includes guidance on platform naming, we are obtaining accurate and relevant results that align more closely with user queries. This approach allows for further improvement of your QA chain. You can effortlessly integrate extra prompt refinement data into your chain, thereby enhancing the overall user experience of your app.PreviousKuzuQAChainNextNebulaGraphQAChainPopulating the databaseRefresh graph schemaQuerying the databaseChain modifiersPrompt refinement |
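The same prompt-refinement idea generalizes to other aliases. The sketch below (not from the original notebook) derives a new template from CYPHER_GENERATION_TEMPLATE by inserting one extra guidance line; the alias wording is illustrative.
# Hedged sketch: extend the refined prompt with another alias and rebuild the chain.
BG3_TEMPLATE = CYPHER_GENERATION_TEMPLATE.replace(
    "The question is:",
    "If the user says BG3, they mean the game called Baldur's Gate 3.\nThe question is:",
)
BG3_PROMPT = PromptTemplate(input_variables=["schema", "question"], template=BG3_TEMPLATE)
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0), cypher_prompt=BG3_PROMPT, graph=graph, verbose=True
)
print(chain.run("Is BG3 available on Windows?"))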
146 | https://python.langchain.com/docs/use_cases/more/graph/graph_nebula_qa | MoreAnalyzing graph dataNebulaGraphQAChainOn this pageNebulaGraphQAChainThis notebook shows how to use LLMs to provide a natural language interface to NebulaGraph database.You will need to have a running NebulaGraph cluster, for which you can run a containerized cluster by running the following script:curl -fsSL nebula-up.siwei.io/install.sh | bashOther options are:Install as a Docker Desktop Extension. See hereNebulaGraph Cloud Service. See hereDeploy from package, source code, or via Kubernetes. See hereOnce the cluster is running, we could create the SPACE and SCHEMA for the database.# connect ngql jupyter extension to nebulagraph# create a new space%ngql CREATE SPACE IF NOT EXISTS langchain(partition_num=1, replica_factor=1, vid_type=fixed_string(128));# Wait for a few seconds for the space to be created.%ngql USE langchain;Create the schema, for full dataset, refer here.CREATE TAG IF NOT EXISTS movie(name string);CREATE TAG IF NOT EXISTS person(name string, birthdate string);CREATE EDGE IF NOT EXISTS acted_in();CREATE TAG INDEX IF NOT EXISTS person_index ON person(name(128));CREATE TAG INDEX IF NOT EXISTS movie_index ON movie(name(128));Wait for schema creation to complete, then we can insert some data.INSERT VERTEX person(name, birthdate) VALUES "Al Pacino":("Al Pacino", "1940-04-25");INSERT VERTEX movie(name) VALUES "The Godfather II":("The Godfather II");INSERT VERTEX movie(name) VALUES "The Godfather Coda: The Death of Michael Corleone":("The Godfather Coda: The Death of Michael Corleone");INSERT EDGE acted_in() VALUES "Al Pacino"->"The Godfather II":();INSERT EDGE acted_in() VALUES "Al Pacino"->"The Godfather Coda: The Death of Michael Corleone":(); UsageError: Cell magic `%%ngql` not found.from langchain.chat_models import ChatOpenAIfrom langchain.chains import NebulaGraphQAChainfrom langchain.graphs import NebulaGraphgraph = NebulaGraph( space="langchain", username="root", password="nebula", address="127.0.0.1", port=9669, session_pool_size=30,)Refresh graph schema informationIf the schema of database changes, you can refresh the schema information needed to generate nGQL statements.# graph.refresh_schema()print(graph.get_schema) Node properties: [{'tag': 'movie', 'properties': [('name', 'string')]}, {'tag': 'person', 'properties': [('name', 'string'), ('birthdate', 'string')]}] Edge properties: [{'edge': 'acted_in', 'properties': []}] Relationships: ['(:person)-[:acted_in]->(:movie)'] Querying the graphWe can now use the graph cypher QA chain to ask question of the graphchain = NebulaGraphQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True)chain.run("Who played in The Godfather II?") > Entering new NebulaGraphQAChain chain... Generated nGQL: MATCH (p:`person`)-[:acted_in]->(m:`movie`) WHERE m.`movie`.`name` == 'The Godfather II' RETURN p.`person`.`name` Full Context: {'p.person.name': ['Al Pacino']} > Finished chain. 'Al Pacino played in The Godfather II.'PreviousMemgraph QA chainNextGraph QARefresh graph schema informationQuerying the graph |
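The NebulaGraphQAChain defined above can be reused for further questions against the same space; the follow-ups below are illustrative and not part of the original notebook.
for question in [
    "Which movies did Al Pacino act in?",
    "Who played in The Godfather Coda: The Death of Michael Corleone?",
]:
    print(chain.run(question))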
147 | https://python.langchain.com/docs/use_cases/more/graph/graph_qa | MoreAnalyzing graph dataGraph QAOn this pageGraph QAThis notebook goes over how to do question answering over a graph data structure.Create the graphIn this section, we construct an example graph. At the moment, this works best for small pieces of text.from langchain.indexes import GraphIndexCreatorfrom langchain.llms import OpenAIfrom langchain.document_loaders import TextLoaderindex_creator = GraphIndexCreator(llm=OpenAI(temperature=0))with open("../../../modules/state_of_the_union.txt") as f: all_text = f.read()We will use just a small snippet, because extracting the knowledge triplets is a bit intensive at the moment.text = "\n".join(all_text.split("\n\n")[105:108])text 'It won’t look like much, but if you stop and look closely, you’ll see a “Field of dreams,” the ground on which America’s future will be built. \nThis is where Intel, the American company that helped build Silicon Valley, is going to build its $20 billion semiconductor “mega site”. \nUp to eight state-of-the-art factories in one place. 10,000 new good-paying jobs. 'graph = index_creator.from_text(text)We can inspect the created graph.graph.get_triples() [('Intel', '$20 billion semiconductor "mega site"', 'is going to build'), ('Intel', 'state-of-the-art factories', 'is building'), ('Intel', '10,000 new good-paying jobs', 'is creating'), ('Intel', 'Silicon Valley', 'is helping build'), ('Field of dreams', "America's future will be built", 'is the ground on which')]Querying the graphWe can now use the graph QA chain to ask question of the graphfrom langchain.chains import GraphQAChainchain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=graph, verbose=True)chain.run("what is Intel going to build?") > Entering new GraphQAChain chain... Entities Extracted: Intel Full Context: Intel is going to build $20 billion semiconductor "mega site" Intel is building state-of-the-art factories Intel is creating 10,000 new good-paying jobs Intel is helping build Silicon Valley > Finished chain. ' Intel is going to build a $20 billion semiconductor "mega site" with state-of-the-art factories, creating 10,000 new good-paying jobs and helping to build Silicon Valley.'Save the graphWe can also save and load the graph.graph.write_to_gml("graph.gml")from langchain.indexes.graph import NetworkxEntityGraphloaded_graph = NetworkxEntityGraph.from_gml("graph.gml")loaded_graph.get_triples() [('Intel', '$20 billion semiconductor "mega site"', 'is going to build'), ('Intel', 'state-of-the-art factories', 'is building'), ('Intel', '10,000 new good-paying jobs', 'is creating'), ('Intel', 'Silicon Valley', 'is helping build'), ('Field of dreams', "America's future will be built", 'is the ground on which')]PreviousNebulaGraphQAChainNextGraphSparqlQAChainCreate the graphQuerying the graphSave the graph |
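GraphIndexCreator works on any short string, so you can experiment without the state_of_the_union.txt file. The sketch below is illustrative and not part of the original notebook.
# Hedged sketch: build a tiny graph from an ad-hoc sentence and query it.
small_text = "Ada Lovelace worked with Charles Babbage on the Analytical Engine."
small_graph = index_creator.from_text(small_text)
print(small_graph.get_triples())
small_chain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=small_graph, verbose=True)
print(small_chain.run("Who did Ada Lovelace work with?"))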
148 | https://python.langchain.com/docs/use_cases/more/graph/graph_sparql_qa | MoreAnalyzing graph dataGraphSparqlQAChainOn this pageGraphSparqlQAChainGraph databases are an excellent choice for applications based on network-like models. To standardize the syntax and semantics of such graphs, the W3C recommends Semantic Web Technologies, cf. Semantic Web. SPARQL serves as a query language analogously to SQL or Cypher for these graphs. This notebook demonstrates the application of LLMs as a natural language interface to a graph database by generating SPARQL.
Disclaimer: To date, SPARQL query generation via LLMs is still a bit unstable. Be especially careful with UPDATE queries, which alter the graph.There are several sources you can run queries against, including files on the web, files you have available locally, SPARQL endpoints, e.g., Wikidata, and triple stores.from langchain.chat_models import ChatOpenAIfrom langchain.chains import GraphSparqlQAChainfrom langchain.graphs import RdfGraphgraph = RdfGraph( source_file="http://www.w3.org/People/Berners-Lee/card", standard="rdf", local_copy="test.ttl",)Note that providing a local_file is necessary for storing changes locally if the source is read-only.Refresh graph schema informationIf the schema of the database changes, you can refresh the schema information needed to generate SPARQL queries.graph.load_schema()graph.get_schema In the following, each IRI is followed by the local name and optionally its description in parentheses. The RDF graph supports the following node types: <http://xmlns.com/foaf/0.1/PersonalProfileDocument> (PersonalProfileDocument, None), <http://www.w3.org/ns/auth/cert#RSAPublicKey> (RSAPublicKey, None), <http://www.w3.org/2000/10/swap/pim/contact#Male> (Male, None), <http://xmlns.com/foaf/0.1/Person> (Person, None), <http://www.w3.org/2006/vcard/ns#Work> (Work, None) The RDF graph supports the following relationships: <http://www.w3.org/2000/01/rdf-schema#seeAlso> (seeAlso, None), <http://purl.org/dc/elements/1.1/title> (title, None), <http://xmlns.com/foaf/0.1/mbox_sha1sum> (mbox_sha1sum, None), <http://xmlns.com/foaf/0.1/maker> (maker, None), <http://www.w3.org/ns/solid/terms#oidcIssuer> (oidcIssuer, None), <http://www.w3.org/2000/10/swap/pim/contact#publicHomePage> (publicHomePage, None), <http://xmlns.com/foaf/0.1/openid> (openid, None), <http://www.w3.org/ns/pim/space#storage> (storage, None), <http://xmlns.com/foaf/0.1/name> (name, None), <http://www.w3.org/2000/10/swap/pim/contact#country> (country, None), <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> (type, None), <http://www.w3.org/ns/solid/terms#profileHighlightColor> (profileHighlightColor, None), <http://www.w3.org/ns/pim/space#preferencesFile> (preferencesFile, None), <http://www.w3.org/2000/01/rdf-schema#label> (label, None), <http://www.w3.org/ns/auth/cert#modulus> (modulus, None), <http://www.w3.org/2000/10/swap/pim/contact#participant> (participant, None), <http://www.w3.org/2000/10/swap/pim/contact#street2> (street2, None), <http://www.w3.org/2006/vcard/ns#locality> (locality, None), <http://xmlns.com/foaf/0.1/nick> (nick, None), <http://xmlns.com/foaf/0.1/homepage> (homepage, None), <http://creativecommons.org/ns#license> (license, None), <http://xmlns.com/foaf/0.1/givenname> (givenname, None), <http://www.w3.org/2006/vcard/ns#street-address> (street-address, None), <http://www.w3.org/2006/vcard/ns#postal-code> (postal-code, None), <http://www.w3.org/2000/10/swap/pim/contact#street> (street, None), <http://www.w3.org/2003/01/geo/wgs84_pos#lat> (lat, None), <http://xmlns.com/foaf/0.1/primaryTopic> (primaryTopic, None), <http://www.w3.org/2006/vcard/ns#fn> (fn, None), <http://www.w3.org/2003/01/geo/wgs84_pos#location> (location, None), <http://usefulinc.com/ns/doap#developer> (developer, None), <http://www.w3.org/2000/10/swap/pim/contact#city> (city, None), <http://www.w3.org/2006/vcard/ns#region> (region, None), <http://xmlns.com/foaf/0.1/member> (member, None), <http://www.w3.org/2003/01/geo/wgs84_pos#long> (long, None), <http://www.w3.org/2000/10/swap/pim/contact#address> (address, None), 
<http://xmlns.com/foaf/0.1/family_name> (family_name, None), <http://xmlns.com/foaf/0.1/account> (account, None), <http://xmlns.com/foaf/0.1/workplaceHomepage> (workplaceHomepage, None), <http://purl.org/dc/terms/title> (title, None), <http://www.w3.org/ns/solid/terms#publicTypeIndex> (publicTypeIndex, None), <http://www.w3.org/2000/10/swap/pim/contact#office> (office, None), <http://www.w3.org/2000/10/swap/pim/contact#homePage> (homePage, None), <http://xmlns.com/foaf/0.1/mbox> (mbox, None), <http://www.w3.org/2000/10/swap/pim/contact#preferredURI> (preferredURI, None), <http://www.w3.org/ns/solid/terms#profileBackgroundColor> (profileBackgroundColor, None), <http://schema.org/owns> (owns, None), <http://xmlns.com/foaf/0.1/based_near> (based_near, None), <http://www.w3.org/2006/vcard/ns#hasAddress> (hasAddress, None), <http://xmlns.com/foaf/0.1/img> (img, None), <http://www.w3.org/2000/10/swap/pim/contact#assistant> (assistant, None), <http://xmlns.com/foaf/0.1/title> (title, None), <http://www.w3.org/ns/auth/cert#key> (key, None), <http://www.w3.org/ns/ldp#inbox> (inbox, None), <http://www.w3.org/ns/solid/terms#editableProfile> (editableProfile, None), <http://www.w3.org/2000/10/swap/pim/contact#postalCode> (postalCode, None), <http://xmlns.com/foaf/0.1/weblog> (weblog, None), <http://www.w3.org/ns/auth/cert#exponent> (exponent, None), <http://rdfs.org/sioc/ns#avatar> (avatar, None) Querying the graphNow, you can use the graph SPARQL QA chain to ask questions about the graph.chain = GraphSparqlQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True)chain.run("What is Tim Berners-Lee's work homepage?") > Entering new GraphSparqlQAChain chain... Identified intent: SELECT Generated SPARQL: PREFIX foaf: <http://xmlns.com/foaf/0.1/> SELECT ?homepage WHERE { ?person foaf:name "Tim Berners-Lee" . ?person foaf:workplaceHomepage ?homepage . } Full Context: [] > Finished chain. "Tim Berners-Lee's work homepage is http://www.w3.org/People/Berners-Lee/."Updating the graphAnalogously, you can update the graph, i.e., insert triples, using natural language.chain.run( "Save that the person with the name 'Timothy Berners-Lee' has a work homepage at 'http://www.w3.org/foo/bar/'") > Entering new GraphSparqlQAChain chain... Identified intent: UPDATE Generated SPARQL: PREFIX foaf: <http://xmlns.com/foaf/0.1/> INSERT { ?person foaf:workplaceHomepage <http://www.w3.org/foo/bar/> . } WHERE { ?person foaf:name "Timothy Berners-Lee" . } > Finished chain. 'Successfully inserted triples into the graph.'Let's verify the results:query = ( """PREFIX foaf: <http://xmlns.com/foaf/0.1/>\n""" """SELECT ?hp\n""" """WHERE {\n""" """ ?person foaf:name "Timothy Berners-Lee" . \n""" """ ?person foaf:workplaceHomepage ?hp .\n""" """}""")graph.query(query) [(rdflib.term.URIRef('https://www.w3.org/'),), (rdflib.term.URIRef('http://www.w3.org/foo/bar/'),)]PreviousGraph QANextNeptune Open Cypher QA ChainRefresh graph schema informationQuerying the graphUpdating the graph |
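As with the verification query above, you can always bypass the LLM and run SPARQL directly through graph.query to sanity-check what the chain generated; the query below is illustrative.
# Hedged sketch: list a few foaf:name values straight from the graph.
names_query = (
    """PREFIX foaf: <http://xmlns.com/foaf/0.1/>\n"""
    """SELECT ?name WHERE { ?person foaf:name ?name . } LIMIT 5"""
)
print(graph.query(names_query))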
149 | https://python.langchain.com/docs/use_cases/more/graph/neptune_cypher_qa | MoreAnalyzing graph dataNeptune Open Cypher QA ChainNeptune Open Cypher QA ChainThis QA chain queries a Neptune graph database using openCypher and returns a human-readable response.from langchain.graphs import NeptuneGraphhost = "<neptune-host>"port = 8182use_https = Truegraph = NeptuneGraph(host=host, port=port, use_https=use_https)from langchain.chat_models import ChatOpenAIfrom langchain.chains import NeptuneOpenCypherQAChainllm = ChatOpenAI(temperature=0, model="gpt-4")chain = NeptuneOpenCypherQAChain.from_llm(llm=llm, graph=graph)chain.run("how many outgoing routes does the Austin airport have?") 'The Austin airport has 98 outgoing routes.'PreviousGraphSparqlQAChainNextTree of Thought (ToT) example |
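The Neptune chain can be reused for further openCypher questions; in the sketch below (not from the original page) the verbose flag is assumed to behave as in the other graph QA chains and print the generated query.
# Hedged sketch: rebuild the chain with verbose output and ask a related question.
chain = NeptuneOpenCypherQAChain.from_llm(llm=llm, graph=graph, verbose=True)
print(chain.run("how many incoming routes does the Austin airport have?"))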
150 | https://python.langchain.com/docs/use_cases/more/graph/tot | MoreAnalyzing graph dataTree of Thought (ToT) exampleOn this pageTree of Thought (ToT) exampleThe Tree of Thought (ToT) is a chain that allows you to query a Large Language Model (LLM) using the Tree of Thought technique. This is based on the paper "Large Language Model Guided Tree-of-Thought"from langchain.llms import OpenAIllm = OpenAI(temperature=1, max_tokens=512, model="text-davinci-003") /Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.13) is available. It's recommended that you update to the latest version using `pip install -U deeplake`. warnings.warn(sudoku_puzzle = "3,*,*,2|1,*,3,*|*,1,*,3|4,*,*,1"sudoku_solution = "3,4,1,2|1,2,3,4|2,1,4,3|4,3,2,1"problem_description = f"""{sudoku_puzzle}- This is a 4x4 Sudoku puzzle.- The * represents a cell to be filled.- The | character separates rows.- At each step, replace one or more * with digits 1-4.- There must be no duplicate digits in any row, column or 2x2 subgrid.- Keep the known digits from previous valid thoughts in place.- Each thought can be a partial or the final solution.""".strip()print(problem_description) 3,*,*,2|1,*,3,*|*,1,*,3|4,*,*,1 - This is a 4x4 Sudoku puzzle. - The * represents a cell to be filled. - The | character separates rows. - At each step, replace one or more * with digits 1-4. - There must be no duplicate digits in any row, column or 2x2 subgrid. - Keep the known digits from previous valid thoughts in place. - Each thought can be a partial or the final solution.Rules Based CheckerEach thought is evaluated by the thought checker and is given a validity type: valid, invalid or partial. A simple checker can be rule based. For example, in the case of a sudoku puzzle, the checker can check if the puzzle is valid, invalid or partial.In the following code we implement a simple rule based checker for a specific 4x4 sudoku puzzle.from typing import Tuplefrom langchain_experimental.tot.checker import ToTCheckerfrom langchain_experimental.tot.thought import ThoughtValidityimport reclass MyChecker(ToTChecker): def evaluate(self, problem_description: str, thoughts: Tuple[str, ...] = ()) -> ThoughtValidity: last_thought = thoughts[-1] clean_solution = last_thought.replace(" ", "").replace('"', "") regex_solution = clean_solution.replace("*", ".").replace("|", "\\|") if sudoku_solution in clean_solution: return ThoughtValidity.VALID_FINAL elif re.search(regex_solution, sudoku_solution): return ThoughtValidity.VALID_INTERMEDIATE else: return ThoughtValidity.INVALIDJust testing the MyChecker class above:checker = MyChecker()assert checker.evaluate("", ("3,*,*,2|1,*,3,*|*,1,*,3|4,*,*,1",)) == ThoughtValidity.VALID_INTERMEDIATEassert checker.evaluate("", ("3,4,1,2|1,2,3,4|2,1,4,3|4,3,2,1",)) == ThoughtValidity.VALID_FINALassert checker.evaluate("", ("3,4,1,2|1,2,3,4|2,1,4,3|4,3,*,1",)) == ThoughtValidity.VALID_INTERMEDIATEassert checker.evaluate("", ("3,4,1,2|1,2,3,4|2,1,4,3|4,*,3,1",)) == ThoughtValidity.INVALIDTree of Thought ChainInitialize and run the ToT chain, with maximum number of interactions k set to 30 and the maximum number child thoughts c set to 8.from langchain_experimental.tot.base import ToTChaintot_chain = ToTChain(llm=llm, checker=MyChecker(), k=30, c=5, verbose=True, verbose_llm=False)tot_chain.run(problem_description=problem_description) > Entering new ToTChain chain... Starting the ToT solve procedure. 
/Users/harrisonchase/workplace/langchain/libs/langchain/langchain/chains/llm.py:275: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain. warnings.warn( Thought: 3*,*,2|1*,3,*|*,1,*,3|4,*,*,1 Thought: 3*,1,2|1*,3,*|*,1,*,3|4,*,*,1 Thought: 3*,1,2|1*,3,4|*,1,*,3|4,*,*,1 Thought: 3*,1,2|1*,3,4|*,1,2,3|4,*,*,1 Thought: 3*,1,2|1*,3,4|2,1,*,3|4,*,*,1 Type <enum 'ThoughtValidity'> not serializable Thought: 3,*,*,2|1,*,3,*|*,1,*,3|4,1,*,* Thought: 3,*,*,2|*,3,2,*|*,1,*,3|4,1,*,* Thought: 3,2,*,2|1,*,3,*|*,1,*,3|4,1,*,* Thought: 3,2,*,2|1,*,3,*|1,1,*,3|4,1,*,* Thought: 3,2,*,2|1,1,3,*|1,1,*,3|4,1,*,* Thought: 3,*,*,2|1,2,3,*|*,1,*,3|4,*,*,1 Thought: 3,1,4,2|1,2,3,4|2,1,4,3|4,3,2,1 Thought: 3,4,1,2|1,2,3,4|2,1,4,3|4,3,2,1 > Finished chain. '3,4,1,2|1,2,3,4|2,1,4,3|4,3,2,1'PreviousNeptune Open Cypher QA ChainNextLearned Prompt Variable Injection via RLRules Based CheckerTree of Thought Chain |
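The MyChecker above validates thoughts by comparing them against the single known solution string, so it only works for this exact puzzle. Below is a rough sketch of a more general rule-based checker under the same ToTChecker interface; the parse_grid and no_duplicates helpers and the RuleBasedChecker name are my own, and it assumes thoughts use the same comma/pipe grid format as the puzzle:

from typing import Tuple
from langchain_experimental.tot.checker import ToTChecker
from langchain_experimental.tot.thought import ThoughtValidity

PUZZLE = "3,*,*,2|1,*,3,*|*,1,*,3|4,*,*,1"

def parse_grid(s: str):
    # "3,*,*,2|1,*,3,*|..." -> list of rows, each a list of cell strings
    return [row.split(",") for row in s.replace(" ", "").replace('"', "").split("|")]

def no_duplicates(cells):
    digits = [c for c in cells if c != "*"]
    return len(digits) == len(set(digits))

class RuleBasedChecker(ToTChecker):
    def evaluate(self, problem_description: str, thoughts: Tuple[str, ...] = ()) -> ThoughtValidity:
        if not thoughts:
            return ThoughtValidity.INVALID
        grid, puzzle = parse_grid(thoughts[-1]), parse_grid(PUZZLE)
        # must be a 4x4 grid that keeps the originally known digits in place
        if len(grid) != 4 or any(len(row) != 4 for row in grid):
            return ThoughtValidity.INVALID
        for r in range(4):
            for c in range(4):
                if puzzle[r][c] != "*" and grid[r][c] != puzzle[r][c]:
                    return ThoughtValidity.INVALID
        # no duplicate digits in any row, column or 2x2 subgrid
        for r in range(4):
            if not no_duplicates(grid[r]):
                return ThoughtValidity.INVALID
        for c in range(4):
            if not no_duplicates([grid[r][c] for r in range(4)]):
                return ThoughtValidity.INVALID
        for br in (0, 2):
            for bc in (0, 2):
                if not no_duplicates([grid[br + i][bc + j] for i in range(2) for j in range(2)]):
                    return ThoughtValidity.INVALID
        solved = all(cell != "*" for row in grid for cell in row)
        return ThoughtValidity.VALID_FINAL if solved else ThoughtValidity.VALID_INTERMEDIATE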
151 | https://python.langchain.com/docs/use_cases/more/learned_prompt_optimization | MoreLearned Prompt Variable Injection via RLOn this pageLearned Prompt Variable Injection via RLLLM prompts can be enhanced by injecting specific terms into template sentences. Selecting the right terms is crucial for obtaining high-quality responses. This notebook introduces automated prompt engineering through term injection using Reinforcement Learning with VowpalWabbit.The rl_chain (reinforcement learning chain) provides a way to automatically determine the best terms to inject without the need for fine-tuning the underlying foundational model.For illustration, consider the scenario of a meal delivery service. We use LangChain to ask customers, like Tom, about their dietary preferences and recommend suitable meals from our extensive menu. The rl_chain selects a meal based on user preferences, injects it into a prompt template, and forwards the prompt to an LLM. The LLM's response, which is a personalized recommendation, is then returned to the user.The example laid out below is a toy example that demonstrates the applicability of the concept. Advanced options and explanations are provided at the end.# Install necessary packages# ! pip install langchain langchain-experimental matplotlib vowpal_wabbit_next sentence-transformers pandas# four meals defined, some vegetarian some notmeals = [ "Beef Enchiladas with Feta cheese. Mexican-Greek fusion", "Chicken Flatbreads with red sauce. Italian-Mexican fusion", "Veggie sweet potato quesadillas with vegan cheese", "One-Pan Tortelonni bake with peppers and onions",]# pick and configure the LLM of your choicefrom langchain.llms import OpenAIllm = OpenAI(model="text-davinci-003")Initialize the RL chain with provided defaultsThe prompt template that will be used to query the LLM needs to be defined.
It can be anything, but here {meal} is being used and is going to be replaced by one of the meals above, the RL chain will try to pick and inject the best mealfrom langchain.prompts import PromptTemplate# here I am using the variable meal which will be replaced by one of the meals above# and some variables like user, preference, and text_to_personalize which I will provide at chain run timePROMPT_TEMPLATE = """Here is the description of a meal: "{meal}".Embed the meal into the given text: "{text_to_personalize}".Prepend a personalized message including the user's name "{user}" and their preference "{preference}".Make it sound good."""PROMPT = PromptTemplate( input_variables=["meal", "text_to_personalize", "user", "preference"], template=PROMPT_TEMPLATE)Next the RL chain's PickBest chain is being initialized. We must provide the llm of choice and the defined prompt. As the name indicates, the chain's goal is to Pick the Best of the meals that will be provided, based on some criteria. import langchain_experimental.rl_chain as rl_chainchain = rl_chain.PickBest.from_llm(llm=llm, prompt=PROMPT)Once the chain is setup I am going to call it with the meals I want to be selected from, and some context based on which the chain will select a meal.response = chain.run( meal = rl_chain.ToSelectFrom(meals), user = rl_chain.BasedOn("Tom"), preference = rl_chain.BasedOn(["Vegetarian", "regular dairy is ok"]), text_to_personalize = "This is the weeks specialty dish, our master chefs \ believe you will love it!",)print(response["response"]) Hey Tom! We've got a special treat for you this week - our master chefs have cooked up a delicious One-Pan Tortelonni Bake with peppers and onions, perfect for any Vegetarian who is ok with regular dairy! We know you'll love it!What is the chain doingHere's a step-by-step breakdown of the RL chain's operations:Accept the list of meals.Consider the user and their dietary preferences.Based on this context, select an appropriate meal.Automatically evaluate the appropriateness of the meal choice.Inject the selected meal into the prompt and submit it to the LLM.Return the LLM's response to the user.Technically, the chain achieves this by employing a contextual bandit reinforcement learning model, specifically utilizing the VowpalWabbit ML library.Initially, since the RL model is untrained, it might opt for random selections that don't necessarily align with a user's preferences. However, as it gains more exposure to the user's choices and feedback, it should start to make better selections (or quickly learn a good one and just pick that!).for _ in range(5): try: response = chain.run( meal = rl_chain.ToSelectFrom(meals), user = rl_chain.BasedOn("Tom"), preference = rl_chain.BasedOn(["Vegetarian", "regular dairy is ok"]), text_to_personalize = "This is the weeks specialty dish, our master chefs believe you will love it!", ) except Exception as e: print(e) print(response["response"]) print() Hey Tom! We know you love vegetarian dishes and that regular dairy is ok, so this week's specialty dish is perfect for you! Our master chefs have created a delicious Chicken Flatbread with red sauce - a unique Italian-Mexican fusion that we know you'll love. Enjoy! Hey Tom, this week's specialty dish is a delicious Mexican-Greek fusion of Beef Enchiladas with Feta cheese to suit your preference of 'Vegetarian' with 'regular dairy is ok'. Our master chefs believe you will love it! Hey Tom! 
Our master chefs have cooked up something special this week - a Mexican-Greek fusion of Beef Enchiladas with Feta cheese - and we know you'll love it as a vegetarian-friendly option with regular dairy included. Enjoy! Hey Tom! We've got the perfect meal for you this week - our delicious veggie sweet potato quesadillas with vegan cheese, made with the freshest ingredients. Even if you usually opt for regular dairy, we think you'll love this vegetarian dish! Hey Tom! Our master chefs have outdone themselves this week with a special dish just for you - Chicken Flatbreads with red sauce. It's an Italian-Mexican fusion that's sure to tantalize your taste buds, and it's totally vegetarian friendly with regular dairy is ok. Enjoy! How is the chain learningIt's important to note that while the RL model can make sophisticated selections, it doesn't inherently recognize concepts like "vegetarian" or understand that "beef enchiladas" aren't vegetarian-friendly. Instead, it leverages the LLM to ground its choices in common sense.The way the chain is learning that Tom prefers vegetarian meals is via an AutoSelectionScorer that is built into the chain. The scorer will call the LLM again and ask it to evaluate the selection (ToSelectFrom) using the information wrapped in (BasedOn).You can set langchain.debug=True if you want to see the details of the auto-scorer, but you can also define the scoring prompt yourself.scoring_criteria_template = "Given {preference} rank how good or bad this selection is {meal}"chain = rl_chain.PickBest.from_llm( llm=llm, prompt=PROMPT, selection_scorer=rl_chain.AutoSelectionScorer(llm=llm, scoring_criteria_template_str=scoring_criteria_template),)If you want to examine the score and other selection metadata, you can do so by examining the metadata object returned by the chainresponse = chain.run( meal = rl_chain.ToSelectFrom(meals), user = rl_chain.BasedOn("Tom"), preference = rl_chain.BasedOn(["Vegetarian", "regular dairy is ok"]), text_to_personalize = "This is the weeks specialty dish, our master chefs believe you will love it!",)print(response["response"])selection_metadata = response["selection_metadata"]print(f"selected index: {selection_metadata.selected.index}, score: {selection_metadata.selected.score}") Hey Tom, this week's meal is something special! Our chefs have prepared a delicious One-Pan Tortelonni Bake with peppers and onions - vegetarian friendly and made with regular dairy, so you can enjoy it without worry. We know you'll love it! selected index: 3, score: 0.5In a more realistic scenario it is likely that you have a well-defined scoring function for what was selected. For example, you might be doing few-shot prompting and want to select prompt examples for a natural language to SQL translation task. In that case the scorer could be: did the SQL that was generated run in a SQL engine? In that case you want to plug in a scoring function; a rough sketch of such a scorer is shown below. 
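As a purely hypothetical sketch of that natural language to SQL case (the SQLExecutionScorer name, the toy schema, and the assumption that llm_response contains a single bare SQL statement are all mine), a plugged-in scorer could simply try to execute the generated SQL against an in-memory SQLite database, scoring 1.0 if it runs and 0.0 if it raises:

import sqlite3
import langchain_experimental.rl_chain as rl_chain

class SQLExecutionScorer(rl_chain.SelectionScorer):
    def score_response(self, inputs, llm_response: str, event: rl_chain.PickBestEvent) -> float:
        conn = sqlite3.connect(":memory:")
        try:
            conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")  # toy schema, illustrative only
            conn.execute(llm_response)  # assumes the response is one SQL statement
            return 1.0
        except Exception:
            return 0.0
        finally:
            conn.close()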
In the example below I will just check if the meal picked was vegetarian or not.class CustomSelectionScorer(rl_chain.SelectionScorer): def score_response( self, inputs, llm_response: str, event: rl_chain.PickBestEvent) -> float: print(event.based_on) print(event.to_select_from) # you can build a complex scoring function here # it is preferable that the score ranges between 0 and 1 but it is not enforced selected_meal = event.to_select_from["meal"][event.selected.index] print(f"selected meal: {selected_meal}") if "Tom" in event.based_on["user"]: if "Vegetarian" in event.based_on["preference"]: if "Chicken" in selected_meal or "Beef" in selected_meal: return 0.0 else: return 1.0 else: if "Chicken" in selected_meal or "Beef" in selected_meal: return 1.0 else: return 0.0 else: raise NotImplementedError("I don't know how to score this user")chain = rl_chain.PickBest.from_llm( llm=llm, prompt=PROMPT, selection_scorer=CustomSelectionScorer(),)response = chain.run( meal = rl_chain.ToSelectFrom(meals), user = rl_chain.BasedOn("Tom"), preference = rl_chain.BasedOn(["Vegetarian", "regular dairy is ok"]), text_to_personalize = "This is the weeks specialty dish, our master chefs believe you will love it!",) {'user': ['Tom'], 'preference': ['Vegetarian', 'regular dairy is ok']} {'meal': ['Beef Enchiladas with Feta cheese. Mexican-Greek fusion', 'Chicken Flatbreads with red sauce. Italian-Mexican fusion', 'Veggie sweet potato quesadillas with vegan cheese', 'One-Pan Tortelonni bake with peppers and onions']} selected meal: Veggie sweet potato quesadillas with vegan cheeseHow can I track the chain's progressYou can track the chain's progress by using the metrics mechanism provided. I am going to expand the users to Tom and Anna, and extend the scoring function. I am going to initialize two chains, one with the default learning policy and one with a built-in random policy (i.e. 
selects a meal randomly), and plot their scoring progress.class CustomSelectionScorer(rl_chain.SelectionScorer): def score_preference(self, preference, selected_meal): if "Vegetarian" in preference: if "Chicken" in selected_meal or "Beef" in selected_meal: return 0.0 else: return 1.0 else: if "Chicken" in selected_meal or "Beef" in selected_meal: return 1.0 else: return 0.0 def score_response( self, inputs, llm_response: str, event: rl_chain.PickBestEvent) -> float: selected_meal = event.to_select_from["meal"][event.selected.index] if "Tom" in event.based_on["user"]: return self.score_preference(event.based_on["preference"], selected_meal) elif "Anna" in event.based_on["user"]: return self.score_preference(event.based_on["preference"], selected_meal) else: raise NotImplementedError("I don't know how to score this user")chain = rl_chain.PickBest.from_llm( llm=llm, prompt=PROMPT, selection_scorer=CustomSelectionScorer(), metrics_step=5, metrics_window_size=5, # rolling window average)random_chain = rl_chain.PickBest.from_llm( llm=llm, prompt=PROMPT, selection_scorer=CustomSelectionScorer(), metrics_step=5, metrics_window_size=5, # rolling window average policy=rl_chain.PickBestRandomPolicy # set the random policy instead of default)for _ in range(20): try: chain.run( meal = rl_chain.ToSelectFrom(meals), user = rl_chain.BasedOn("Tom"), preference = rl_chain.BasedOn(["Vegetarian", "regular dairy is ok"]), text_to_personalize = "This is the weeks specialty dish, our master chefs believe you will love it!", ) random_chain.run( meal = rl_chain.ToSelectFrom(meals), user = rl_chain.BasedOn("Tom"), preference = rl_chain.BasedOn(["Vegetarian", "regular dairy is ok"]), text_to_personalize = "This is the weeks specialty dish, our master chefs believe you will love it!", ) chain.run( meal = rl_chain.ToSelectFrom(meals), user = rl_chain.BasedOn("Anna"), preference = rl_chain.BasedOn(["Loves meat", "especially beef"]), text_to_personalize = "This is the weeks specialty dish, our master chefs believe you will love it!", ) random_chain.run( meal = rl_chain.ToSelectFrom(meals), user = rl_chain.BasedOn("Anna"), preference = rl_chain.BasedOn(["Loves meat", "especially beef"]), text_to_personalize = "This is the weeks specialty dish, our master chefs believe you will love it!", ) except Exception as e: print(e)The RL chain converges to the fact that Anna prefers beef and Tom is vegetarian. 
The random chain picks at random, and so will send beef to vegetarians half the time.from matplotlib import pyplot as pltchain.metrics.to_pandas()['score'].plot(label="default learning policy")random_chain.metrics.to_pandas()['score'].plot(label="random selection policy")plt.legend()print(f"The final average score for the default policy, calculated over a rolling window, is: {chain.metrics.to_pandas()['score'].iloc[-1]}")print(f"The final average score for the random policy, calculated over a rolling window, is: {random_chain.metrics.to_pandas()['score'].iloc[-1]}") The final average score for the default policy, calculated over a rolling window, is: 1.0 The final average score for the random policy, calculated over a rolling window, is: 0.6 ![png](_learned_prompt_optimization_files/output_26_1.png) There is a bit of randomness involved in the rl_chain's selection since the chain explores the selection space in order to learn the world as best as it can (see details of default exploration algorithm used here), but overall, default chain policy should be doing better than random as it learnsAdvanced optionsThe RL chain is highly configurable in order to be able to adjust to various selection scenarios. If you want to learn more about the ML library that powers it please take a look at tutorials hereSectionDescriptionExample / UsageChange Chain Logging LevelChange the logging level for the RL chain.logger.setLevel(logging.INFO)FeaturizationAdjusts the input to the RL chain. Can set auto-embeddings ON for more complex embeddings.chain = rl_chain.PickBest.from_llm(auto_embed=True, [...])Learned Policy to Learn AsynchronouslyScore asynchronously if user input is needed for scoring.chain.update_with_delayed_score(score=<the score>, chain_response=response)Store Progress of Learned PolicyOption to store the progress of the variable injection learned policy.chain.save_progress()Stop Learning of Learned PolicyToggle the RL chain's learned policy updates ON/OFF.chain.deactivate_selection_scorer()Set a Different PolicyChoose between different policies: default, random, or custom.Custom policy creation at chain creation time.Different Exploration Algorithms and Options for Default Learned PolicySet different exploration algorithms and hyperparameters for VwPolicy.vw_cmd = ["--cb_explore_adf", "--quiet", "--squarecb", "--interactions=::"]Learn Policy's Data LogsStore and examine VwPolicy's data logs.chain = rl_chain.PickBest.from_llm(vw_logs=<path to log FILE>, [...])Other Advanced Featurization OptionsSpecify advanced featurization options for the RL chain.age = rl_chain.BasedOn("age:32")More Info on Auto or Custom SelectionScorerDive deeper into how selection scoring is determined.selection_scorer=rl_chain.AutoSelectionScorer(llm=llm, scoring_criteria_template_str=scoring_criteria_template)change chain logging levelimport logginglogger = logging.getLogger("rl_chain")logger.setLevel(logging.INFO)featurizationauto_embedBy default the input to the rl chain (ToSelectFrom, BasedOn) is not tampered with. 
This might not be sufficient featurization, so based on how complex the scenario is you can set auto-embeddings to ONchain = rl_chain.PickBest.from_llm(auto_embed=True, [...])This will produce more complex embeddings and featurizations of the inputs, likely accelerating RL chain learning, albeit at the cost of increased runtime.By default, sbert.net's sentence_transformers's all-mpnet-base-v2 model will be used for these embeddings but you can set a different embeddings model by initializing the chain with it as shown in this example. You could also set an entirely different embeddings encoding object, as long as it has an encode() function that returns a list of the encodings.from sentence_transformers import SentenceTransformerchain = rl_chain.PickBest.from_llm( [...] feature_embedder=rl_chain.PickBestFeatureEmbedder( auto_embed=True, model=SentenceTransformer("all-mpnet-base-v2") ))explicitly defined embeddingsAnother option is to define what inputs you think should be embedded manually:auto_embed = FalseCan wrap individual variables in rl_chain.Embed() or rl_chain.EmbedAndKeep() e.g. user = rl_chain.BasedOn(rl_chain.Embed("Tom"))custom featurizationAnother final option is to define and set a custom featurization/embedder class that returns a valid input for the learned policy.learned policy to learn asynchronouslyIf to score the result you need input from the user (e.g. my application showed Tom the selected meal and Tom clicked on it, but Anna did not), then the scoring can be done asynchronously. The way to do that is:set selection_scorer=None on the chain creation OR call chain.deactivate_selection_scorer()call the chain for a specific inputkeep the chain's response (response = chain.run([...]))once you have determined the score of the response/chain selection call the chain with it: chain.update_with_delayed_score(score=<the score>, chain_response=response)store progress of learned policySince the variable injection learned policy evolves over time, there is the option to store its progress and continue learning. This can be done by calling:chain.save_progress()which will store the rl chain's learned policy in a file called latest.vw. It will also store it in a file with a timestamp. That way, if save_progress() is called more than once, multiple checkpoints will be created, but the latest one will always be in latest.vwNext time the chain is loaded, the chain will look for a file called latest.vw and if the file exists it will be loaded into the chain and the learning will continue from there.By default the rl chain model checkpoints will be stored in the current directory but you can specify the save/load location at chain creation time:chain = rl_chain.PickBest.from_llm(model_save_dir=<path to dir>, [...])stop learning of learned policyIf you want the rl chain's learned policy to stop updating you can turn it off/on:chain.deactivate_selection_scorer() and chain.activate_selection_scorer()set a different policyThere are two policies currently available:default policy: VwPolicy which learns a Vowpal Wabbit Contextual Bandit modelrandom policy: RandomPolicy which doesn't learn anything and just selects a value randomly. this policy can be used to compare other policies with a random baseline one.custom policies: a custom policy could be created and set at chain creation timedifferent exploration algorithms and options for the default learned policyThe default VwPolicy is initialized with some default arguments. 
The default exploration algorithm is SquareCB but other Contextual Bandit exploration algorithms can be set, and other hyperparameters can be tuned (see here for available options).vw_cmd = ["--cb_explore_adf", "--quiet", "--squarecb", "--interactions=::"]chain = rl_chain.PickBest.from_llm(vw_cmd = vw_cmd, [...])learned policy's data logsThe VwPolicy's data files can be stored and examined or used to do off-policy evaluation for hyperparameter tuning.The way to do this is to set a log file path to vw_logs on chain creation:chain = rl_chain.PickBest.from_llm(vw_logs=<path to log FILE>, [...])other advanced featurization optionsNumerical features can be provided explicitly with a colon separator:
age = rl_chain.BasedOn("age:32")ToSelectFrom can be a bit more complex if the scenario demands it; instead of being a list of strings, it can be:a list of lists of strings:meal = rl_chain.ToSelectFrom([ ["meal 1 name", "meal 1 description"], ["meal 2 name", "meal 2 description"]])a list of dictionaries:meal = rl_chain.ToSelectFrom([ {"name":"meal 1 name", "description" : "meal 1 description"}, {"name":"meal 2 name", "description" : "meal 2 description"}])a list of dictionaries containing lists:meal = rl_chain.ToSelectFrom([ {"name":["meal 1", "complex name"], "description" : "meal 1 description"}, {"name":["meal 2", "complex name"], "description" : "meal 2 description"}])BasedOn can also take a list of strings:user = rl_chain.BasedOn(["Tom Joe", "age:32", "state of california"])There is no dictionary form provided since multiple variables can be supplied wrapped in BasedOn.Storing the data logs into a file allows the examination of what different inputs do to the data format.More info on Auto or Custom SelectionScorerIt is very important to get the selection scorer right since the policy uses it to learn. It determines what is called the reward in reinforcement learning, and more specifically in our Contextual Bandits setting.The general advice is to keep the score between [0, 1], 0 being the worst selection, 1 being the best selection from the available ToSelectFrom variables, based on the BasedOn variables, but it should be adjusted if the need arises.In the examples provided above, the AutoSelectionScorer is set mostly to get users started but in real-world scenarios it will most likely not be an adequate scorer function.The example also provided the option to change part of the scoring prompt template that the AutoSelectionScorer used to determine whether a selection was good or not:scoring_criteria_template = "Given {preference} rank how good or bad this selection is {meal}"chain = rl_chain.PickBest.from_llm( llm=llm, prompt=PROMPT, selection_scorer=rl_chain.AutoSelectionScorer(llm=llm, scoring_criteria_template_str=scoring_criteria_template),)Internally the AutoSelectionScorer adjusts the scoring prompt to make sure that the LLM scoring returns a single float.However, if needed, a FULL scoring prompt can also be provided:from langchain.prompts.prompt import PromptTemplateimport langchainlangchain.debug = TrueREWARD_PROMPT_TEMPLATE = """Given {preference} rank how good or bad this selection is {meal}IMPORANT: you MUST return a single number between -1 and 1, -1 being bad, 1 being good"""REWARD_PROMPT = PromptTemplate( input_variables=["preference", "meal"], template=REWARD_PROMPT_TEMPLATE,)chain = rl_chain.PickBest.from_llm( llm=llm, prompt=PROMPT, selection_scorer=rl_chain.AutoSelectionScorer(llm=llm, prompt=REWARD_PROMPT),)chain.run( meal = rl_chain.ToSelectFrom(meals), user = rl_chain.BasedOn("Tom"), preference = rl_chain.BasedOn(["Vegetarian", "regular dairy is ok"]), text_to_personalize = "This is the weeks specialty dish, our master chefs believe you will love it!",) [chain/start] [1:chain:PickBest] Entering Chain run with input: [inputs] [chain/start] [1:chain:PickBest > 2:chain:LLMChain] Entering Chain run with input: [inputs] [llm/start] [1:chain:PickBest > 2:chain:LLMChain > 3:llm:OpenAI] Entering LLM run with input: { "prompts": [ "Here is the description of a meal: \"Chicken Flatbreads with red sauce. 
Italian-Mexican fusion\".\n\nEmbed the meal into the given text: \"This is the weeks specialty dish, our master chefs believe you will love it!\".\n\nPrepend a personalized message including the user's name \"Tom\" \n and their preference \"['Vegetarian', 'regular dairy is ok']\".\n\nMake it sound good." ] } [llm/end] [1:chain:PickBest > 2:chain:LLMChain > 3:llm:OpenAI] [1.12s] Exiting LLM run with output: { "generations": [ [ { "text": "\nHey Tom, we have something special for you this week! Our master chefs have created a delicious Italian-Mexican fusion Chicken Flatbreads with red sauce just for you. Our chefs have also taken into account your preference of vegetarian options with regular dairy - this one is sure to be a hit!", "generation_info": { "finish_reason": "stop", "logprobs": null } } ] ], "llm_output": { "token_usage": { "total_tokens": 154, "completion_tokens": 61, "prompt_tokens": 93 }, "model_name": "text-davinci-003" }, "run": null } [chain/end] [1:chain:PickBest > 2:chain:LLMChain] [1.12s] Exiting Chain run with output: { "text": "\nHey Tom, we have something special for you this week! Our master chefs have created a delicious Italian-Mexican fusion Chicken Flatbreads with red sauce just for you. Our chefs have also taken into account your preference of vegetarian options with regular dairy - this one is sure to be a hit!" } [chain/start] [1:chain:LLMChain] Entering Chain run with input: [inputs] [llm/start] [1:chain:LLMChain > 2:llm:OpenAI] Entering LLM run with input: { "prompts": [ "Given ['Vegetarian', 'regular dairy is ok'] rank how good or bad this selection is ['Beef Enchiladas with Feta cheese. Mexican-Greek fusion', 'Chicken Flatbreads with red sauce. Italian-Mexican fusion', 'Veggie sweet potato quesadillas with vegan cheese', 'One-Pan Tortelonni bake with peppers and onions']\n\nIMPORANT: you MUST return a single number between -1 and 1, -1 being bad, 1 being good" ] } [llm/end] [1:chain:LLMChain > 2:llm:OpenAI] [274ms] Exiting LLM run with output: { "generations": [ [ { "text": "\n0.625", "generation_info": { "finish_reason": "stop", "logprobs": null } } ] ], "llm_output": { "token_usage": { "total_tokens": 112, "completion_tokens": 4, "prompt_tokens": 108 }, "model_name": "text-davinci-003" }, "run": null } [chain/end] [1:chain:LLMChain] [275ms] Exiting Chain run with output: { "text": "\n0.625" } [chain/end] [1:chain:PickBest] [1.40s] Exiting Chain run with output: [outputs] {'response': 'Hey Tom, we have something special for you this week! Our master chefs have created a delicious Italian-Mexican fusion Chicken Flatbreads with red sauce just for you. Our chefs have also taken into account your preference of vegetarian options with regular dairy - this one is sure to be a hit!', 'selection_metadata': <langchain_experimental.rl_chain.pick_best_chain.PickBestEvent at 0x289764220>}PreviousTree of Thought (ToT) exampleNextSelf-checkingWhat is the chain doingHow is the chain learningHow can I track the chains progressAdvanced optionschange chain logging levelfeaturizationlearned policy to learn asynchronouslystore progress of learned policystop learning of learned policyset a different policydifferent exploration algorithms and options for the default learned policylearned policy's data logsother advanced featurization optionsMore info on Auto or Custom SelectionScorer |
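Tying the asynchronous-scoring steps described above together, and reusing the llm, PROMPT, and meals objects defined earlier on this page, a deferred-feedback loop could look roughly like the sketch below; the user_clicked flag is a placeholder for real feedback from your application:

import langchain_experimental.rl_chain as rl_chain

# create the chain without a selection scorer so no score is assigned at run time
chain = rl_chain.PickBest.from_llm(llm=llm, prompt=PROMPT, selection_scorer=None)

response = chain.run(
    meal=rl_chain.ToSelectFrom(meals),
    user=rl_chain.BasedOn("Tom"),
    preference=rl_chain.BasedOn(["Vegetarian", "regular dairy is ok"]),
    text_to_personalize="This is the weeks specialty dish, our master chefs believe you will love it!",
)

# ... later, once the user has reacted to the recommendation ...
user_clicked = True  # placeholder for real feedback
chain.update_with_delayed_score(score=1.0 if user_clicked else 0.0, chain_response=response)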
152 | https://python.langchain.com/docs/use_cases/more/self_check/ | MoreSelf-checkingSelf-checkingOne of the main issues with using LLMs is that they can often hallucinate and make false claims. One of the surprisingly effective ways to remediate this is to use the LLM itself to check its own answers.📄️ Self-checking chainThis notebook showcases how to use LLMCheckerChain.📄️ Summarization checker chainThis notebook shows some examples of LLMSummarizationCheckerChain in use with different types of texts. It has a few distinct differences from the LLMCheckerChain, in that it doesn't have any assumptions about the format of the input text (or summary).📄️ How to use a SmartLLMChainA SmartLLMChain is a form of self-critique chain that can help you if you have particularly complex questions to answer. Instead of doing a single LLM pass, it performs these 3 steps:PreviousLearned Prompt Variable Injection via RLNextSelf-checking chain
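The chains listed above package this check-your-own-answer pattern for you. Purely as an illustration of the underlying idea (not the implementation of any of those chains, and with prompt wording that is my own), a hand-rolled two-pass version looks roughly like this:

from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

question = "What type of mammal lays the biggest eggs?"
draft = llm(question)  # first pass: draft an answer

# second pass: ask the same model to check and, if needed, revise its own draft
critique_prompt = (
    f"Question: {question}\n"
    f"Draft answer: {draft}\n"
    "List any factual errors in the draft answer, then give a corrected final answer."
)
print(llm(critique_prompt))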
153 | https://python.langchain.com/docs/use_cases/more/self_check/llm_checker | MoreSelf-checkingSelf-checking chainSelf-checking chainThis notebook showcases how to use LLMCheckerChain.from langchain.chains import LLMCheckerChainfrom langchain.llms import OpenAIllm = OpenAI(temperature=0.7)text = "What type of mammal lays the biggest eggs?"checker_chain = LLMCheckerChain.from_llm(llm, verbose=True)checker_chain.run(text) > Entering new LLMCheckerChain chain... > Entering new SequentialChain chain... > Finished chain. > Finished chain. ' No mammal lays the biggest eggs. The Elephant Bird, which was a species of giant bird, laid the largest eggs of any bird.'PreviousSelf-checkingNextSummarization checker chain |
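Since LLMCheckerChain exposes the standard run interface shown above, it drops into ordinary Python control flow. Here is a small sketch, using only the API demonstrated above and a question list of my own choosing, that checks several questions in a row:

from langchain.chains import LLMCheckerChain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.7)
checker_chain = LLMCheckerChain.from_llm(llm, verbose=False)

questions = [
    "What type of mammal lays the biggest eggs?",
    "What is the only mammal that can truly fly?",
]
for q in questions:
    # each call drafts an answer, checks its own assumptions, and revises the answer
    print(q, "->", checker_chain.run(q))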
154 | https://python.langchain.com/docs/use_cases/more/self_check/llm_summarization_checker | MoreSelf-checkingSummarization checker chainSummarization checker chainThis notebook shows some examples of LLMSummarizationCheckerChain in use with different types of texts. It has a few distinct differences from the LLMCheckerChain, in that it doesn't have any assumptions about the format of the input text (or summary).
Additionally, as the LLMs like to hallucinate when fact checking or get confused by context, it is sometimes beneficial to run the checker multiple times. It does this by feeding the rewritten "True" result back on itself, and checking the "facts" for truth. As you can see from the examples below, this can be very effective in arriving at a generally true body of text.You can control the number of times the checker runs by setting the max_checks parameter. The default is 2, but you can set it to 1 if you don't want any double-checking.from langchain.chains import LLMSummarizationCheckerChainfrom langchain.llms import OpenAIllm = OpenAI(temperature=0)checker_chain = LLMSummarizationCheckerChain.from_llm(llm, verbose=True, max_checks=2)text = """Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST):• In 2023, The JWST spotted a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas.• The telescope captured images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us.• JWST took the very first pictures of a planet outside of our own solar system. These distant worlds are called "exoplanets." Exo means "from outside."These discoveries can spark a child's imagination about the infinite wonders of the universe."""checker_chain.run(text) > Entering new LLMSummarizationCheckerChain chain... > Entering new SequentialChain chain... > Entering new LLMChain chain... Prompt after formatting: Given some text, extract a list of facts from the text. Format your output as a bulleted list. Text: """ Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST): • In 2023, The JWST spotted a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas. • The telescope captured images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us. • JWST took the very first pictures of a planet outside of our own solar system. These distant worlds are called "exoplanets." Exo means "from outside." These discoveries can spark a child's imagination about the infinite wonders of the universe. """ Facts: > Finished chain. > Entering new LLMChain chain... Prompt after formatting: You are an expert fact checker. You have been hired by a major news organization to fact check a very important story. Here is a bullet point list of facts: """ • The James Webb Space Telescope (JWST) spotted a number of galaxies nicknamed "green peas." • The telescope captured images of galaxies that are over 13 billion years old. • JWST took the very first pictures of a planet outside of our own solar system. • These distant worlds are called "exoplanets." """ For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined". If the fact is false, explain why. > Finished chain. > Entering new LLMChain chain... Prompt after formatting: Below are some assertions that have been fact checked and are labeled as true of false. If the answer is false, a suggestion is given for a correction. Checked Assertions: """ • The James Webb Space Telescope (JWST) spotted a number of galaxies nicknamed "green peas." 
- True • The telescope captured images of galaxies that are over 13 billion years old. - True • JWST took the very first pictures of a planet outside of our own solar system. - False. The first exoplanet was discovered in 1992, before the JWST was launched. • These distant worlds are called "exoplanets." - True """ Original Summary: """ Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST): • In 2023, The JWST spotted a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas. • The telescope captured images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us. • JWST took the very first pictures of a planet outside of our own solar system. These distant worlds are called "exoplanets." Exo means "from outside." These discoveries can spark a child's imagination about the infinite wonders of the universe. """ Using these checked assertions, rewrite the original summary to be completely true. The output should have the same structure and formatting as the original summary. Summary: > Finished chain. > Entering new LLMChain chain... Prompt after formatting: Below are some assertions that have been fact checked and are labeled as true or false. If all of the assertions are true, return "True". If any of the assertions are false, return "False". Here are some examples: === Checked Assertions: """ - The sky is red: False - Water is made of lava: False - The sun is a star: True """ Result: False === Checked Assertions: """ - The sky is blue: True - Water is wet: True - The sun is a star: True """ Result: True === Checked Assertions: """ - The sky is blue - True - Water is made of lava- False - The sun is a star - True """ Result: False === Checked Assertions:""" • The James Webb Space Telescope (JWST) spotted a number of galaxies nicknamed "green peas." - True • The telescope captured images of galaxies that are over 13 billion years old. - True • JWST took the very first pictures of a planet outside of our own solar system. - False. The first exoplanet was discovered in 1992, before the JWST was launched. • These distant worlds are called "exoplanets." - True """ Result: > Finished chain. > Finished chain. Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST): • In 2023, The JWST spotted a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas. • The telescope captured images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us. • JWST has provided us with the first images of exoplanets, which are planets outside of our own solar system. These distant worlds were first discovered in 1992, and the JWST has allowed us to see them in greater detail. These discoveries can spark a child's imagination about the infinite wonders of the universe. > Entering new SequentialChain chain... > Entering new LLMChain chain... Prompt after formatting: Given some text, extract a list of facts from the text. Format your output as a bulleted list. Text: """ Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST): • In 2023, The JWST spotted a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas. 
• The telescope captured images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us. • JWST has provided us with the first images of exoplanets, which are planets outside of our own solar system. These distant worlds were first discovered in 1992, and the JWST has allowed us to see them in greater detail. These discoveries can spark a child's imagination about the infinite wonders of the universe. """ Facts: > Finished chain. > Entering new LLMChain chain... Prompt after formatting: You are an expert fact checker. You have been hired by a major news organization to fact check a very important story. Here is a bullet point list of facts: """ • The James Webb Space Telescope (JWST) spotted a number of galaxies nicknamed "green peas." • The light from these galaxies has been traveling for over 13 billion years to reach us. • JWST has provided us with the first images of exoplanets, which are planets outside of our own solar system. • Exoplanets were first discovered in 1992. • The JWST has allowed us to see exoplanets in greater detail. """ For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined". If the fact is false, explain why. > Finished chain. > Entering new LLMChain chain... Prompt after formatting: Below are some assertions that have been fact checked and are labeled as true of false. If the answer is false, a suggestion is given for a correction. Checked Assertions: """ • The James Webb Space Telescope (JWST) spotted a number of galaxies nicknamed "green peas." - True • The light from these galaxies has been traveling for over 13 billion years to reach us. - True • JWST has provided us with the first images of exoplanets, which are planets outside of our own solar system. - False. The first exoplanet was discovered in 1992, but the first images of exoplanets were taken by the Hubble Space Telescope in 2004. • Exoplanets were first discovered in 1992. - True • The JWST has allowed us to see exoplanets in greater detail. - Undetermined. The JWST has not yet been launched, so it is not yet known how much detail it will be able to provide. """ Original Summary: """ Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST): • In 2023, The JWST spotted a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas. • The telescope captured images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us. • JWST has provided us with the first images of exoplanets, which are planets outside of our own solar system. These distant worlds were first discovered in 1992, and the JWST has allowed us to see them in greater detail. These discoveries can spark a child's imagination about the infinite wonders of the universe. """ Using these checked assertions, rewrite the original summary to be completely true. The output should have the same structure and formatting as the original summary. Summary: > Finished chain. > Entering new LLMChain chain... Prompt after formatting: Below are some assertions that have been fact checked and are labeled as true or false. If all of the assertions are true, return "True". If any of the assertions are false, return "False". 
Here are some examples: === Checked Assertions: """ - The sky is red: False - Water is made of lava: False - The sun is a star: True """ Result: False === Checked Assertions: """ - The sky is blue: True - Water is wet: True - The sun is a star: True """ Result: True === Checked Assertions: """ - The sky is blue - True - Water is made of lava- False - The sun is a star - True """ Result: False === Checked Assertions:""" • The James Webb Space Telescope (JWST) spotted a number of galaxies nicknamed "green peas." - True • The light from these galaxies has been traveling for over 13 billion years to reach us. - True • JWST has provided us with the first images of exoplanets, which are planets outside of our own solar system. - False. The first exoplanet was discovered in 1992, but the first images of exoplanets were taken by the Hubble Space Telescope in 2004. • Exoplanets were first discovered in 1992. - True • The JWST has allowed us to see exoplanets in greater detail. - Undetermined. The JWST has not yet been launched, so it is not yet known how much detail it will be able to provide. """ Result: > Finished chain. > Finished chain. Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST): • In 2023, The JWST will spot a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas. • The telescope will capture images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us. • Exoplanets, which are planets outside of our own solar system, were first discovered in 1992. The JWST will allow us to see them in greater detail when it is launched in 2023. These discoveries can spark a child's imagination about the infinite wonders of the universe. > Finished chain. 'Your 9-year old might like these recent discoveries made by The James Webb Space Telescope (JWST):\n• In 2023, The JWST will spot a number of galaxies nicknamed "green peas." They were given this name because they are small, round, and green, like peas.\n• The telescope will capture images of galaxies that are over 13 billion years old. This means that the light from these galaxies has been traveling for over 13 billion years to reach us.\n• Exoplanets, which are planets outside of our own solar system, were first discovered in 1992. The JWST will allow us to see them in greater detail when it is launched in 2023.\nThese discoveries can spark a child\'s imagination about the infinite wonders of the universe.'from langchain.chains import LLMSummarizationCheckerChainfrom langchain.llms import OpenAIllm = OpenAI(temperature=0)checker_chain = LLMSummarizationCheckerChain.from_llm(llm, verbose=True, max_checks=3)text = "The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. It is the smallest of the five oceans and is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea."checker_chain.run(text) > Entering new LLMSummarizationCheckerChain chain... 
> Entering new SequentialChain chain... > Entering new LLMChain chain... Prompt after formatting: Given some text, extract a list of facts from the text. Format your output as a bulleted list. Text: """ The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. It is the smallest of the five oceans and is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea. """ Facts: > Finished chain. > Entering new LLMChain chain... Prompt after formatting: You are an expert fact checker. You have been hired by a major news organization to fact check a very important story. Here is a bullet point list of facts: """ - The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. - It has an area of 465,000 square miles. - It is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. - It is the smallest of the five oceans. - It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. - The sea is named after the island of Greenland. - It is the Arctic Ocean's main outlet to the Atlantic. - It is often frozen over so navigation is limited. - It is considered the northern branch of the Norwegian Sea. """ For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined". If the fact is false, explain why. > Finished chain. > Entering new LLMChain chain... Prompt after formatting: Below are some assertions that have been fact checked and are labeled as true of false. If the answer is false, a suggestion is given for a correction. Checked Assertions: """ - The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True - It has an area of 465,000 square miles. True - It is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. False - The Greenland Sea is not an ocean, it is an arm of the Arctic Ocean. - It is the smallest of the five oceans. False - The Greenland Sea is not an ocean, it is an arm of the Arctic Ocean. - It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True - The sea is named after the island of Greenland. True - It is the Arctic Ocean's main outlet to the Atlantic. True - It is often frozen over so navigation is limited. True - It is considered the northern branch of the Norwegian Sea. True """ Original Summary: """ The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. It is the smallest of the five oceans and is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. 
The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea. """ Using these checked assertions, rewrite the original summary to be completely true. The output should have the same structure and formatting as the original summary. Summary: > Finished chain. > Entering new LLMChain chain... Prompt after formatting: Below are some assertions that have been fact checked and are labeled as true or false. If all of the assertions are true, return "True". If any of the assertions are false, return "False". Here are some examples: === Checked Assertions: """ - The sky is red: False - Water is made of lava: False - The sun is a star: True """ Result: False === Checked Assertions: """ - The sky is blue: True - Water is wet: True - The sun is a star: True """ Result: True === Checked Assertions: """ - The sky is blue - True - Water is made of lava- False - The sun is a star - True """ Result: False === Checked Assertions:""" - The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True - It has an area of 465,000 square miles. True - It is one of five oceans in the world, alongside the Pacific Ocean, Atlantic Ocean, Indian Ocean, and the Southern Ocean. False - The Greenland Sea is not an ocean, it is an arm of the Arctic Ocean. - It is the smallest of the five oceans. False - The Greenland Sea is not an ocean, it is an arm of the Arctic Ocean. - It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True - The sea is named after the island of Greenland. True - It is the Arctic Ocean's main outlet to the Atlantic. True - It is often frozen over so navigation is limited. True - It is considered the northern branch of the Norwegian Sea. True """ Result: > Finished chain. > Finished chain. The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea. > Entering new SequentialChain chain... > Entering new LLMChain chain... Prompt after formatting: Given some text, extract a list of facts from the text. Format your output as a bulleted list. Text: """ The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea. """ Facts: > Finished chain. > Entering new LLMChain chain... Prompt after formatting: You are an expert fact checker. You have been hired by a major news organization to fact check a very important story. 
Here is a bullet point list of facts: """ - The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. - It has an area of 465,000 square miles. - It is an arm of the Arctic Ocean. - It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. - It is named after the island of Greenland. - It is the Arctic Ocean's main outlet to the Atlantic. - It is often frozen over so navigation is limited. - It is considered the northern branch of the Norwegian Sea. """ For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined". If the fact is false, explain why. > Finished chain. > Entering new LLMChain chain... Prompt after formatting: Below are some assertions that have been fact checked and are labeled as true of false. If the answer is false, a suggestion is given for a correction. Checked Assertions: """ - The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True - It has an area of 465,000 square miles. True - It is an arm of the Arctic Ocean. True - It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True - It is named after the island of Greenland. False - It is named after the country of Greenland. - It is the Arctic Ocean's main outlet to the Atlantic. True - It is often frozen over so navigation is limited. True - It is considered the northern branch of the Norwegian Sea. False - It is considered the northern branch of the Atlantic Ocean. """ Original Summary: """ The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the island of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Norwegian Sea. """ Using these checked assertions, rewrite the original summary to be completely true. The output should have the same structure and formatting as the original summary. Summary: > Finished chain. > Entering new LLMChain chain... Prompt after formatting: Below are some assertions that have been fact checked and are labeled as true or false. If all of the assertions are true, return "True". If any of the assertions are false, return "False". Here are some examples: === Checked Assertions: """ - The sky is red: False - Water is made of lava: False - The sun is a star: True """ Result: False === Checked Assertions: """ - The sky is blue: True - Water is wet: True - The sun is a star: True """ Result: True === Checked Assertions: """ - The sky is blue - True - Water is made of lava- False - The sun is a star - True """ Result: False === Checked Assertions:""" - The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True - It has an area of 465,000 square miles. True - It is an arm of the Arctic Ocean. True - It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True - It is named after the island of Greenland. 
False - It is named after the country of Greenland. - It is the Arctic Ocean's main outlet to the Atlantic. True - It is often frozen over so navigation is limited. True - It is considered the northern branch of the Norwegian Sea. False - It is considered the northern branch of the Atlantic Ocean. """ Result: > Finished chain. > Finished chain. The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the country of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Atlantic Ocean. > Entering new SequentialChain chain... > Entering new LLMChain chain... Prompt after formatting: Given some text, extract a list of facts from the text. Format your output as a bulleted list. Text: """ The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the country of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Atlantic Ocean. """ Facts: > Finished chain. > Entering new LLMChain chain... Prompt after formatting: You are an expert fact checker. You have been hired by a major news organization to fact check a very important story. Here is a bullet point list of facts: """ - The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. - It has an area of 465,000 square miles. - It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. - The sea is named after the country of Greenland. - It is the Arctic Ocean's main outlet to the Atlantic. - It is often frozen over so navigation is limited. - It is considered the northern branch of the Atlantic Ocean. """ For each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output "Undetermined". If the fact is false, explain why. > Finished chain. > Entering new LLMChain chain... Prompt after formatting: Below are some assertions that have been fact checked and are labeled as true of false. If the answer is false, a suggestion is given for a correction. Checked Assertions: """ - The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True - It has an area of 465,000 square miles. True - It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True - The sea is named after the country of Greenland. True - It is the Arctic Ocean's main outlet to the Atlantic. False - The Arctic Ocean's main outlet to the Atlantic is the Barents Sea. - It is often frozen over so navigation is limited. True - It is considered the northern branch of the Atlantic Ocean. False - The Greenland Sea is considered part of the Arctic Ocean, not the Atlantic Ocean. 
""" Original Summary: """ The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. It has an area of 465,000 square miles and is an arm of the Arctic Ocean. It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. The sea is named after the country of Greenland, and is the Arctic Ocean's main outlet to the Atlantic. It is often frozen over so navigation is limited, and is considered the northern branch of the Atlantic Ocean. """ Using these checked assertions, rewrite the original summary to be completely true. The output should have the same structure and formatting as the original summary. Summary: > Finished chain. > Entering new LLMChain chain... Prompt after formatting: Below are some assertions that have been fact checked and are labeled as true or false. If all of the assertions are true, return "True". If any of the assertions are false, return "False". Here are some examples: === Checked Assertions: """ - The sky is red: False - Water is made of lava: False - The sun is a star: True """ Result: False === Checked Assertions: """ - The sky is blue: True - Water is wet: True - The sun is a star: True """ Result: True === Checked Assertions: """ - The sky is blue - True - Water is made of lava- False - The sun is a star - True """ Result: False === Checked Assertions:""" - The Greenland Sea is an outlying portion of the Arctic Ocean located between Iceland, Norway, the Svalbard archipelago and Greenland. True - It has an area of 465,000 square miles. True - It is covered almost entirely by water, some of which is frozen in the form of glaciers and icebergs. True - The sea is named after the country of Greenland. True - It is the Arctic Ocean's main outlet to |
155 | https://python.langchain.com/docs/use_cases/more/self_check/smart_llm | MoreSelf-checkingHow to use a SmartLLMChainOn this pageHow to use a SmartLLMChainA SmartLLMChain is a form of self-critique chain that can help you if you have particularly complex questions to answer. Instead of doing a single LLM pass, it performs these 3 steps:Ideation: Pass the user prompt n times through the LLM to get n output proposals (called "ideas"), where n is a parameter you can set Critique: The LLM critiques all ideas to find possible flaws and picks the best one Resolve: The LLM tries to improve upon the best idea (as chosen in the critique step) and outputs it. This is then the final output.SmartLLMChains are based on the SmartGPT workflow proposed in https://youtu.be/wVzuvf9D9BU.Note that SmartLLMChains use more LLM passes (i.e. n+2 instead of just 1), only work when the underlying LLM has the capability for reflection, which smaller models often don't, and only work with underlying models that return exactly 1 output, not multiple.This notebook demonstrates how to use a SmartLLMChain.Same LLM for all stepsimport osos.environ["OPENAI_API_KEY"] = "..."from langchain.prompts import PromptTemplatefrom langchain.chat_models import ChatOpenAIfrom langchain_experimental.smart_llm import SmartLLMChainAs an example question, we will use "I have a 12 liter jug and a 6 liter jug. I want to measure 6 liters. How do I do it?". This is an example from the original SmartGPT video (https://youtu.be/wVzuvf9D9BU?t=384). While this seems like a very easy question, LLMs struggle with these kinds of questions that involve numbers and physical reasoning.As we will see, all 3 initial ideas are completely wrong - even though we're using GPT4! Only when using self-reflection do we get a correct answer. hard_question = "I have a 12 liter jug and a 6 liter jug. I want to measure 6 liters. How do I do it?"So, we first create an LLM and prompt templateprompt = PromptTemplate.from_template(hard_question)llm = ChatOpenAI(temperature=0, model_name="gpt-4")Now we can create a SmartLLMChainchain = SmartLLMChain(llm=llm, prompt=prompt, n_ideas=3, verbose=True)Now we can use the SmartLLM as a drop-in replacement for our LLM. E.g.:chain.run({}) > Entering new SmartLLMChain chain... Prompt after formatting: I have a 12 liter jug and a 6 liter jug. I want to measure 6 liters. How do I do it? Idea 1: 1. Fill the 6-liter jug completely. 2. Pour the water from the 6-liter jug into the 12-liter jug. 3. Fill the 6-liter jug again. 4. Carefully pour the water from the 6-liter jug into the 12-liter jug until the 12-liter jug is full. 5. The amount of water left in the 6-liter jug will be exactly 6 liters. Idea 2: 1. Fill the 6-liter jug completely. 2. Pour the water from the 6-liter jug into the 12-liter jug. 3. Fill the 6-liter jug again. 4. Carefully pour the water from the 6-liter jug into the 12-liter jug until the 12-liter jug is full. 5. Since the 12-liter jug is now full, there will be 2 liters of water left in the 6-liter jug. 6. Empty the 12-liter jug. 7. Pour the 2 liters of water from the 6-liter jug into the 12-liter jug. 8. Fill the 6-liter jug completely again. 9. Pour the water from the 6-liter jug into the 12-liter jug, which already has 2 liters in it. 10. Now, the 12-liter jug will have exactly 6 liters of water (2 liters from before + 4 liters from the 6-liter jug). Idea 3: 1. Fill the 6-liter jug completely. 2. Pour the water from the 6-liter jug into the 12-liter jug. 3. Fill the 6-liter jug again. 4.
Carefully pour the water from the 6-liter jug into the 12-liter jug until the 12-liter jug is full. 5. The amount of water left in the 6-liter jug will be exactly 6 liters. Critique: Idea 1: 1. Fill the 6-liter jug completely. (No flaw) 2. Pour the water from the 6-liter jug into the 12-liter jug. (No flaw) 3. Fill the 6-liter jug again. (No flaw) 4. Carefully pour the water from the 6-liter jug into the 12-liter jug until the 12-liter jug is full. (Flaw: The 12-liter jug will never be full in this step, as it can hold 12 liters and we are only pouring 6 liters into it.) 5. The amount of water left in the 6-liter jug will be exactly 6 liters. (Flaw: This statement is incorrect, as there will be no water left in the 6-liter jug after pouring it into the 12-liter jug.) Idea 2: 1. Fill the 6-liter jug completely. (No flaw) 2. Pour the water from the 6-liter jug into the 12-liter jug. (No flaw) 3. Fill the 6-liter jug again. (No flaw) 4. Carefully pour the water from the 6-liter jug into the 12-liter jug until the 12-liter jug is full. (Flaw: The 12-liter jug will never be full in this step, as it can hold 12 liters and we are only pouring 6 liters into it.) 5. Since the 12-liter jug is now full, there will be 2 liters of water left in the 6-liter jug. (Flaw: This statement is incorrect, as the 12-liter jug will not be full and there will be no water left in the 6-liter jug.) 6. Empty the 12-liter jug. (No flaw) 7. Pour the 2 liters of water from the 6-liter jug into the 12-liter jug. (Flaw: This step is based on the incorrect assumption that there are 2 liters of water left in the 6-liter jug.) 8. Fill the 6-liter jug completely again. (No flaw) 9. Pour the water from the 6-liter jug into the 12-liter jug, which already has 2 liters in it. (Flaw: This step is based on the incorrect assumption that there are 2 liters of water in the 12-liter jug.) 10. Now, the 12-liter jug will have exactly 6 liters of water (2 liters from before + 4 liters from the 6-liter jug). (Flaw: This conclusion is based on the incorrect assumptions made in the previous steps.) Idea 3: 1. Fill the 6-liter jug completely. (No flaw) 2. Pour the water from the 6-liter jug into the 12-liter jug. (No flaw) 3. Fill the 6-liter jug again. (No flaw) 4. Carefully pour the water from the 6-liter jug into the 12-liter jug until the 12-liter jug is full. (Flaw: The 12-liter jug will never be full in this step, as it can hold 12 liters and we are only pouring 6 liters into it.) 5. The amount of water left in the 6-liter jug will be exactly 6 liters. (Flaw: This statement is incorrect, as there will be no water left in the 6-liter jug after pouring it into the 12-liter jug.) Resolution: 1. Fill the 12-liter jug completely. 2. Pour the water from the 12-liter jug into the 6-liter jug until the 6-liter jug is full. 3. The amount of water left in the 12-liter jug will be exactly 6 liters. > Finished chain. '1. Fill the 12-liter jug completely.\n2. Pour the water from the 12-liter jug into the 6-liter jug until the 6-liter jug is full.\n3. The amount of water left in the 12-liter jug will be exactly 6 liters.'Different LLM for different stepsYou can also use different LLMs for the different steps by passing ideation_llm, critique_llm and resolve_llm. 
You might want to do this to use a more creative (i.e., high-temperature) model for ideation and a more strict (i.e., low-temperature) model for critique and resolution.chain = SmartLLMChain( ideation_llm=ChatOpenAI(temperature=0.9, model_name="gpt-4"), llm=ChatOpenAI( temperature=0, model_name="gpt-4" ), # will be used for critique and resolution as no specific llms are given prompt=prompt, n_ideas=3, verbose=True,)PreviousSummarization checker chain |
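As a quick illustration, the multi-LLM chain above is invoked exactly like the single-LLM version; a minimal sketch (assuming OPENAI_API_KEY is set and prompt is the template built from hard_question earlier) might look like this:

# Minimal sketch: the multi-LLM SmartLLMChain is a drop-in replacement, so it is run the same way.
# The prompt has no input variables, so an empty dict is passed.
answer = chain.run({})
print(answer)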
156 | https://python.langchain.com/docs/integrations/providers | ProvidersProviders📄️ AnthropicAll functionality related to Anthropic models.📄️ AWSAll functionality related to Amazon AWS platform📄️ GoogleAll functionality related to Google Cloud Platform📄️ MicrosoftAll functionality related to Microsoft📄️ OpenAIAll functionality related to OpenAI🗃️ More182 itemsNextAnthropic |
157 | https://python.langchain.com/docs/integrations/platforms/anthropic | ProvidersAnthropicOn this pageAnthropicAll functionality related to Anthropic models.Anthropic is an AI safety and research company, and is the creator of Claude.
This page covers all integrations between Anthropic models and LangChain.Prompting OverviewClaude is a chat-based model, meaning it is trained on conversation data.
However, it is a text-based API, meaning it takes in a single string.
It expects this string to be in a particular format.
This means that it is up to the user to ensure that this is the case.
LangChain provides several utilities and helper functions to make sure prompts that you write -
whether formatted as a string or as a list of messages - end up formatted correctly.Specifically, Claude is trained to fill in text for the Assistant role as part of an ongoing dialogue
between a human user (Human:) and an AI assistant (Assistant:). Prompts sent via the API must contain
\n\nHuman: and \n\nAssistant: as the signals of who's speaking.
The final turn must always be \n\nAssistant: - the input string cannot have \n\nHuman: as the final role.Because Claude is chat-based but accepts a string as input, it can be treated as either a LangChain ChatModel or LLM.
This means there are two wrappers in LangChain - ChatAnthropic and Anthropic.
It is generally recommended to use the ChatAnthropic wrapper, and format your prompts as ChatMessages (we will show examples of this below).
This is because it keeps your prompt in a general format that you can easily then also use with other models (should you want to).
However, if you want more fine-grained control over the prompt, you can use the Anthropic wrapper - we will show an example of this as well.
The Anthropic wrapper, however, is deprecated, as all functionality can be achieved in a more generic way using ChatAnthropic.Prompting Best PracticesAnthropic models have several prompting best practices that differ from OpenAI models.No System MessagesAnthropic models are not trained on the concept of a "system message".
We have worked with the Anthropic team to handle them somewhat appropriately (a Human message with an admin tag)
but this is largely a hack and it is recommended that you do not use system messages.AI Messages Can ContinueA completion from Claude is a continuation of the last text in the string which allows you further control over Claude's output.
For example, putting words in Claude's mouth in a prompt like this:\n\nHuman: Tell me a joke about bears\n\nAssistant: What do you call a bear with no teeth?This will return a completion like this A gummy bear! instead of a whole new assistant message with a different random bear joke.ChatAnthropicChatAnthropic is a subclass of LangChain's ChatModel, meaning it works best with ChatPromptTemplate.
You can import this wrapper with the following code:from langchain.chat_models import ChatAnthropicmodel = ChatAnthropic()When working with ChatModels, it is preferred that you design your prompts as ChatPromptTemplates.
Here is an example below of doing that:from langchain.prompts import ChatPromptTemplateprompt = ChatPromptTemplate.from_messages([ ("system", "You are a helpful chatbot"), ("human", "Tell me a joke about {topic}"),])You can then use this in a chain as follows:chain = prompt | modelchain.invoke({"topic": "bears"})How is the prompt actually being formatted under the hood? We can see that by running the following codeprompt_value = prompt.format_prompt(topic="bears")model.convert_prompt(prompt_value)This produces the following formatted string:'\n\nHuman: <admin>You are a helpful chatbot</admin>\n\nHuman: Tell me a joke about bears\n\nAssistant:'We can see that under the hood LangChain is representing SystemMessages with Human: <admin>...</admin>,
and is appending an assistant message to the end IF the last message is NOT already an assistant message.If you decide instead to use a normal PromptTemplate (one that just works on a single string), let's take a look at
what happens:from langchain.prompts import PromptTemplateprompt = PromptTemplate.from_template("Tell me a joke about {topic}")prompt_value = prompt.format_prompt(topic="bears")model.convert_prompt(prompt_value)This produces the following formatted string:'\n\nHuman: Tell me a joke about bears\n\nAssistant:'We can see that it automatically adds the Human and Assistant tags.
What is happening under the hood?
First: the string gets converted to a single human message. This happens generically (because we are using a subclass of ChatModel).
Then, similarly to the above example, an empty Assistant message is getting appended.
This is Anthropic specific.[Deprecated] AnthropicThis Anthropic wrapper is subclassed from LLM.
We can import it with:from langchain.llms import Anthropicmodel = Anthropic()This model class is designed to work with normal PromptTemplates. An example of that is below:prompt = PromptTemplate.from_template("Tell me a joke about {topic}")chain = prompt | modelchain.invoke({"topic": "bears"})Let's see what is going on with the prompt templating under the hood!prompt_value = prompt.format_prompt(topic="bears")model.convert_prompt(prompt_value)This outputs the following'\n\nHuman: Tell me a joke about bears\n\nAssistant: Sure, here you go:\n'Notice that it adds the Human tag at the start of the string, and then finishes it with \n\nAssistant: Sure, here you go:.
The extra Sure, here you go was added on purpose by the Anthropic team.What happens if we have those symbols in the prompt directly?prompt = PromptTemplate.from_template("Human: Tell me a joke about {topic}")prompt_value = prompt.format_prompt(topic="bears")model.convert_prompt(prompt_value)This outputs:'\n\nHuman: Tell me a joke about bears'We can see that we detect that the user is trying to use the special tokens, and so we don't do any formatting.PreviousProvidersNextAWSPrompting OverviewPrompting Best PracticesChatAnthropicDeprecated Anthropic |
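To make the raw string format concrete, here is a minimal sketch of calling the (deprecated) Anthropic LLM wrapper with a hand-built prompt that follows the \n\nHuman:/\n\nAssistant: convention described above (assuming the anthropic package is installed and an ANTHROPIC_API_KEY environment variable is set; the joke returned will of course vary):

from langchain.llms import Anthropic

# Hand-built prompt following the convention above: the final turn must be "\n\nAssistant:".
raw_prompt = "\n\nHuman: Tell me a joke about bears\n\nAssistant:"

model = Anthropic()
print(model(raw_prompt))  # Claude continues the Assistant turn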
158 | https://python.langchain.com/docs/integrations/platforms/aws | ProvidersAWSOn this pageAWSAll functionality related to Amazon AWS platformLLMsBedrockSee a usage example.from langchain.llms.bedrock import BedrockAmazon API GatewayAmazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring, and API version management. API Gateway has no minimum fees or startup costs. You pay for the API calls you receive and the amount of data transferred out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales.See a usage example.from langchain.llms import AmazonAPIGatewayapi_url = "https://<api_gateway_id>.execute-api.<region>.amazonaws.com/LATEST/HF"# These are sample parameters for Falcon 40B Instruct Deployed from Amazon SageMaker JumpStartmodel_kwargs = { "max_new_tokens": 100, "num_return_sequences": 1, "top_k": 50, "top_p": 0.95, "do_sample": False, "return_full_text": True, "temperature": 0.2,}llm = AmazonAPIGateway(api_url=api_url, model_kwargs=model_kwargs)SageMaker EndpointAmazon SageMaker is a system that can build, train, and deploy machine learning (ML) models with fully managed infrastructure, tools, and workflows.We use SageMaker to host our model and expose it as the SageMaker Endpoint.See a usage example.from langchain.llms import SagemakerEndpointfrom langchain.llms.sagemaker_endpoint import LLMContentHandlerText Embedding ModelsBedrockSee a usage example.from langchain.embeddings import BedrockEmbeddingsSageMaker EndpointSee a usage example.from langchain.embeddings import SagemakerEndpointEmbeddingsfrom langchain.llms.sagemaker_endpoint import ContentHandlerBaseDocument loadersAWS S3 Directory and FileAmazon Simple Storage Service (Amazon S3) is an object storage service.
AWS S3 Directory
AWS S3 BucketsSee a usage example for S3DirectoryLoader.See a usage example for S3FileLoader.from langchain.document_loaders import S3DirectoryLoader, S3FileLoaderMemoryAWS DynamoDBAWS DynamoDB
is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.We have to configure the AWS CLI. We need to install the boto3 library.pip install boto3See a usage example.from langchain.memory import DynamoDBChatMessageHistoryPreviousAnthropicNextGoogleLLMsBedrockAmazon API GatewaySageMaker EndpointText Embedding ModelsBedrockSageMaker EndpointDocument loadersAWS S3 Directory and FileMemoryAWS DynamoDB |
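As a minimal sketch of the DynamoDB memory integration (assuming AWS credentials are configured via the AWS CLI and that a DynamoDB table named SessionTable with a SessionId primary key already exists - both names are illustrative):

from langchain.memory import DynamoDBChatMessageHistory

# Store and read back chat messages from the (pre-created) DynamoDB table.
history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="demo-session")
history.add_user_message("Hi!")
history.add_ai_message("Hello! How can I help you?")
print(history.messages)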
159 | https://python.langchain.com/docs/integrations/platforms/google | ProvidersGoogleOn this pageGoogleAll functionality related to Google Cloud PlatformLLMsVertex AIAccess PaLM LLMs like text-bison and code-bison via Google Cloud.from langchain.llms import VertexAIModel GardenAccess PaLM and hundreds of OSS models via Vertex AI Model Garden.from langchain.llms import VertexAIModelGardenChat modelsVertex AIAccess PaLM chat models like chat-bison and codechat-bison via Google Cloud.from langchain.chat_models import ChatVertexAIDocument LoaderGoogle BigQueryGoogle BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data.
BigQuery is a part of the Google Cloud Platform.First, we need to install the google-cloud-bigquery python package.pip install google-cloud-bigquerySee a usage example.from langchain.document_loaders import BigQueryLoaderGoogle Cloud StorageGoogle Cloud Storage is a managed service for storing unstructured data.First, we need to install the google-cloud-storage python package.pip install google-cloud-storageThere are two loaders for the Google Cloud Storage: the Directory and the File loaders.See a usage example.from langchain.document_loaders import GCSDirectoryLoaderSee a usage example.from langchain.document_loaders import GCSFileLoaderGoogle DriveGoogle Drive is a file storage and synchronization service developed by Google.Currently, only Google Docs are supported.First, we need to install several python packages.pip install google-api-python-client google-auth-httplib2 google-auth-oauthlibSee a usage example and authorizing instructions.from langchain.document_loaders import GoogleDriveLoaderVector StoreGoogle Vertex AI MatchingEngineGoogle Vertex AI Matching Engine provides
the industry's leading high-scale low latency vector database. These vector databases are commonly
referred to as vector similarity-matching or an approximate nearest neighbor (ANN) service.We need to install several python packages.pip install tensorflow google-cloud-aiplatform tensorflow-hub tensorflow-textSee a usage example.from langchain.vectorstores import MatchingEngineGoogle ScaNNGoogle ScaNN
(Scalable Nearest Neighbors) is a python package.ScaNN is a method for efficient vector similarity search at scale.ScaNN includes search space pruning and quantization for Maximum Inner
Product Search and also supports other distance functions such as
Euclidean distance. The implementation is optimized for x86 processors
with AVX2 support. See its Google Research github
for more details.We need to install scann python package.pip install scannSee a usage example.from langchain.vectorstores import ScaNNRetrieversVertex AI SearchGoogle Cloud Vertex AI Search
allows developers to quickly build generative AI powered search engines for customers and employees.First, you need to install the google-cloud-discoveryengine Python package.pip install google-cloud-discoveryengineSee a usage example.from langchain.retrievers import GoogleVertexAISearchRetrieverToolsGoogle SearchInstall requirements with pip install google-api-python-clientSet up a Custom Search Engine, following these instructionsGet an API Key and Custom Search Engine ID from the previous step, and set them as environment variables GOOGLE_API_KEY and GOOGLE_CSE_ID respectivelyThere exists a GoogleSearchAPIWrapper utility which wraps this API. To import this utility:from langchain.utilities import GoogleSearchAPIWrapperFor a more detailed walkthrough of this wrapper, see this notebook.We can easily load this wrapper as a Tool (to use with an Agent). We can do this with:from langchain.agents import load_toolstools = load_tools(["google-search"])Document TransformerGoogle Document AIDocument AI is a Google Cloud Platform
service to transform unstructured data from documents into structured data, making it easier
to understand, analyze, and consume. We need to set up a GCS bucket and create our own OCR processor.
The GCS_OUTPUT_PATH should be a path to a folder on GCS (starting with gs://)
and a processor name should look like projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID.
We can get it either programmatically or copy from the Prediction endpoint section of the Processor details
tab in the Google Cloud Console.pip install google-cloud-documentaipip install google-cloud-documentai-toolboxSee a usage example.from langchain.document_loaders.blob_loaders import Blobfrom langchain.document_loaders.parsers import DocAIParserPreviousAWSNextMicrosoftLLMsVertex AIModel GardenChat modelsVertex AIDocument LoaderGoogle BigQueryGoogle Cloud StorageGoogle DriveVector StoreGoogle Vertex AI MatchingEngineGoogle ScaNNRetrieversVertex AI SearchToolsGoogle SearchDocument TransformerGoogle Document AI |
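As a minimal sketch of the Google Search tool described above (assuming the GOOGLE_API_KEY and GOOGLE_CSE_ID environment variables have been set as described):

from langchain.utilities import GoogleSearchAPIWrapper

# Run a single query against the configured Custom Search Engine and print the summarized results.
search = GoogleSearchAPIWrapper()
print(search.run("What is LangChain?"))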
160 | https://python.langchain.com/docs/integrations/platforms/microsoft | ProvidersMicrosoftOn this pageMicrosoftAll functionality related to MicrosoftLLMAzure OpenAIMicrosoft Azure, often referred to as Azure is a cloud computing platform run by Microsoft, which offers access, management, and development of applications and services through global data centers. It provides a range of capabilities, including software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). Microsoft Azure supports many programming languages, tools, and frameworks, including Microsoft-specific and third-party software and systems.Azure OpenAI is an Azure service with powerful language models from OpenAI including the GPT-3, Codex and Embeddings model series for content generation, summarization, semantic search, and natural language to code translation.pip install openai tiktokenSet the environment variables to get access to the Azure OpenAI service.import osos.environ["OPENAI_API_TYPE"] = "azure"os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"os.environ["OPENAI_API_VERSION"] = "2023-05-15"See a usage example.from langchain.llms import AzureOpenAIText Embedding ModelsAzure OpenAISee a usage examplefrom langchain.embeddings import OpenAIEmbeddingsChat ModelsAzure OpenAISee a usage examplefrom langchain.chat_models import AzureChatOpenAIDocument loadersAzure Blob StorageAzure Blob Storage is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data.Azure Files offers fully managed
file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol,
Network File System (NFS) protocol, and Azure Files REST API. Azure Files are based on the Azure Blob Storage.Azure Blob Storage is designed for:Serving images or documents directly to a browser.Storing files for distributed access.Streaming video and audio.Writing to log files.Storing data for backup and restore, disaster recovery, and archiving.Storing data for analysis by an on-premises or Azure-hosted service.pip install azure-storage-blobSee a usage example for the Azure Blob Storage.from langchain.document_loaders import AzureBlobStorageContainerLoaderSee a usage example for the Azure Files.from langchain.document_loaders import AzureBlobStorageFileLoaderMicrosoft OneDriveMicrosoft OneDrive (formerly SkyDrive) is a file-hosting service operated by Microsoft.First, you need to install a python package.pip install o365See a usage example.from langchain.document_loaders import OneDriveLoaderMicrosoft WordMicrosoft Word is a word processor developed by Microsoft.See a usage example.from langchain.document_loaders import UnstructuredWordDocumentLoaderRetrieverAzure Cognitive SearchAzure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you'll work with the following capabilities:A search engine for full text search over a search index containing user-owned contentRich indexing, with lexical analysis and optional AI enrichment for content extraction and transformationRich query syntax for text search, fuzzy search, autocomplete, geo-search and moreProgrammability through REST APIs and client libraries in Azure SDKsAzure integration at the data layer, machine learning layer, and AI (Cognitive Services)See set up instructions.See a usage example.from langchain.retrievers import AzureCognitiveSearchRetrieverPreviousGoogleNextOpenAILLMAzure OpenAIText Embedding ModelsAzure OpenAIChat ModelsAzure OpenAIDocument loadersAzure Blob StorageMicrosoft OneDriveMicrosoft WordRetrieverAzure Cognitive Search |
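As a minimal sketch of the Azure OpenAI LLM wrapper (assuming the environment variables above are set; the deployment name "my-davinci-deployment" is a hypothetical placeholder for a deployment created in your Azure OpenAI resource):

from langchain.llms import AzureOpenAI

# deployment_name must match a deployment created in your Azure OpenAI resource.
llm = AzureOpenAI(deployment_name="my-davinci-deployment")
print(llm("Tell me a joke"))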
161 | https://python.langchain.com/docs/integrations/platforms/openai | ProvidersOpenAIOn this pageOpenAIAll functionality related to OpenAIOpenAI is an American artificial intelligence (AI) research laboratory
consisting of the non-profit OpenAI Incorporated
and its for-profit subsidiary corporation OpenAI Limited Partnership.
OpenAI conducts AI research with the declared intention of promoting and developing a friendly AI.
OpenAI systems run on an Azure-based supercomputing platform from Microsoft.The OpenAI API is powered by a diverse set of models with different capabilities and price points.ChatGPT is the Artificial Intelligence (AI) chatbot developed by OpenAI.Installation and SetupInstall the Python SDK withpip install openaiGet an OpenAI api key and set it as an environment variable (OPENAI_API_KEY)If you want to use OpenAI's tokenizer (only available for Python 3.9+), install itpip install tiktokenLLMSee a usage example.from langchain.llms import OpenAIIf you are using a model hosted on Azure, you should use different wrapper for that:from langchain.llms import AzureOpenAIFor a more detailed walkthrough of the Azure wrapper, see hereChat modelSee a usage example.from langchain.chat_models import ChatOpenAIIf you are using a model hosted on Azure, you should use different wrapper for that:from langchain.llms import AzureChatOpenAIFor a more detailed walkthrough of the Azure wrapper, see hereText Embedding ModelSee a usage examplefrom langchain.embeddings import OpenAIEmbeddingsTokenizerThere are several places you can use the tiktoken tokenizer. By default, it is used to count tokens
for OpenAI LLMs.You can also use it to count tokens when splitting documents with from langchain.text_splitter import CharacterTextSplitterCharacterTextSplitter.from_tiktoken_encoder(...)For a more detailed walkthrough of this, see this notebookDocument LoaderSee a usage example.from langchain.document_loaders.chatgpt import ChatGPTLoaderRetrieverSee a usage example.from langchain.retrievers import ChatGPTPluginRetrieverChainSee a usage example.from langchain.chains import OpenAIModerationChainPreviousMicrosoftNextActiveloop Deep LakeInstallation and SetupLLMChat modelText Embedding ModelTokenizerDocument LoaderRetrieverChain |
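As a minimal sketch of token-based splitting with the tiktoken tokenizer mentioned above (assuming tiktoken is installed):

from langchain.text_splitter import CharacterTextSplitter

# Split text into chunks measured in tokens rather than characters.
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)
chunks = text_splitter.split_text("LangChain is a framework for developing applications powered by language models. " * 50)
print(len(chunks))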
162 | https://python.langchain.com/docs/integrations/providers/activeloop_deeplake | ProvidersMoreActiveloop Deep LakeOn this pageActiveloop Deep LakeThis page covers how to use the Deep Lake ecosystem within LangChain.Why Deep Lake?More than just a (multi-modal) vector store. You can later use the dataset to fine-tune your own LLM models.Not only stores embeddings, but also the original data with automatic version control.Truly serverless. Doesn't require another service and can be used with major cloud providers (AWS S3, GCS, etc.)Activeloop Deep Lake supports SelfQuery Retrieval:
Activeloop Deep Lake Self Query RetrievalMore ResourcesUltimate Guide to LangChain & Deep Lake: Build ChatGPT to Answer Questions on Your Financial DataTwitter the-algorithm codebase analysis with Deep LakeCode UnderstandingHere is whitepaper and academic paper for Deep LakeHere is a set of additional resources available for review: Deep Lake, Get started and TutorialsInstallation and SetupInstall the Python package with pip install deeplakeWrappersVectorStoreThere exists a wrapper around Deep Lake, a data lake for Deep Learning applications, allowing you to use it as a vector store (for now), whether for semantic search or example selection.To import this vectorstore:from langchain.vectorstores import DeepLakeFor a more detailed walkthrough of the Deep Lake wrapper, see this notebookPreviousOpenAINextAI21 LabsWhy Deep Lake?More ResourcesInstallation and SetupWrappersVectorStore |
163 | https://python.langchain.com/docs/integrations/providers/ai21 | ProvidersMoreAI21 LabsOn this pageAI21 LabsThis page covers how to use the AI21 ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific AI21 wrappers.Installation and SetupGet an AI21 api key and set it as an environment variable (AI21_API_KEY)WrappersLLMThere exists an AI21 LLM wrapper, which you can access with from langchain.llms import AI21PreviousActiveloop Deep LakeNextAimInstallation and SetupWrappersLLM |
164 | https://python.langchain.com/docs/integrations/providers/aim_tracking | ProvidersMoreAimAimAim makes it super easy to visualize and debug LangChain executions. Aim tracks inputs and outputs of LLMs and tools, as well as actions of agents. With Aim, you can easily debug and examine an individual execution:Additionally, you have the option to compare multiple executions side by side:Aim is fully open source, learn more about Aim on GitHub.Let's move forward and see how to enable and configure Aim callback.Tracking LangChain Executions with AimIn this notebook we will explore three usage scenarios. To start off, we will install the necessary packages and import certain modules. Subsequently, we will configure two environment variables that can be established either within the Python script or through the terminal.pip install aimpip install langchainpip install openaipip install google-search-resultsimport osfrom datetime import datetimefrom langchain.llms import OpenAIfrom langchain.callbacks import AimCallbackHandler, StdOutCallbackHandlerOur examples use a GPT model as the LLM, and OpenAI offers an API for this purpose. You can obtain the key from the following link: https://platform.openai.com/account/api-keys .We will use the SerpApi to retrieve search results from Google. To acquire the SerpApi key, please go to https://serpapi.com/manage-api-key .os.environ["OPENAI_API_KEY"] = "..."os.environ["SERPAPI_API_KEY"] = "..."The event methods of AimCallbackHandler accept the LangChain module or agent as input and log at least the prompts and generated results, as well as the serialized version of the LangChain module, to the designated Aim run.session_group = datetime.now().strftime("%m.%d.%Y_%H.%M.%S")aim_callback = AimCallbackHandler( repo=".", experiment_name="scenario 1: OpenAI LLM",)callbacks = [StdOutCallbackHandler(), aim_callback]llm = OpenAI(temperature=0, callbacks=callbacks)The flush_tracker function is used to record LangChain assets on Aim. By default, the session is reset rather than being terminated outright.Scenario 1 In the first scenario, we will use OpenAI LLM.# scenario 1 - LLMllm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 3)aim_callback.flush_tracker( langchain_asset=llm, experiment_name="scenario 2: Chain with multiple SubChains on multiple generations",)Scenario 2 Scenario two involves chaining with multiple SubChains across multiple generations.from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChain# scenario 2 - Chaintemplate = """You are a playwright. 
Given the title of play, it is your job to write a synopsis for that title.Title: {title}Playwright: This is a synopsis for the above play:"""prompt_template = PromptTemplate(input_variables=["title"], template=template)synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)test_prompts = [ { "title": "documentary about good video games that push the boundary of game design" }, {"title": "the phenomenon behind the remarkable speed of cheetahs"}, {"title": "the best in class mlops tooling"},]synopsis_chain.apply(test_prompts)aim_callback.flush_tracker( langchain_asset=synopsis_chain, experiment_name="scenario 3: Agent with Tools")Scenario 3 The third scenario involves an agent with tools.from langchain.agents import initialize_agent, load_toolsfrom langchain.agents import AgentType# scenario 3 - Agent with Toolstools = load_tools(["serpapi", "llm-math"], llm=llm, callbacks=callbacks)agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callbacks=callbacks,)agent.run( "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")aim_callback.flush_tracker(langchain_asset=agent, reset=False, finish=True) > Entering new AgentExecutor chain... I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power. Action: Search Action Input: "Leo DiCaprio girlfriend" Observation: Leonardo DiCaprio seemed to prove a long-held theory about his love life right after splitting from girlfriend Camila Morrone just months ... Thought: I need to find out Camila Morrone's age Action: Search Action Input: "Camila Morrone age" Observation: 25 years Thought: I need to calculate 25 raised to the 0.43 power Action: Calculator Action Input: 25^0.43 Observation: Answer: 3.991298452658078 Thought: I now know the final answer Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078. > Finished chain.PreviousAI21 LabsNextAINetwork |
165 | https://python.langchain.com/docs/integrations/providers/ainetwork | ProvidersMoreAINetworkOn this pageAINetworkAI Network is a layer 1 blockchain designed to accommodate
large-scale AI models, utilizing a decentralized GPU network powered by the
$AIN token, enriching AI-driven NFTs (AINFTs).Installation and SetupYou need to install ain-py python package.pip install ain-pyYou need to set the AIN_BLOCKCHAIN_ACCOUNT_PRIVATE_KEY environmental variable to your AIN Blockchain Account Private Key.ToolkitSee a usage example.from langchain.agents.agent_toolkits.ainetwork.toolkit import AINetworkToolkitPreviousAimNextAirbyteInstallation and SetupToolkit |
166 | https://python.langchain.com/docs/integrations/providers/airbyte | ProvidersMoreAirbyteOn this pageAirbyteAirbyte is a data integration platform for ELT pipelines from APIs,
databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.Installation and SetupThese instructions show how to load any source from Airbyte into a local JSON file that can be read in as a document.Prerequisites:
Have Docker Desktop installed.Steps:Clone Airbyte from GitHub - git clone https://github.com/airbytehq/airbyte.git.Switch into the Airbyte directory - cd airbyte.Start Airbyte - docker compose up.In your browser, just visit http://localhost:8000. You will be asked for a username and password. By default, that's username airbyte and password password.Set up any source you wish.Set the destination as Local JSON, with a specified destination path - let's say /json_data. Set up a manual sync.Run the connection.To see what files are created, navigate to: file:///tmp/airbyte_local/.Document LoaderSee a usage example.from langchain.document_loaders import AirbyteJSONLoaderPreviousAINetworkNextAirtableInstallation and SetupDocument Loader |
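As a minimal sketch of reading the Local JSON output back in (the filename below is hypothetical - point the loader at whatever file the sync actually wrote under /tmp/airbyte_local/json_data/):

from langchain.document_loaders import AirbyteJSONLoader

# Load the JSONL file written by the Local JSON destination as LangChain Documents.
loader = AirbyteJSONLoader("/tmp/airbyte_local/json_data/_airbyte_raw_your_stream.jsonl")
docs = loader.load()
print(docs[0].page_content[:200])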
167 | https://python.langchain.com/docs/integrations/providers/airtable | ProvidersMoreAirtableOn this pageAirtableAirtable is a cloud collaboration service.
Airtable is a spreadsheet-database hybrid, with the features of a database but applied to a spreadsheet.
The fields in an Airtable table are similar to cells in a spreadsheet, but have types such as 'checkbox',
'phone number', and 'drop-down list', and can reference file attachments like images.Users can create a database, set up column types, add records, link tables to one another, collaborate, sort records
and publish views to external websites.Installation and Setuppip install pyairtableGet your API key.Get the ID of your base.Get the table ID from the table url.Document Loaderfrom langchain.document_loaders import AirtableLoaderSee an example.PreviousAirbyteNextAleph AlphaInstallation and SetupDocument Loader |
168 | https://python.langchain.com/docs/integrations/providers/aleph_alpha | ProvidersMoreAleph AlphaOn this pageAleph AlphaAleph Alpha was founded in 2019 with the mission to research and build the foundational technology for an era of strong AI. The team of international scientists, engineers, and innovators researches, develops, and deploys transformative AI like large language and multimodal models and runs the fastest European commercial AI cluster.The Luminous series is a family of large language models.Installation and Setuppip install aleph-alpha-clientYou have to create a new token. Please, see instructions.from getpass import getpassALEPH_ALPHA_API_KEY = getpass()LLMSee a usage example.from langchain.llms import AlephAlphaText Embedding ModelsSee a usage example.from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding, AlephAlphaAsymmetricSemanticEmbeddingPreviousAirtableNextAlibaba Cloud OpensearchInstallation and SetupLLMText Embedding Models |
169 | https://python.langchain.com/docs/integrations/providers/alibabacloud_opensearch | ProvidersMoreAlibaba Cloud OpensearchOn this pageAlibaba Cloud OpensearchAlibaba Cloud Opensearch OpenSearch is a one-stop platform to develop intelligent search services. OpenSearch was built based on the large-scale distributed search engine developed by Alibaba. OpenSearch serves more than 500 business cases in Alibaba Group and thousands of Alibaba Cloud customers. OpenSearch helps develop search services in different search scenarios, including e-commerce, O2O, multimedia, the content industry, communities and forums, and big data query in enterprises.OpenSearch helps you develop high quality, maintenance-free, and high performance intelligent search services to provide your users with high search efficiency and accuracy. OpenSearch provides the vector search feature. In specific scenarios, especially test question search and image search scenarios, you can use the vector search feature together with the multimodal search feature to improve the accuracy of search results. This topic describes the syntax and usage notes of vector indexes.Purchase an instance and configure itPurchase OpenSearch Vector Search Edition from Alibaba Cloud and configure the instance according to the help documentation.Alibaba Cloud Opensearch Vector Store Wrapperssupported functions:add_textsadd_documentsfrom_textsfrom_documentssimilarity_searchasimilarity_searchsimilarity_search_by_vectorasimilarity_search_by_vectorsimilarity_search_with_relevance_scoresFor a more detailed walk through of the Alibaba Cloud OpenSearch wrapper, see this notebookIf you encounter any problems during use, please feel free to contact xingshaomin.xsm@alibaba-inc.com , and we will do our best to provide you with assistance and support.PreviousAleph AlphaNextAnalyticDBPurchase an instance and configure itAlibaba Cloud Opensearch Vector Store Wrappers |
170 | https://python.langchain.com/docs/integrations/providers/analyticdb | ProvidersMoreAnalyticDBOn this pageAnalyticDBThis page covers how to use the AnalyticDB ecosystem within LangChain.VectorStoreThere exists a wrapper around AnalyticDB, allowing you to use it as a vectorstore,
whether for semantic search or example selection.To import this vectorstore:from langchain.vectorstores import AnalyticDBFor a more detailed walkthrough of the AnalyticDB wrapper, see this notebookPreviousAlibaba Cloud OpensearchNextAnnoyVectorStore |
171 | https://python.langchain.com/docs/integrations/providers/annoy | ProvidersMoreAnnoyOn this pageAnnoyAnnoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. Installation and Setuppip install annoyVectorstoreSee a usage example.from langchain.vectorstores import AnnoyPreviousAnalyticDBNextAnyscaleVectorstore |
172 | https://python.langchain.com/docs/integrations/providers/anyscale | ProvidersMoreAnyscaleOn this pageAnyscaleThis page covers how to use the Anyscale ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Anyscale wrappers.Installation and SetupGet an Anyscale Service URL, route and API key and set them as environment variables (ANYSCALE_SERVICE_URL,ANYSCALE_SERVICE_ROUTE, ANYSCALE_SERVICE_TOKEN). Please see the Anyscale docs for more details.WrappersLLMThere exists an Anyscale LLM wrapper, which you can access with from langchain.llms import AnyscalePreviousAnnoyNextApifyInstallation and SetupWrappersLLM |
173 | https://python.langchain.com/docs/integrations/providers/apify | ProvidersMoreApifyOn this pageApifyThis page covers how to use Apify within LangChain.OverviewApify is a cloud platform for web scraping and data extraction,
which provides an ecosystem of more than a thousand
ready-made apps called Actors for various scraping, crawling, and extraction use cases.This integration enables you run Actors on the Apify platform and load their results into LangChain to feed your vector
indexes with documents and data from the web, e.g. to generate answers from websites with documentation,
blogs, or knowledge bases.Installation and SetupInstall the Apify API client for Python with pip install apify-clientGet your Apify API token and either set it as
an environment variable (APIFY_API_TOKEN) or pass it to the ApifyWrapper as apify_api_token in the constructor.WrappersUtilityYou can use the ApifyWrapper to run Actors on the Apify platform.from langchain.utilities import ApifyWrapperFor a more detailed walkthrough of this wrapper, see this notebook.LoaderYou can also use our ApifyDatasetLoader to get data from Apify dataset.from langchain.document_loaders import ApifyDatasetLoaderFor a more detailed walkthrough of this loader, see this notebook.PreviousAnyscaleNextArangoDBOverviewInstallation and SetupWrappersUtilityLoader |
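As a minimal sketch of the utility wrapper (assuming APIFY_API_TOKEN is set; the Actor ID and dataset field names follow the public website-content-crawler Actor and may differ for other Actors):

from langchain.schema import Document
from langchain.utilities import ApifyWrapper

apify = ApifyWrapper()
# Run an Actor and map each item of its dataset to a LangChain Document.
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",
    run_input={"startUrls": [{"url": "https://python.langchain.com/"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"] or "", metadata={"source": item["url"]}
    ),
)
docs = loader.load()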
174 | https://python.langchain.com/docs/integrations/providers/arangodb | ProvidersMoreArangoDBOn this pageArangoDBArangoDB is a scalable graph database system to drive value from connected data, faster. Native graphs, an integrated search engine, and JSON support, via a single query language. ArangoDB runs on-prem, in the cloud – anywhere.DependenciesInstall the ArangoDB Python Driver package withpip install python-arangoGraph QA ChainConnect your ArangoDB Database with a chat model to get insights on your data. See the notebook example here.from arango import ArangoClientfrom langchain.graphs import ArangoGraphfrom langchain.chains import ArangoGraphQAChainPreviousApifyNextArgillaDependenciesGraph QA Chain |
175 | https://python.langchain.com/docs/integrations/providers/argilla | ProvidersMoreArgillaOn this pageArgillaArgilla is an open-source data curation platform for LLMs.
Using Argilla, everyone can build robust language models through faster data curation
using both human and machine feedback. We provide support for each step in the MLOps cycle,
from data labelling to model monitoring.Installation and SetupFirst, you'll need to install the argilla Python package as follows:pip install argilla --upgradeIf you already have an Argilla Server running, then you're good to go; if
you don't, you can refer to Argilla - 🚀 Quickstart to deploy Argilla either on HuggingFace Spaces, locally, or on a server.TrackingSee a usage example of ArgillaCallbackHandler.from langchain.callbacks import ArgillaCallbackHandlerPreviousArangoDBNextArthurInstallation and SetupTracking |
176 | https://python.langchain.com/docs/integrations/providers/arthur_tracking | ProvidersMoreArthurArthurArthur is a model monitoring and observability platform.The following guide shows how to run a registered chat LLM with the Arthur callback handler to automatically log model inferences to Arthur.If you do not have a model currently onboarded to Arthur, visit our onboarding guide for generative text models. For more information about how to use the Arthur SDK, visit our docs.from langchain.callbacks import ArthurCallbackHandlerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerfrom langchain.chat_models import ChatOpenAIfrom langchain.schema import HumanMessagePlace Arthur credentials herearthur_url = "https://app.arthur.ai"arthur_login = "your-arthur-login-username-here"arthur_model_id = "your-arthur-model-id-here"Create Langchain LLM with Arthur callback handlerdef make_langchain_chat_llm(): return ChatOpenAI( streaming=True, temperature=0.1, callbacks=[ StreamingStdOutCallbackHandler(), ArthurCallbackHandler.from_credentials( arthur_model_id, arthur_url=arthur_url, arthur_login=arthur_login) ])chatgpt = make_langchain_chat_llm() Please enter password for admin: ········Running the chat LLM with this run function will save the chat history in an ongoing list so that the conversation can reference earlier messages and log each response to the Arthur platform. You can view the history of this model's inferences on your model dashboard page.Enter q to quit the run loopdef run(llm): history = [] while True: user_input = input("\n>>> input >>>\n>>>: ") if user_input == "q": break history.append(HumanMessage(content=user_input)) history.append(llm(history))run(chatgpt) >>> input >>> >>>: What is a callback handler? A callback handler, also known as a callback function or callback method, is a piece of code that is executed in response to a specific event or condition. It is commonly used in programming languages that support event-driven or asynchronous programming paradigms. The purpose of a callback handler is to provide a way for developers to define custom behavior that should be executed when a certain event occurs. Instead of waiting for a result or blocking the execution, the program registers a callback function and continues with other tasks. When the event is triggered, the callback function is invoked, allowing the program to respond accordingly. Callback handlers are commonly used in various scenarios, such as handling user input, responding to network requests, processing asynchronous operations, and implementing event-driven architectures. They provide a flexible and modular way to handle events and decouple different components of a system. >>> input >>> >>>: What do I need to do to get the full benefits of this To get the full benefits of using a callback handler, you should consider the following: 1. Understand the event or condition: Identify the specific event or condition that you want to respond to with a callback handler. This could be user input, network requests, or any other asynchronous operation. 2. Define the callback function: Create a function that will be executed when the event or condition occurs. This function should contain the desired behavior or actions you want to take in response to the event. 3. Register the callback function: Depending on the programming language or framework you are using, you may need to register or attach the callback function to the appropriate event or condition.
This ensures that the callback function is invoked when the event occurs. 4. Handle the callback: Implement the necessary logic within the callback function to handle the event or condition. This could involve updating the user interface, processing data, making further requests, or triggering other actions. 5. Consider error handling: It's important to handle any potential errors or exceptions that may occur within the callback function. This ensures that your program can gracefully handle unexpected situations and prevent crashes or undesired behavior. 6. Maintain code readability and modularity: As your codebase grows, it's crucial to keep your callback handlers organized and maintainable. Consider using design patterns or architectural principles to structure your code in a modular and scalable way. By following these steps, you can leverage the benefits of callback handlers, such as asynchronous and event-driven programming, improved responsiveness, and modular code design. >>> input >>> >>>: qPreviousArgillaNextArxiv |
177 | https://python.langchain.com/docs/integrations/providers/arxiv | ProvidersMoreArxivOn this pageArxivarXiv is an open-access archive for 2 million scholarly articles in the fields of physics,
mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and
systems science, and economics.Installation and SetupFirst, you need to install arxiv python package.pip install arxivSecond, you need to install PyMuPDF python package which transforms PDF files downloaded from the arxiv.org site into the text format.pip install pymupdfDocument LoaderSee a usage example.from langchain.document_loaders import ArxivLoaderRetrieverSee a usage example.from langchain.retrievers import ArxivRetrieverPreviousArthurNextAtlasInstallation and SetupDocument LoaderRetriever |
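As a minimal sketch of the document loader (the arXiv identifier below is just an example query):

from langchain.document_loaders import ArxivLoader

# Download up to two papers matching the query and load them as Documents.
docs = ArxivLoader(query="1605.08386", load_max_docs=2).load()
print(docs[0].metadata)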
178 | https://python.langchain.com/docs/integrations/providers/atlas | ProvidersMoreAtlasOn this pageAtlasNomic Atlas is a platform for interacting with both
small and internet-scale unstructured datasets.Installation and SetupInstall the Python package with pip install nomicNomic is also included in LangChain's poetry extras: poetry install -E allVectorStoreSee a usage example.from langchain.vectorstores import AtlasDBPreviousArxivNextAwaDBInstallation and SetupVectorStore |
179 | https://python.langchain.com/docs/integrations/providers/awadb | ProvidersMoreAwaDBOn this pageAwaDBAwaDB is an AI Native database for the search and storage of embedding vectors used by LLM Applications.Installation and Setuppip install awadbVector Storefrom langchain.vectorstores import AwaDBSee a usage example.Text Embedding Modelfrom langchain.embeddings import AwaEmbeddingsSee a usage example.PreviousAtlasNextAWS DynamoDBInstallation and SetupVector StoreText Embedding Model |
180 | https://python.langchain.com/docs/integrations/providers/aws_dynamodb | ProvidersAWSOn this pageAWSAll functionality related to Amazon AWS platformLLMsBedrockSee a usage example.from langchain.llms.bedrock import BedrockAmazon API GatewayAmazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring, and API version management. API Gateway has no minimum fees or startup costs. You pay for the API calls you receive and the amount of data transferred out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales.See a usage example.from langchain.llms import AmazonAPIGatewayapi_url = "https://<api_gateway_id>.execute-api.<region>.amazonaws.com/LATEST/HF"# These are sample parameters for Falcon 40B Instruct Deployed from Amazon SageMaker JumpStartmodel_kwargs = { "max_new_tokens": 100, "num_return_sequences": 1, "top_k": 50, "top_p": 0.95, "do_sample": False, "return_full_text": True, "temperature": 0.2,}llm = AmazonAPIGateway(api_url=api_url, model_kwargs=model_kwargs)SageMaker EndpointAmazon SageMaker is a system that can build, train, and deploy machine learning (ML) models with fully managed infrastructure, tools, and workflows.We use SageMaker to host our model and expose it as the SageMaker Endpoint.See a usage example.from langchain.llms import SagemakerEndpointfrom langchain.llms.sagemaker_endpoint import LLMContentHandlerText Embedding ModelsBedrockSee a usage example.from langchain.embeddings import BedrockEmbeddingsSageMaker EndpointSee a usage example.from langchain.embeddings import SagemakerEndpointEmbeddingsfrom langchain.llms.sagemaker_endpoint import ContentHandlerBaseDocument loadersAWS S3 Directory and FileAmazon Simple Storage Service (Amazon S3) is an object storage service.
AWS S3 Directory
AWS S3 BucketsSee a usage example for S3DirectoryLoader.See a usage example for S3FileLoader.from langchain.document_loaders import S3DirectoryLoader, S3FileLoaderMemoryAWS DynamoDBAWS DynamoDB
is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.We have to configure the AWS CLI. We need to install the boto3 library.pip install boto3See a usage example.from langchain.memory import DynamoDBChatMessageHistoryPreviousAnthropicNextGoogleLLMsBedrockAmazon API GatewaySageMaker EndpointText Embedding ModelsBedrockSageMaker EndpointDocument loadersAWS S3 Directory and FileMemoryAWS DynamoDB |
181 | https://python.langchain.com/docs/integrations/providers/azlyrics | ProvidersMoreAZLyricsOn this pageAZLyricsAZLyrics is a large, legal, every day growing collection of lyrics.Installation and SetupThere isn't any special setup for it.Document LoaderSee a usage example.from langchain.document_loaders import AZLyricsLoaderPreviousAWS DynamoDBNextBagelDBInstallation and SetupDocument Loader |
182 | https://python.langchain.com/docs/integrations/providers/bageldb | ProvidersMoreBagelDBOn this pageBagelDBBagelDB (Open Vector Database for AI) is like GitHub for AI data.
It is a collaborative platform where users can create,
share, and manage vector datasets. It can support private projects for independent developers,
internal collaborations for enterprises, and public contributions for data DAOs.Installation and Setuppip install betabageldbVectorStoreSee a usage example.from langchain.vectorstores import BagelPreviousAZLyricsNextBananaInstallation and SetupVectorStore |
183 | https://python.langchain.com/docs/integrations/providers/bananadev | ProvidersMoreBananaOn this pageBananaBanana provides serverless GPU inference for AI models, including a CI/CD build pipeline and a simple Python framework (Potassium) to serve your models.This page covers how to use the Banana ecosystem within LangChain.It is broken into two parts: installation and setup, and then references to specific Banana wrappers.Installation and SetupInstall with pip install banana-devGet a Banana API key from the Banana.dev dashboard and set it as an environment variable (BANANA_API_KEY)Get your model's key and URL slug from the model's details pageDefine your Banana TemplateYou'll need to set up a GitHub repo for your Banana app. You can get started in 5 minutes using this guide.Alternatively, for a ready-to-go LLM example, you can check out Banana's CodeLlama-7B-Instruct-GPTQ GitHub repository. Just fork it and deploy it within Banana.Other starter repos are available here.Build the Banana appTo use Banana apps within Langchain, they must include the outputs key
in the returned json, and the value must be a string.# Return the results as a dictionaryresult = {'outputs': result}An example inference function would be:@app.handler("/")def handler(context: dict, request: Request) -> Response: """Handle a request to generate code from a prompt.""" model = context.get("model") tokenizer = context.get("tokenizer") max_new_tokens = request.json.get("max_new_tokens", 512) temperature = request.json.get("temperature", 0.7) prompt = request.json.get("prompt") prompt_template=f'''[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```: {prompt} [/INST] ''' input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=temperature, max_new_tokens=max_new_tokens) result = tokenizer.decode(output[0]) return Response(json={"outputs": result}, status=200)This example is from the app.py file in CodeLlama-7B-Instruct-GPTQ.WrappersLLMWithin Langchain, there exists a Banana LLM wrapper, which you can access withfrom langchain.llms import BananaYou need to provide a model key and model url slug, which you can get from the model's details page in the Banana.dev dashboard.llm = Banana(model_key="YOUR_MODEL_KEY", model_url_slug="YOUR_MODEL_URL_SLUG")PreviousBagelDBNextBasetenInstallation and SetupDefine your Banana TemplateBuild the Banana appWrappersLLM |
184 | https://python.langchain.com/docs/integrations/providers/baseten | ProvidersMoreBasetenOn this pageBasetenLearn how to use LangChain with models deployed on Baseten.Installation and setupCreate a Baseten account and API key.Install the Baseten Python client with pip install basetenUse your API key to authenticate with baseten loginInvoking a modelBaseten integrates with LangChain through the LLM module, which provides a standardized and interoperable interface for models that are deployed on your Baseten workspace.You can deploy foundation models like WizardLM and Alpaca with one click from the Baseten model library or if you have your own model, deploy it with this tutorial.In this example, we'll work with WizardLM. Deploy WizardLM here and follow along with the deployed model's version ID.from langchain.llms import Basetenwizardlm = Baseten(model="MODEL_VERSION_ID", verbose=True)wizardlm("What is the difference between a Wizard and a Sorcerer?")PreviousBananaNextBeamInstallation and setupInvoking a model |
185 | https://python.langchain.com/docs/integrations/providers/beam | ProvidersMoreBeamOn this pageBeamThis page covers how to use Beam within LangChain.
It is broken into two parts: installation and setup, and then references to specific Beam wrappers.Installation and SetupCreate an accountInstall the Beam CLI with curl https://raw.githubusercontent.com/slai-labs/get-beam/main/get-beam.sh -sSfL | shRegister API keys with beam configureSet environment variables (BEAM_CLIENT_ID) and (BEAM_CLIENT_SECRET)Install the Beam SDK pip install beam-sdkWrappersLLMThere exists a Beam LLM wrapper, which you can access withfrom langchain.llms.beam import BeamDefine your Beam app.This is the environment you’ll be developing against once you start the app.
It's also used to define the maximum response length from the model.llm = Beam(model_name="gpt2", name="langchain-gpt2-test", cpu=8, memory="32Gi", gpu="A10G", python_version="python3.8", python_packages=[ "diffusers[torch]>=0.10", "transformers", "torch", "pillow", "accelerate", "safetensors", "xformers",], max_length="50", verbose=False)Deploy your Beam appOnce defined, you can deploy your Beam app by calling your model's _deploy() method.llm._deploy()Call your Beam appOnce a Beam model is deployed, it can be called by calling your model's _call() method.
This returns the GPT2 text response to your prompt.response = llm._call("Running machine learning on a remote GPU")An example script which deploys the model and calls it would be:from langchain.llms.beam import Beamimport timellm = Beam(model_name="gpt2", name="langchain-gpt2-test", cpu=8, memory="32Gi", gpu="A10G", python_version="python3.8", python_packages=[ "diffusers[torch]>=0.10", "transformers", "torch", "pillow", "accelerate", "safetensors", "xformers",], max_length="50", verbose=False)llm._deploy()response = llm._call("Running machine learning on a remote GPU")print(response)PreviousBasetenNextBeautiful SoupInstallation and SetupWrappersLLMDefine your Beam app.Deploy your Beam appCall your Beam app |
186 | https://python.langchain.com/docs/integrations/providers/beautiful_soup | ProvidersMoreBeautiful SoupOn this pageBeautiful SoupBeautiful Soup is a Python package for parsing
HTML and XML documents, including those with malformed markup (i.e. non-closed tags, so named after tag soup).
It creates a parse tree for parsed pages that can be used to extract data from HTML, which
is useful for web scraping.Installation and Setuppip install beautifulsoup4Document TransformerSee a usage example.from langchain.document_transformers import BeautifulSoupTransformer |
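A short sketch of the transformer applied to freshly loaded HTML; the AsyncHtmlLoader, the URL, and the tag list are just one illustrative way to obtain and filter documents:
from langchain.document_loaders import AsyncHtmlLoader
from langchain.document_transformers import BeautifulSoupTransformer
# Load raw HTML, then keep only the text found inside the listed tags.
loader = AsyncHtmlLoader(["https://www.example.com"])
docs = loader.load()
bs_transformer = BeautifulSoupTransformer()
docs_transformed = bs_transformer.transform_documents(docs, tags_to_extract=["p", "li", "a"])
print(docs_transformed[0].page_content[:200])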
187 | https://python.langchain.com/docs/integrations/providers/bilibili | ProvidersMoreBiliBiliOn this pageBiliBiliBilibili is one of the most beloved long-form video sites in China.Installation and Setuppip install bilibili-api-pythonDocument LoaderSee a usage example.from langchain.document_loaders import BiliBiliLoader |
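A minimal loader sketch; the video URL is a placeholder, and the loader returns the video's metadata and transcript as documents:
from langchain.document_loaders import BiliBiliLoader
# Placeholder video URL.
loader = BiliBiliLoader(["https://www.bilibili.com/video/BV1xt411o7Xu/"])
docs = loader.load()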
188 | https://python.langchain.com/docs/integrations/providers/bittensor | ProvidersMoreNIBittensorOn this pageNIBittensorThis page covers how to use the BittensorLLM inference runtime within LangChain.
It is broken into two parts: installation and setup, and then examples of NIBittensorLLM usage.Installation and SetupInstall the Python package with pip install langchainWrappersLLMThere exists a NIBittensor LLM wrapper, which you can access with:from langchain.llms import NIBittensorLLMIt provides a unified interface for all models:llm = NIBittensorLLM(system_prompt="Your task is to provide concise and accurate response based on user prompt")print(llm('Write a fibonacci function in python with golden ratio'))Multiple responses from top miners can be accessed using the top_responses parameter (import json to parse them):import jsonmulti_response_llm = NIBittensorLLM(top_responses=10)multi_resp = multi_response_llm("What is Neural Network Feeding Mechanism?")json_multi_resp = json.loads(multi_resp)print(json_multi_resp) |
189 | https://python.langchain.com/docs/integrations/providers/blackboard | ProvidersMoreBlackboardOn this pageBlackboardBlackboard Learn (previously the Blackboard Learning Management System)
is a web-based virtual learning environment and learning management system developed by Blackboard Inc.
The software features course management, customizable open architecture, and scalable design that allows
integration with student information systems and authentication protocols. It may be installed on local servers,
hosted by Blackboard ASP Solutions, or provided as Software as a Service hosted on Amazon Web Services.
Its main purposes are stated to include the addition of online elements to courses traditionally delivered
face-to-face and development of completely online courses with few or no face-to-face meetings.Installation and SetupThere isn't any special setup for it.Document LoaderSee a usage example.from langchain.document_loaders import BlackboardLoader |
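A hedged loader sketch; the course URL and the bbrouter cookie value are placeholders you would copy from an authenticated browser session:
from langchain.document_loaders import BlackboardLoader
loader = BlackboardLoader(
    blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",  # placeholder course URL
    bbrouter="expires:12345...",  # placeholder session cookie
    load_all_recursively=True,
)
docs = loader.load()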
190 | https://python.langchain.com/docs/integrations/providers/brave_search | ProvidersMoreBrave SearchOn this pageBrave SearchBrave Search is a search engine developed by Brave Software.Brave Search uses its own web index. As of May 2022, it covered over 10 billion pages and was used to serve 92%
of search results without relying on any third-parties, with the remainder being retrieved
server-side from the Bing API or (on an opt-in basis) client-side from Google. According
to Brave, the index was kept "intentionally smaller than that of Google or Bing" in order to
help avoid spam and other low-quality content, with the disadvantage that "Brave Search is
not yet as good as Google in recovering long-tail queries."Brave Search Premium: As of April 2023 Brave Search is an ad-free website, but it will
eventually switch to a new model that will include ads and premium users will get an ad-free experience.
User data including IP addresses won't be collected from its users by default. A premium account
will be required for opt-in data-collection.Installation and SetupTo get access to the Brave Search API, you need to create an account and get an API key.Document LoaderSee a usage example.from langchain.document_loaders import BraveSearchLoaderToolSee a usage example.from langchain.tools import BraveSearch |
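Sketches for both integrations, assuming the API key is stored in a BRAVE_SEARCH_API_KEY environment variable; the query and search_kwargs values are illustrative only:
import os
from langchain.document_loaders import BraveSearchLoader
from langchain.tools import BraveSearch
api_key = os.environ["BRAVE_SEARCH_API_KEY"]
# Document loader: each search hit becomes a Document.
loader = BraveSearchLoader(query="obama middle name", api_key=api_key, search_kwargs={"count": 3})
docs = loader.load()
# Tool: convenient for agents; returns the raw results as a string.
tool = BraveSearch.from_api_key(api_key=api_key, search_kwargs={"count": 3})
print(tool.run("obama middle name"))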
191 | https://python.langchain.com/docs/integrations/providers/cassandra | ProvidersMoreCassandraOn this pageCassandraApache Cassandra® is a free and open-source, distributed, wide-column
store, NoSQL database management system designed to handle large amounts of data across many commodity servers,
providing high availability with no single point of failure. Cassandra offers support for clusters spanning
multiple datacenters, with asynchronous masterless replication allowing low latency operations for all clients.
Cassandra was designed to combine Amazon's Dynamo distributed storage and replication
techniques with Google's Bigtable data and storage engine model.Installation and Setuppip install cassandra-driverpip install cassioVector StoreSee a usage example.from langchain.vectorstores import CassandraMemorySee a usage example.from langchain.memory import CassandraChatMessageHistory |
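A rough vector-store sketch, assuming a locally reachable Cassandra node, an existing keyspace, and the cassio-backed constructor shown below; the contact point, keyspace, and table name are placeholders:
from cassandra.cluster import Cluster
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Cassandra
# Placeholder contact point and keyspace; both must already exist.
session = Cluster(["127.0.0.1"]).connect()
vectorstore = Cassandra(
    embedding=OpenAIEmbeddings(),
    session=session,
    keyspace="demo_keyspace",
    table_name="langchain_demo",
)
vectorstore.add_texts(["Cassandra is a wide-column NoSQL store."])
print(vectorstore.similarity_search("wide-column databases", k=1))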
192 | https://python.langchain.com/docs/integrations/providers/cerebriumai | ProvidersMoreCerebriumAIOn this pageCerebriumAIThis page covers how to use the CerebriumAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific CerebriumAI wrappers.Installation and SetupInstall with pip install cerebriumGet a CerebriumAI API key and set it as an environment variable (CEREBRIUMAI_API_KEY)WrappersLLMThere exists a CerebriumAI LLM wrapper, which you can access with from langchain.llms import CerebriumAI |
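A minimal sketch, assuming you have already deployed a model on Cerebrium; the API key, endpoint URL, and prompt are placeholders:
import os
from langchain.llms import CerebriumAI
os.environ["CEREBRIUMAI_API_KEY"] = "YOUR_KEY_HERE"  # placeholder key
# Placeholder endpoint for your own deployed Cerebrium model.
llm = CerebriumAI(endpoint_url="https://run.cerebrium.ai/YOUR-ENDPOINT/predict")
print(llm("Tell me a joke about GPUs"))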
193 | https://python.langchain.com/docs/integrations/providers/chaindesk | ProvidersMoreChaindeskOn this pageChaindeskChaindesk is an open-source document retrieval platform that connects your personal data with Large Language Models.Installation and SetupWe need to sign up for Chaindesk, create a datastore, add some data, and get the datastore API endpoint URL.
We also need the API key.RetrieverSee a usage example.from langchain.retrievers import ChaindeskRetriever |
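A hedged retriever sketch; the datastore_url and API key are placeholders copied from your Chaindesk dashboard:
from langchain.retrievers import ChaindeskRetriever
retriever = ChaindeskRetriever(
    datastore_url="https://your-datastore-id.chaindesk.ai/query",  # placeholder datastore endpoint
    api_key="CHAINDESK_API_KEY",  # placeholder; only needed for private datastores
)
docs = retriever.get_relevant_documents("What is Chaindesk?")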
194 | https://python.langchain.com/docs/integrations/providers/chroma | ProvidersMoreChromaOn this pageChromaChroma is a database for building AI applications with embeddings.Installation and Setuppip install chromadbVectorStoreThere exists a wrapper around Chroma vector databases, allowing you to use it as a vectorstore,
whether for semantic search or example selection.from langchain.vectorstores import ChromaFor a more detailed walkthrough of the Chroma wrapper, see this notebookRetrieverSee a usage example.from langchain.retrievers import SelfQueryRetriever |
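A minimal end-to-end sketch using an in-memory Chroma collection; the texts and the query are illustrative:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
texts = ["Chroma stores embeddings.", "LangChain wires LLM components together."]
# Build an in-memory collection from raw texts, then run a semantic search.
db = Chroma.from_texts(texts, OpenAIEmbeddings())
print(db.similarity_search("What stores embeddings?", k=1))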