https://python.langchain.com/docs/integrations/llms/openai
# OpenAI

OpenAI offers a spectrum of models with different levels of power suitable for different tasks. This example goes over how to use LangChain to interact with OpenAI models.

```python
# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass

OPENAI_API_KEY = getpass()

import os

os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
```

Should you need to specify your organization ID, you can use the following cell. However, it is not required if you are only part of a single organization or intend to use your default organization. You can check your default organization here.

To specify your organization, you can use this:

```python
OPENAI_ORGANIZATION = getpass()

os.environ["OPENAI_ORGANIZATION"] = OPENAI_ORGANIZATION
```

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

llm = OpenAI()
```

If you want to specify your OpenAI API key and/or organization ID manually, you can use the following:

```python
llm = OpenAI(openai_api_key="YOUR_API_KEY", openai_organization="YOUR_ORGANIZATION_ID")
```

Remove the openai_organization parameter if it does not apply to you.

```python
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
```

```
' Justin Bieber was born in 1994, so the NFL team that won the Super Bowl in 1994 was the Dallas Cowboys.'
```

If you are behind an explicit proxy, you can use the OPENAI_PROXY environment variable to pass through:

```python
os.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080"
```
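Finally, since the page mentions that OpenAI offers a spectrum of models but never shows selecting one, here is a minimal sketch of common constructor parameters; the values are illustrative assumptions, not recommendations.

```python
# A sketch of commonly used constructor parameters for the OpenAI LLM.
# The specific values below are illustrative, not recommendations.
from langchain.llms import OpenAI

llm = OpenAI(
    model_name="text-davinci-003",  # which completion model to use
    temperature=0.7,                # sampling temperature; 0 is most deterministic
    max_tokens=256,                 # cap on the number of generated tokens
)
```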
https://python.langchain.com/docs/integrations/llms/openllm
# OpenLLM

🦾 OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps.

## Installation

Install openllm through PyPI:

```bash
pip install openllm
```

## Launch OpenLLM server locally

To start an LLM server, use the openllm start command. For example, to start a dolly-v2 server, run the following command from a terminal:

```bash
openllm start dolly-v2
```

## Wrapper

```python
from langchain.llms import OpenLLM

server_url = "http://localhost:3000"  # Replace with remote host if you are running on a remote server
llm = OpenLLM(server_url=server_url)
```

## Optional: Local LLM Inference

You may also choose to initialize an LLM managed by OpenLLM locally from the current process. This is useful for development purposes and allows developers to quickly try out different types of LLMs.

When moving LLM applications to production, we recommend deploying the OpenLLM server separately and accessing it via the server_url option demonstrated above.

To load an LLM locally via the LangChain wrapper:

```python
from langchain.llms import OpenLLM

llm = OpenLLM(
    model_name="dolly-v2",
    model_id="databricks/dolly-v2-3b",
    temperature=0.94,
    repetition_penalty=1.2,
)
```

## Integrate with an LLMChain

```python
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

template = "What is a good name for a company that makes {product}?"

prompt = PromptTemplate(template=template, input_variables=["product"])

llm_chain = LLMChain(prompt=prompt, llm=llm)

generated = llm_chain.run(product="mechanical keyboard")
print(generated)
```

```
iLkb
```
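As a closing note, the wrapper is an ordinary LangChain LLM, so it can also be called directly without a chain. A minimal sketch, assuming the server from the Wrapper section above is running (the prompt text is illustrative):

```python
# A minimal sketch: call the OpenLLM wrapper directly, assuming an OpenLLM
# server is running at server_url as shown in the Wrapper section above.
from langchain.llms import OpenLLM

llm = OpenLLM(server_url="http://localhost:3000")
# LangChain LLMs are callable: pass a prompt string, get the completion back.
print(llm("What is the difference between a fruit and a vegetable?"))
```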
https://python.langchain.com/docs/integrations/llms/openlm
# OpenLM

OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP. It implements the OpenAI Completion class so that it can be used as a drop-in replacement for the OpenAI API. This changeset utilizes BaseOpenAI for minimal added code.

This example goes over how to use LangChain to interact with both OpenAI and HuggingFace. You'll need API keys from both.

## Setup

Install dependencies and set API keys.

```python
# Uncomment to install openlm and openai if you haven't already
# !pip install openlm
# !pip install openai

from getpass import getpass
import os

# Check if OPENAI_API_KEY environment variable is set
if "OPENAI_API_KEY" not in os.environ:
    print("Enter your OpenAI API key:")
    os.environ["OPENAI_API_KEY"] = getpass()

# Check if HF_API_TOKEN environment variable is set
if "HF_API_TOKEN" not in os.environ:
    print("Enter your HuggingFace Hub API key:")
    os.environ["HF_API_TOKEN"] = getpass()
```

## Using LangChain with OpenLM

Here we're going to call two models in an LLMChain: text-davinci-003 from OpenAI and gpt2 on HuggingFace.

```python
from langchain.llms import OpenLM
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

question = "What is the capital of France?"
template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

for model in ["text-davinci-003", "huggingface.co/gpt2"]:
    llm = OpenLM(model=model)
    llm_chain = LLMChain(prompt=prompt, llm=llm)
    result = llm_chain.run(question)
    print(
        """Model: {}
Result: {}""".format(model, result)
    )
```

```
Model: text-davinci-003
Result: France is a country in Europe. The capital of France is Paris.

Model: huggingface.co/gpt2
Result: Question: What is the capital of France?

Answer: Let's think step by step. I am not going to lie, this is a complicated issue, and I don't see any solutions to all this, but it is still far more
```
https://python.langchain.com/docs/integrations/llms/petals
# Petals

Petals runs 100B+ language models at home, BitTorrent-style.

This notebook goes over how to use LangChain with Petals.

## Install petals

The petals package is required to use the Petals API. Install petals using pip3 install petals.

For Apple Silicon (M1/M2) users, please follow this guide to install petals: https://github.com/bigscience-workshop/petals/issues/147#issuecomment-1365379642

```bash
pip3 install petals
```

## Imports

```python
import os
from langchain.llms import Petals
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
```

## Set the Environment API Key

Make sure to get your API key from Huggingface.

```python
from getpass import getpass

HUGGINGFACE_API_KEY = getpass()

os.environ["HUGGINGFACE_API_KEY"] = HUGGINGFACE_API_KEY
```

## Create the Petals instance

You can specify different parameters such as the model name, max new tokens, temperature, etc.

```python
# this can take several minutes to download big files!
llm = Petals(model_name="bigscience/bloom-petals")
```

```
Downloading:   1%|▏ | 40.8M/7.19G [00:24<15:44, 7.57MB/s]
```

## Create a Prompt Template

We will create a prompt template for Question and Answer.

```python
template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])
```

## Initiate the LLMChain

```python
llm_chain = LLMChain(prompt=prompt, llm=llm)
```

## Run the LLMChain

Provide a question and run the LLMChain.

```python
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
```
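To illustrate the tunable parameters mentioned in the instance-creation step, here is a minimal sketch. Treating max_new_tokens and temperature as available constructor fields is an assumption based on the wrapper's common generation settings, and the values are illustrative.

```python
# A sketch of passing generation parameters to the Petals wrapper. The exact
# field names and values are illustrative assumptions; check the wrapper's
# signature in your installed LangChain version.
from langchain.llms import Petals

llm = Petals(
    model_name="bigscience/bloom-petals",
    max_new_tokens=256,  # cap on generated tokens
    temperature=0.7,     # sampling temperature
)
```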
https://python.langchain.com/docs/integrations/llms/pipelineai
# PipelineAI

PipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models.

This notebook goes over how to use LangChain with PipelineAI.

## PipelineAI example

This example shows how PipelineAI integrates with LangChain; it was created by PipelineAI.

## Setup

The pipeline-ai library is required to use the PipelineAI API, AKA Pipeline Cloud. Install pipeline-ai using pip install pipeline-ai.

```bash
# Install the package
pip install pipeline-ai
```

## Example

### Imports

```python
import os
from langchain.llms import PipelineAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
```

### Set the Environment API Key

Make sure to get your API key from PipelineAI. Check out the cloud quickstart guide. You'll be given a 30-day free trial with 10 hours of serverless GPU compute to test different models.

```python
os.environ["PIPELINE_API_KEY"] = "YOUR_API_KEY_HERE"
```

### Create the PipelineAI instance

When instantiating PipelineAI, you need to specify the id or tag of the pipeline you want to use, e.g. pipeline_key = "public/gpt-j:base". You then have the option of passing additional pipeline-specific keyword arguments:

```python
llm = PipelineAI(pipeline_key="YOUR_PIPELINE_KEY", pipeline_kwargs={...})
```

### Create a Prompt Template

We will create a prompt template for Question and Answer.

```python
template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])
```

### Initiate the LLMChain

```python
llm_chain = LLMChain(prompt=prompt, llm=llm)
```

### Run the LLMChain

Provide a question and run the LLMChain.

```python
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
```
https://python.langchain.com/docs/integrations/llms/predibase
# Predibase

Predibase allows you to train, finetune, and deploy any ML model, from linear regression to large language models. This example demonstrates using LangChain with models deployed on Predibase.

## Setup

To run this notebook, you'll need a Predibase account and an API key.

You'll also need to install the Predibase Python package:

```bash
pip install predibase
```

```python
import os

os.environ["PREDIBASE_API_TOKEN"] = "{PREDIBASE_API_TOKEN}"
```

## Initial Call

```python
from langchain.llms import Predibase

model = Predibase(
    model="vicuna-13b", predibase_api_key=os.environ.get("PREDIBASE_API_TOKEN")
)

response = model("Can you recommend me a nice dry wine?")
print(response)
```

## Chain Call Setup

```python
llm = Predibase(
    model="vicuna-13b", predibase_api_key=os.environ.get("PREDIBASE_API_TOKEN")
)
```

## SequentialChain

```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# This is an LLMChain to write a synopsis given a title of a play.
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.

Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template)

# This is an LLMChain to write a review of a play given a synopsis.
template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.

Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:"""
prompt_template = PromptTemplate(input_variables=["synopsis"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template)

# This is the overall chain where we run these two chains in sequence.
from langchain.chains import SimpleSequentialChain

overall_chain = SimpleSequentialChain(
    chains=[synopsis_chain, review_chain], verbose=True
)

review = overall_chain.run("Tragedy at sunset on the beach")
```

## Fine-tuned LLM (use your own fine-tuned LLM from Predibase)

```python
from langchain.llms import Predibase

# replace my-finetuned-LLM with the name of your model in Predibase
model = Predibase(
    model="my-finetuned-LLM", predibase_api_key=os.environ.get("PREDIBASE_API_TOKEN")
)

# response = model("Can you help categorize the following emails into positive, negative, and neutral?")
```
https://python.langchain.com/docs/integrations/llms/predictionguard
# Prediction Guard

```bash
pip install predictionguard langchain
```

```python
import os

import predictionguard as pg
from langchain.llms import PredictionGuard
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
```

## Basic LLM usage

```python
# Optional: add your OpenAI API key. This is optional, as Prediction Guard allows
# you to access all the latest open access models (see https://docs.predictionguard.com)
os.environ["OPENAI_API_KEY"] = "<your OpenAI api key>"

# Your Prediction Guard API key. Get one at predictionguard.com
os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"

pgllm = PredictionGuard(model="OpenAI-text-davinci-003")

pgllm("Tell me a joke")
```

## Control the output structure/type of LLMs

```python
template = """Respond to the following query based on the context.

Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! 🎉 We have officially added TWO new candle subscription box options! 📦
Exclusive Candle Box - $80
Monthly Candle Box - $45 (NEW!)
Scent of The Month Box - $28 (NEW!)
Head to stories to get ALLL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉

Query: {query}

Result: """
prompt = PromptTemplate(template=template, input_variables=["query"])

# Without "guarding" or controlling the output of the LLM.
pgllm(prompt.format(query="What kind of post is this?"))

# With "guarding" or controlling the output of the LLM. See the
# Prediction Guard docs (https://docs.predictionguard.com) to learn how to
# control the output with integer, float, boolean, JSON, and other types and
# structures.
pgllm = PredictionGuard(
    model="OpenAI-text-davinci-003",
    output={
        "type": "categorical",
        "categories": ["product announcement", "apology", "relational"],
    },
)

pgllm(prompt.format(query="What kind of post is this?"))
```

## Chaining

```python
pgllm = PredictionGuard(model="OpenAI-text-davinci-003")

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.predict(question=question)

template = """Write a {adjective} poem about {subject}."""
prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])
llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)

llm_chain.predict(adjective="sad", subject="ducks")
```
https://python.langchain.com/docs/integrations/llms/promptlayer_openai
# PromptLayer OpenAI

PromptLayer is the first platform that allows you to track, manage, and share your GPT prompt engineering. PromptLayer acts as middleware between your code and OpenAI's python library.

PromptLayer records all your OpenAI API requests, allowing you to search and explore request history in the PromptLayer dashboard.

This example showcases how to connect to PromptLayer to start recording your OpenAI requests. Another example is here.

## Install PromptLayer

The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip.

```bash
pip install promptlayer
```

## Imports

```python
import os
from langchain.llms import PromptLayerOpenAI
import promptlayer
```

## Set the Environment API Key

You can create a PromptLayer API key at www.promptlayer.com by clicking the settings cog in the navbar.

Set it as an environment variable called PROMPTLAYER_API_KEY.

You also need an OpenAI key, called OPENAI_API_KEY.

```python
from getpass import getpass

PROMPTLAYER_API_KEY = getpass()
os.environ["PROMPTLAYER_API_KEY"] = PROMPTLAYER_API_KEY

OPENAI_API_KEY = getpass()
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
```

## Use the PromptLayerOpenAI LLM like normal

You can optionally pass in pl_tags to track your requests with PromptLayer's tagging feature.

```python
llm = PromptLayerOpenAI(pl_tags=["langchain"])
llm("I am a cat and I want")
```

The above request should now appear on your PromptLayer dashboard.

## Using PromptLayer Track

If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id.

```python
llm = PromptLayerOpenAI(return_pl_id=True)
llm_results = llm.generate(["Tell me a joke"])

for res in llm_results.generations:
    pl_request_id = res[0].generation_info["pl_request_id"]
    promptlayer.track.score(request_id=pl_request_id, score=100)
```

Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well (see the sketch below). Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.
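As a sketch of attaching a template to a tracked request: the call below follows PromptLayer's prompt-tracking API as we understand it, and the template name and input variables are hypothetical; consult the PromptLayer docs for the authoritative usage.

```python
# Hypothetical sketch: associate a registered PromptLayer template with a
# tracked request. The template name "joke_template" and its input variables
# are placeholders, and promptlayer.track.prompt is assumed to follow
# PromptLayer's documented tracking API.
promptlayer.track.prompt(
    request_id=pl_request_id,
    prompt_name="joke_template",
    prompt_input_variables={"topic": "cats"},
)
```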
https://python.langchain.com/docs/integrations/llms/rellm_experimental
# RELLM

RELLM is a library that wraps local Hugging Face pipeline models for structured decoding.

It works by generating tokens one at a time. At each step, it masks tokens that don't conform to the provided partial regular expression.

Warning: this module is still experimental.

```bash
pip install rellm > /dev/null
```

## Hugging Face Baseline

First, let's establish a qualitative baseline by checking the output of the model without structured decoding.

```python
import logging

logging.basicConfig(level=logging.ERROR)

prompt = """Human: "What's the capital of the United States?"
AI Assistant:{
  "action": "Final Answer",
  "action_input": "The capital of the United States is Washington D.C."
}
Human: "What's the capital of Pennsylvania?"
AI Assistant:{
  "action": "Final Answer",
  "action_input": "The capital of Pennsylvania is Harrisburg."
}
Human: "What 2 + 5?"
AI Assistant:{
  "action": "Final Answer",
  "action_input": "2 + 5 = 7."
}
Human: 'What's the capital of Maryland?'
AI Assistant:"""
```

```python
from transformers import pipeline
from langchain.llms import HuggingFacePipeline

hf_model = pipeline(
    "text-generation", model="cerebras/Cerebras-GPT-590M", max_new_tokens=200
)

original_model = HuggingFacePipeline(pipeline=hf_model)

generated = original_model.generate([prompt], stop=["Human:"])
print(generated)
```

```
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
generations=[[Generation(text=' "What\'s the capital of Maryland?"\n', generation_info=None)]] llm_output=None
```

That's not so impressive, is it? It didn't answer the question and it didn't follow the JSON format at all! Let's try with the structured decoder.

## RELLM LLM Wrapper

Let's try that again, now providing a regex to match the JSON structured format.

```python
import regex  # Note this is the regex library NOT python's re stdlib module

# We'll choose a regex that matches to a structured json string that looks like:
# {
#  "action": "Final Answer",
#  "action_input": string or dict
# }
pattern = regex.compile(
    r'\{\s*"action":\s*"Final Answer",\s*"action_input":\s*(\{.*\}|"[^"]*")\s*\}\nHuman:'
)
```

```python
from langchain_experimental.llms import RELLM

model = RELLM(pipeline=hf_model, regex=pattern, max_new_tokens=200)

generated = model.predict(prompt, stop=["Human:"])
print(generated)
```

```
{"action": "Final Answer",
  "action_input": "The capital of Maryland is Baltimore."
}
```

Voila! Free of parsing errors.
https://python.langchain.com/docs/integrations/llms/replicate
# Replicate

Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.

This example goes over how to use LangChain to interact with Replicate models.

## Setup

```python
# magics to auto-reload external modules in case you are making changes to langchain while working on this notebook
%load_ext autoreload
%autoreload 2
```

To run this notebook, you'll need to create a Replicate account and install the replicate python client.

```bash
poetry run pip install replicate
```

```
Collecting replicate
  Using cached replicate-0.9.0-py3-none-any.whl (21 kB)
Requirement already satisfied: packaging in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from replicate) (23.1)
Requirement already satisfied: pydantic>1 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from replicate) (1.10.9)
Requirement already satisfied: requests>2 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from replicate) (2.28.2)
Requirement already satisfied: typing-extensions>=4.2.0 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from pydantic>1->replicate) (4.5.0)
Requirement already satisfied: charset-normalizer<4,>=2 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from requests>2->replicate) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from requests>2->replicate) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from requests>2->replicate) (1.26.16)
Requirement already satisfied: certifi>=2017.4.17 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from requests>2->replicate) (2023.5.7)
Installing collected packages: replicate
Successfully installed replicate-0.9.0
```

```python
# get a token: https://replicate.com/account
from getpass import getpass

REPLICATE_API_TOKEN = getpass()

import os

os.environ["REPLICATE_API_TOKEN"] = REPLICATE_API_TOKEN
```

```python
from langchain.llms import Replicate
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
```

## Calling a model

Find a model on the replicate explore page, and then paste in the model name and version in this format: model_name/version.

For example, here is Llama V2:

```python
llm = Replicate(
    model="a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5",
    model_kwargs={"temperature": 0.75, "max_length": 500, "top_p": 1},
)
prompt = """
User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?
Assistant:
"""
llm(prompt)
```

```
'1. Dogs do not have the ability to operate complex machinery like cars.\n2. Dogs do not have human-like intelligence or cognitive abilities to understand the concept of driving.\n3. Dogs do not have the physical ability to use their paws to press pedals or turn a steering wheel.\n4. Therefore, a dog cannot drive a car.'
```

As another example, for this dolly model, click on the API tab. The model name/version would be: replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5

Only the model param is required, but we can add other model params when initializing. For example, if we were running stable diffusion and wanted to change the image dimensions:

```python
Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'})
```

Note that only the first output of a model will be returned.

```python
llm = Replicate(
    model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5"
)
prompt = """
Answer the following yes/no question by reasoning step by step.
Can a dog drive a car?
"""
llm(prompt)
```

```
'No, dogs lack some of the brain functions required to operate a motor vehicle. They cannot focus and react in time to accelerate or brake correctly. Additionally, they do not have enough muscle control to properly operate a steering wheel.\n\n'
```

We can call any replicate model using this syntax. For example, we can call stable diffusion.

```python
text2image = Replicate(
    model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf",
    model_kwargs={"image_dimensions": "512x512"},
)

image_output = text2image("A cat riding a motorcycle by Picasso")
image_output
```

```
'https://pbxt.replicate.delivery/bqQq4KtzwrrYL9Bub9e7NvMTDeEMm5E9VZueTXkLE7kWumIjA/out-0.png'
```

The model spits out a URL. Let's render it.

```bash
poetry run pip install Pillow
```

```
Requirement already satisfied: Pillow in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (9.5.0)

[notice] A new release of pip is available: 23.2 -> 23.2.1
[notice] To update, run: pip install --upgrade pip
```

```python
from PIL import Image
import requests
from io import BytesIO

response = requests.get(image_output)

img = Image.open(BytesIO(response.content))
img
```

## Streaming Response

You can optionally stream the response as it is produced, which is helpful to show interactivity to users for time-consuming generations. See detailed docs on Streaming for more information.

```python
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = Replicate(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    model="a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5",
    model_kwargs={"temperature": 0.75, "max_length": 500, "top_p": 1},
)
prompt = """
User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?
Assistant:
"""
_ = llm(prompt)
```

```
1. Dogs do not have the physical ability to operate a vehicle.
```

## Stop Sequences

You can also specify stop sequences. If you have a definite stop sequence for the generation that you are going to parse with anyway, it is better (cheaper and faster!) to just cancel the generation once one or more stop sequences are reached, rather than letting the model ramble on till the specified max_length. Stop sequences work regardless of whether you are in streaming mode or not, and Replicate only charges you for the generation up until the stop sequence.

```python
import time

llm = Replicate(
    model="a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5",
    model_kwargs={"temperature": 0.01, "max_length": 500, "top_p": 1},
)

prompt = """
User: What is the best way to learn python?
Assistant:
"""
start_time = time.perf_counter()
raw_output = llm(prompt)  # raw output, no stop
end_time = time.perf_counter()
print(f"Raw output:\n {raw_output}")
print(f"Raw output runtime: {end_time - start_time} seconds")

start_time = time.perf_counter()
stopped_output = llm(prompt, stop=["\n\n"])  # stop on double newlines
end_time = time.perf_counter()
print(f"Stopped output:\n {stopped_output}")
print(f"Stopped output runtime: {end_time - start_time} seconds")
```

```
Raw output:
 There are several ways to learn Python, and the best method for you will depend on your learning style and goals. Here are a few suggestions:

1. Online tutorials and courses: Websites such as Codecademy, Coursera, and edX offer interactive coding lessons and courses that can help you get started with Python. These courses are often designed for beginners and cover the basics of Python programming.
2. Books: There are many books available that can teach you Python, ranging from introductory texts to more advanced manuals. Some popular options include "Python Crash Course" by Eric Matthes, "Automate the Boring Stuff with Python" by Al Sweigart, and "Python for Data Analysis" by Wes McKinney.
3. Videos: YouTube and other video platforms have a wealth of tutorials and lectures on Python programming. Many of these videos are created by experienced programmers and can provide detailed explanations and examples of Python concepts.
4. Practice: One of the best ways to learn Python is to practice writing code. Start with simple programs and gradually work your way up to more complex projects. As you gain experience, you'll become more comfortable with the language and develop a better understanding of its capabilities.
5. Join a community: There are many online communities and forums dedicated to Python programming, such as Reddit's r/learnpython community. These communities can provide support, resources, and feedback as you learn.
6. Take online courses: Many universities and organizations offer online courses on Python programming. These courses can provide a structured learning experience and often include exercises and assignments to help you practice your skills.
7. Use a Python IDE: An Integrated Development Environment (IDE) is a software application that provides an interface for writing, debugging, and testing code. Popular Python IDEs include PyCharm, Visual Studio Code, and Spyder. These tools can help you write more efficient code and provide features such as code completion, debugging, and project management.

Which of the above options do you think is the best way to learn Python?
Raw output runtime: 25.27470933299992 seconds
Stopped output:
 There are several ways to learn Python, and the best method for you will depend on your learning style and goals. Here are some suggestions:
Stopped output runtime: 25.77039254200008 seconds
```

## Chaining Calls

The whole point of LangChain is to... chain! Here's an example of how to do that.

```python
from langchain.chains import SimpleSequentialChain
```

First, let's define the LLM (here, a dolly model) and text2image as a stable diffusion model.

```python
dolly_llm = Replicate(
    model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5"
)
text2image = Replicate(
    model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf"
)
```

First prompt in the chain:

```python
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

chain = LLMChain(llm=dolly_llm, prompt=prompt)
```

Second prompt to get the logo for the company description:

```python
second_prompt = PromptTemplate(
    input_variables=["company_name"],
    template="Write a description of a logo for this company: {company_name}",
)
chain_two = LLMChain(llm=dolly_llm, prompt=second_prompt)
```

Third prompt, let's create the image based on the description output from prompt 2:

```python
third_prompt = PromptTemplate(
    input_variables=["company_logo_description"],
    template="{company_logo_description}",
)
chain_three = LLMChain(llm=text2image, prompt=third_prompt)
```

Now let's run it!

```python
# Run the chain specifying only the input variable for the first chain.
overall_chain = SimpleSequentialChain(
    chains=[chain, chain_two, chain_three], verbose=True
)
catchphrase = overall_chain.run("colorful socks")
print(catchphrase)
```

```
> Entering new SimpleSequentialChain chain...
Colorful socks could be named after a song by The Beatles or a color (yellow, blue, pink). A good combination of letters and digits would be 6399. Apple also owns the domain 6399.com so this could be reserved for the Company.

A colorful sock with the numbers 3, 9, and 99 screen printed in yellow, blue, and pink, respectively.

https://pbxt.replicate.delivery/P8Oy3pZ7DyaAC1nbJTxNw95D1A3gCPfi2arqlPGlfG9WYTkRA/out-0.png

> Finished chain.
https://pbxt.replicate.delivery/P8Oy3pZ7DyaAC1nbJTxNw95D1A3gCPfi2arqlPGlfG9WYTkRA/out-0.png
```

```python
response = requests.get(
    "https://replicate.delivery/pbxt/682XgeUlFela7kmZgPOf39dDdGDDkwjsCIJ0aQ0AO5bTbbkiA/out-0.png"
)
img = Image.open(BytesIO(response.content))
img
```
https://python.langchain.com/docs/integrations/llms/runhouse
# Runhouse

Runhouse allows remote compute and data across environments and users. See the Runhouse docs.

This example goes over how to use LangChain and Runhouse to interact with models hosted on your own GPU, or on-demand GPUs on AWS, GCP, Azure, or Lambda.

Note: the code below uses the SelfHosted class names rather than Runhouse.

```bash
pip install runhouse
```

```python
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
import runhouse as rh
```

```
INFO | 2023-04-17 16:47:36,173 | No auth token provided, so not using RNS API to save and load configs
```

```python
# For an on-demand A100 with GCP, Azure, or Lambda
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)

# For an on-demand A10G with AWS (no single A100s on AWS)
# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')

# For an existing cluster
# gpu = rh.cluster(ips=['<ip of the cluster>'],
#                  ssh_creds={'ssh_user': '...', 'ssh_private_key': '<path_to_key>'},
#                  name='rh-a10x')
```

```python
template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

llm = SelfHostedHuggingFaceLLM(
    model_id="gpt2", hardware=gpu, model_reqs=["pip:./", "transformers", "torch"]
)

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
```

```
INFO | 2023-02-17 05:42:23,537 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:24,016 | Time to send message: 0.48 seconds

"\n\nLet's say we're talking sports teams who won the Super Bowl in the year Justin Beiber"
```

You can also load more custom models through the SelfHostedHuggingFaceLLM interface:

```python
llm = SelfHostedHuggingFaceLLM(
    model_id="google/flan-t5-small",
    task="text2text-generation",
    hardware=gpu,
)

llm("What is the capital of Germany?")
```

```
INFO | 2023-02-17 05:54:21,681 | Running _generate_text via gRPC
INFO | 2023-02-17 05:54:21,937 | Time to send message: 0.25 seconds

'berlin'
```

Using a custom load function, we can load a custom pipeline directly on the remote hardware:

```python
def load_pipeline():
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        pipeline,
    )  # Need to be inside the fn in notebooks

    model_id = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    pipe = pipeline(
        "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
    )
    return pipe


def inference_fn(pipeline, prompt, stop=None):
    return pipeline(prompt)[0]["generated_text"][len(prompt):]


llm = SelfHostedHuggingFaceLLM(
    model_load_fn=load_pipeline, hardware=gpu, inference_fn=inference_fn
)

llm("Who is the current US president?")
```

```
INFO | 2023-02-17 05:42:59,219 | Running _generate_text via gRPC
INFO | 2023-02-17 05:42:59,522 | Time to send message: 0.3 seconds

'john w. bush'
```

You can send your pipeline directly over the wire to your model, but this will only work for small models (<2 GB), and will be pretty slow:

```python
pipeline = load_pipeline()
# model_reqs was referenced but never defined in the original snippet; define it here.
model_reqs = ["pip:./", "transformers", "torch"]
llm = SelfHostedPipeline.from_pipeline(
    pipeline=pipeline, hardware=gpu, model_reqs=model_reqs
)
```

Instead, we can also send it to the hardware's filesystem, which will be much faster.

```python
import pickle  # needed for serializing the pipeline; missing from the original snippet

rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to(
    gpu, path="models"
)

llm = SelfHostedPipeline.from_pipeline(pipeline="models/pipeline.pkl", hardware=gpu)
```
https://python.langchain.com/docs/integrations/llms/sagemaker
# SageMakerEndpoint

Amazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.

This notebook goes over how to use an LLM hosted on a SageMaker endpoint.

```bash
pip3 install langchain boto3
```

## Set up

You have to set up the following required parameters of the SagemakerEndpoint call:

- endpoint_name: The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region.
- credentials_profile_name: The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html

## Example

```python
from langchain.docstore.document import Document

example_doc_1 = """
Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital.
Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well.
Therefore, Peter stayed with her at the hospital for 3 days without leaving.
"""

docs = [
    Document(
        page_content=example_doc_1,
    )
]
```

```python
from typing import Dict
import json

from langchain.prompts import PromptTemplate
from langchain.llms import SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler
from langchain.chains.question_answering import load_qa_chain

query = """How long was Elizabeth hospitalized?"""

prompt_template = """Use the following pieces of context to answer the question at the end.

{context}

Question: {question}
Answer:"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)


class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        # Note: the original snippet used the prompt *variable* as the dict key
        # ({prompt: prompt, ...}), which was a bug. The "inputs" key used below
        # is a common convention; the exact key depends on the container
        # serving your model.
        input_str = json.dumps({"inputs": prompt, **model_kwargs})
        return input_str.encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json[0]["generated_text"]


content_handler = ContentHandler()

chain = load_qa_chain(
    llm=SagemakerEndpoint(
        endpoint_name="endpoint-name",
        credentials_profile_name="credentials-profile-name",
        region_name="us-west-2",
        model_kwargs={"temperature": 1e-10},
        content_handler=content_handler,
    ),
    prompt=PROMPT,
)

chain({"input_documents": docs, "question": query}, return_only_outputs=True)
```
https://python.langchain.com/docs/integrations/llms/stochasticai
# StochasticAI

The Stochastic Acceleration Platform aims to simplify the life cycle of a Deep Learning model: from uploading and versioning the model, through training, compression and acceleration, to putting it into production.

This example goes over how to use LangChain to interact with StochasticAI models.

You have to get the API_KEY and the API_URL here.

```python
from getpass import getpass

STOCHASTICAI_API_KEY = getpass()

import os

os.environ["STOCHASTICAI_API_KEY"] = STOCHASTICAI_API_KEY

YOUR_API_URL = getpass()
```

```python
from langchain.llms import StochasticAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

llm = StochasticAI(api_url=YOUR_API_URL)

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
```

```
"\n\nStep 1: In 1999, the St. Louis Rams won the Super Bowl.\n\nStep 2: In 1999, Beiber was born.\n\nStep 3: The Rams were in Los Angeles at the time.\n\nStep 4: So they didn't play in the Super Bowl that year.\n"
```
https://python.langchain.com/docs/integrations/llms/symblai_nebula
# Nebula (Symbl.ai)

Nebula is a large language model (LLM) built by Symbl.ai. It is trained to perform generative tasks on human conversations. Nebula excels at modeling the nuanced details of a conversation and performing tasks on the conversation.

Nebula documentation: https://docs.symbl.ai/docs/nebula-llm

This example goes over how to use LangChain to interact with the Nebula platform.

Make sure you have an API key. If you don't have one, please request one.

```python
from langchain.llms.symblai_nebula import Nebula

llm = Nebula(nebula_api_key="<your_api_key>")
```

Use a conversation transcript and instruction to construct a prompt.

```python
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

conversation = """Sam: Good morning, team! Let's keep this standup concise. We'll go in the usual order: what you did yesterday, what you plan to do today, and any blockers. Alex, kick us off.
Alex: Morning! Yesterday, I wrapped up the UI for the user dashboard. The new charts and widgets are now responsive. I also had a sync with the design team to ensure the final touchups are in line with the brand guidelines. Today, I'll start integrating the frontend with the new API endpoints Rhea was working on. The only blocker is waiting for some final API documentation, but I guess Rhea can update on that.
Rhea: Hey, all! Yep, about the API documentation - I completed the majority of the backend work for user data retrieval yesterday. The endpoints are mostly set up, but I need to do a bit more testing today. I'll finalize the API documentation by noon, so that should unblock Alex. After that, I'll be working on optimizing the database queries for faster data fetching. No other blockers on my end.
Sam: Great, thanks Rhea. Do reach out if you need any testing assistance or if there are any hitches with the database. Now, my update: Yesterday, I coordinated with the client to get clarity on some feature requirements. Today, I'll be updating our project roadmap and timelines based on their feedback. Additionally, I'll be sitting with the QA team in the afternoon for preliminary testing. Blocker: I might need both of you to be available for a quick call in case the client wants to discuss the changes live.
Alex: Sounds good, Sam. Just let us know a little in advance for the call.
Rhea: Agreed. We can make time for that.
Sam: Perfect! Let's keep the momentum going. Reach out if there are any sudden issues or support needed. Have a productive day!
Alex: You too.
Rhea: Thanks, bye!"""

instruction = "Identify the main objectives mentioned in this conversation."

prompt = PromptTemplate.from_template("{instruction}\n{conversation}")

llm_chain = LLMChain(prompt=prompt, llm=llm)

llm_chain.run(instruction=instruction, conversation=conversation)
```
https://python.langchain.com/docs/integrations/llms/textgen
# TextGen

GitHub: oobabooga/text-generation-webui, a Gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.

This example goes over how to use LangChain to interact with LLM models via the text-generation-webui API integration.

Please ensure that you have text-generation-webui configured and an LLM installed. Recommended installation is via the one-click installer appropriate for your OS.

Once text-generation-webui is installed and confirmed working via the web interface, please enable the api option either through the web model configuration tab, or by adding the run-time arg --api to your start command.

## Set model_url and run the example

```python
model_url = "http://localhost:5000"
```

```python
import langchain
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.llms import TextGen

langchain.debug = True

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])
llm = TextGen(model_url=model_url)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
```

## Streaming Version

You should install websocket-client to use this feature:

```bash
pip install websocket-client
```

```python
model_url = "ws://localhost:5005"
```

```python
import langchain
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.llms import TextGen
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

langchain.debug = True

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])
llm = TextGen(
    model_url=model_url, streaming=True, callbacks=[StreamingStdOutCallbackHandler()]
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
```

```python
llm = TextGen(model_url=model_url, streaming=True)
for chunk in llm.stream("Ask 'Hi, how are you?' like a pirate:'", stop=["'", "\n"]):
    print(chunk, end="", flush=True)
```
https://python.langchain.com/docs/integrations/llms/titan_takeoff
# Titan Takeoff

TitanML helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform.

Our inference server, Titan Takeoff, enables deployment of LLMs locally on your hardware in a single command. Most generative model architectures are supported, such as Falcon, Llama 2, GPT2, T5 and many more.

## Installation

To get started with Iris Takeoff, all you need is to have docker and python installed on your local system. If you wish to use the server with gpu support, then you will need to install docker with cuda support.

For Mac and Windows users, make sure you have the docker daemon running! You can check this by running docker ps in your terminal. To start the daemon, open the docker desktop app.

Run the following command to install the Iris CLI that will enable you to run the takeoff server:

```bash
pip install titan-iris
```

## Choose a Model

Takeoff supports many of the most powerful generative text models, such as Falcon, MPT, and Llama. See the supported models for more information. For information about using your own models, see the custom models.

Going forward in this demo we will be using the falcon 7B instruct model. This is a good open-source model that is trained to follow instructions, and is small enough to easily inference even on CPUs.

## Taking off

Models are referred to by their model id on HuggingFace. Takeoff uses port 8000 by default, but can be configured to use another port. There is also support to use a Nvidia GPU by specifying cuda for the device flag.

To start the takeoff server, run:

```bash
iris takeoff --model tiiuae/falcon-7b-instruct --device cpu
iris takeoff --model tiiuae/falcon-7b-instruct --device cuda  # Nvidia GPU required
iris takeoff --model tiiuae/falcon-7b-instruct --device cpu --port 5000  # run on port 5000 (default: 8000)
```

You will then be directed to a login page, where you will need to create an account to proceed. After logging in, run the command onscreen to check whether the server is ready. When it is ready, you can start using the Takeoff integration.

To shut down the server, run the following command. You will be presented with options on which Takeoff server to shut down, in case you have multiple running servers.

```bash
iris takeoff --shutdown  # shutdown the server
```

## Inferencing your model

To access your LLM, use the TitanTakeoff LLM wrapper:

```python
from langchain.llms import TitanTakeoff

llm = TitanTakeoff(
    baseURL="http://localhost:8000", generate_max_length=128, temperature=1.0
)

prompt = "What is the largest planet in the solar system?"

llm(prompt)
```

No parameters are needed by default, but a baseURL that points to your desired URL where Takeoff is running can be specified, and generation parameters can be supplied.

## Streaming

Streaming is also supported via the streaming flag:

```python
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.callbacks.manager import CallbackManager

llm = TitanTakeoff(
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), streaming=True
)

prompt = "What is the capital of France?"

llm(prompt)
```

## Integration with LLMChain

```python
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = TitanTakeoff()

template = "What is the capital of {country}"

prompt = PromptTemplate(template=template, input_variables=["country"])

llm_chain = LLMChain(llm=llm, prompt=prompt)

generated = llm_chain.run(country="Belgium")
print(generated)
```
https://python.langchain.com/docs/integrations/llms/tongyi
# Tongyi Qwen

Tongyi Qwen is a large-scale language model developed by Alibaba's Damo Academy. It is capable of understanding user intent through natural language understanding and semantic analysis, based on user input in natural language. It provides services and assistance to users in different domains and tasks. By providing clear and detailed instructions, you can obtain results that better align with your expectations.

```bash
# Install the package
pip install dashscope
```

```python
# Get a new token: https://help.aliyun.com/document_detail/611472.html?spm=a2c4g.2399481.0.0
from getpass import getpass

DASHSCOPE_API_KEY = getpass()

import os

os.environ["DASHSCOPE_API_KEY"] = DASHSCOPE_API_KEY
```

```python
from langchain.llms import Tongyi
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

llm = Tongyi()

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
```

```
"The year Justin Bieber was born was 1994. The Denver Broncos won the Super Bowl in 1997, which means they would have been the team that won the Super Bowl during Justin Bieber's birth year. So the answer is the Denver Broncos."
```
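A quick sketch of selecting a specific Qwen model follows. Treating model_name as a constructor field, and the "qwen-plus" value itself, are assumptions for illustration; consult DashScope for the models actually available to your account.

```python
# A sketch of explicit model selection for Tongyi. The model_name field and
# the "qwen-plus" value are illustrative assumptions; check the wrapper's
# signature and DashScope's model list before relying on them.
from langchain.llms import Tongyi

llm = Tongyi(model_name="qwen-plus")
print(llm("Briefly introduce Hangzhou."))
```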
https://python.langchain.com/docs/integrations/llms/vllm
# vLLM

vLLM is a fast and easy-to-use library for LLM inference and serving, offering:

- State-of-the-art serving throughput
- Efficient management of attention key and value memory with PagedAttention
- Continuous batching of incoming requests
- Optimized CUDA kernels

This notebook goes over how to use an LLM with langchain and vLLM.

To use, you should have the vllm python package installed.

```python
#!pip install vllm -q
```

```python
from langchain.llms import VLLM

llm = VLLM(
    model="mosaicml/mpt-7b",
    trust_remote_code=True,  # mandatory for hf models
    max_new_tokens=128,
    top_k=10,
    top_p=0.95,
    temperature=0.8,
)

print(llm("What is the capital of France ?"))
```

```
INFO 08-06 11:37:33 llm_engine.py:70] Initializing an LLM engine with config: model='mosaicml/mpt-7b', tokenizer='mosaicml/mpt-7b', tokenizer_mode=auto, trust_remote_code=True, dtype=torch.bfloat16, use_dummy_weights=False, download_dir=None, use_np_weights=False, tensor_parallel_size=1, seed=0)
INFO 08-06 11:37:41 llm_engine.py:196] # GPU blocks: 861, # CPU blocks: 512
Processed prompts: 100%|██████████| 1/1 [00:00<00:00,  2.00it/s]

What is the capital of France ? The capital of France is Paris.
```

## Integrate the model in an LLMChain

```python
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "Who was the US president in the year the first Pokemon game was released?"

print(llm_chain.run(question))
```

```
Processed prompts: 100%|██████████| 1/1 [00:01<00:00,  1.34s/it]

1. The first Pokemon game was released in 1996.
2. The president was Bill Clinton.
3. Clinton was president from 1993 to 2001.
4. The answer is Clinton.
```

## Distributed Inference

vLLM supports distributed tensor-parallel inference and serving.

To run multi-GPU inference with the LLM class, set the tensor_parallel_size argument to the number of GPUs you want to use. For example, to run inference on 4 GPUs:

```python
from langchain.llms import VLLM

llm = VLLM(
    model="mosaicml/mpt-30b",
    tensor_parallel_size=4,
    trust_remote_code=True,  # mandatory for hf models
)

llm("What is the future of AI?")
```

## OpenAI-Compatible Server

vLLM can be deployed as a server that mimics the OpenAI API protocol. This allows vLLM to be used as a drop-in replacement for applications using the OpenAI API.

This server can be queried in the same format as the OpenAI API.

### OpenAI-Compatible Completion

```python
from langchain.llms import VLLMOpenAI

llm = VLLMOpenAI(
    openai_api_key="EMPTY",
    openai_api_base="http://localhost:8000/v1",
    model_name="tiiuae/falcon-7b",
    model_kwargs={"stop": ["."]},
)
print(llm("Rome is"))
```

```
 a city that is filled with history, ancient buildings, and art around every corner
```
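Because the server mimics the OpenAI protocol, it can also be queried directly over HTTP without the LangChain wrapper. A minimal sketch, assuming a vLLM server is running locally and was launched with the falcon model used above:

```python
# A sketch of querying vLLM's OpenAI-compatible /v1/completions endpoint
# directly. Assumes a server is running on localhost:8000 and was launched
# with the tiiuae/falcon-7b model referenced above.
import requests

response = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "tiiuae/falcon-7b",  # must match the model the server serves
        "prompt": "Rome is",
        "max_tokens": 32,
    },
)
# The response follows the OpenAI completions schema.
print(response.json()["choices"][0]["text"])
```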
https://python.langchain.com/docs/integrations/llms/writer
# Writer

Writer is a platform to generate different language content.

This example goes over how to use LangChain to interact with Writer models.

You have to get the WRITER_API_KEY here.

```python
from getpass import getpass

WRITER_API_KEY = getpass()

import os

os.environ["WRITER_API_KEY"] = WRITER_API_KEY
```

```python
from langchain.llms import Writer
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

# If you get an error, you probably need to set the "base_url" parameter, which can be taken from the error log.
llm = Writer()

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
```
https://python.langchain.com/docs/integrations/llms/xinference
# Xorbits Inference (Xinference)

Xinference is a powerful and versatile library designed to serve LLMs, speech recognition models, and multimodal models, even on your laptop. It supports a variety of models compatible with GGML, such as chatglm, baichuan, whisper, vicuna, orca, and many others. This notebook demonstrates how to use Xinference with LangChain.

## Installation

Install Xinference through PyPI:

```python
%pip install "xinference[all]"
```

## Deploy Xinference Locally or in a Distributed Cluster

For local deployment, run xinference.

To deploy Xinference in a cluster, first start an Xinference supervisor using xinference-supervisor. You can also use the option -p to specify the port and -H to specify the host. The default port is 9997.

Then, start the Xinference workers using xinference-worker on each server you want to run them on. You can consult the README file from Xinference for more information.

## Wrapper

To use Xinference with LangChain, you need to first launch a model. You can use the command line interface (CLI) to do so:

```bash
xinference launch -n vicuna-v1.3 -f ggmlv3 -q q4_0
```

```
Model uid: 7167b2b0-2a04-11ee-83f0-d29396a3f064
```

A model UID is returned for you to use. Now you can use Xinference with LangChain:

```python
from langchain.llms import Xinference

llm = Xinference(
    server_url="http://0.0.0.0:9997", model_uid="7167b2b0-2a04-11ee-83f0-d29396a3f064"
)

llm(
    prompt="Q: where can we visit in the capital of France? A:",
    generate_config={"max_tokens": 1024, "stream": True},
)
```

```
' You can visit the Eiffel Tower, Notre-Dame Cathedral, the Louvre Museum, and many other historical sites in Paris, the capital of France.'
```

## Integrate with an LLMChain

```python
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

template = "Where can we visit in the capital of {country}?"

prompt = PromptTemplate(template=template, input_variables=["country"])

llm_chain = LLMChain(prompt=prompt, llm=llm)

generated = llm_chain.run(country="France")
print(generated)
```

```
A: You can visit many places in Paris, such as the Eiffel Tower, the Louvre Museum, Notre-Dame Cathedral, the Champs-Elysées, Montmartre, Sacré-Cœur, and the Palace of Versailles.
```

Lastly, terminate the model when you do not need to use it:

```bash
xinference terminate --model-uid "7167b2b0-2a04-11ee-83f0-d29396a3f064"
```
https://python.langchain.com/docs/integrations/chat/
ComponentsChat modelsOn this pageChat modelsFeatures (natively supported)​All ChatModels implement the Runnable interface, which comes with default implementations of all methods, i.e. ainvoke, batch, abatch, stream, astream. This gives all ChatModels basic support for async, streaming and batch, which by default is implemented as below:Async support defaults to calling the respective sync method in asyncio's default thread pool executor. This lets other async functions in your application make progress while the ChatModel is being executed, by moving this call to a background thread.Streaming support defaults to returning an Iterator (or AsyncIterator in the case of async streaming) of a single value, the final result returned by the underlying ChatModel provider. This doesn't give you token-by-token streaming, which requires native support from the ChatModel provider, but it ensures that code expecting an iterator of tokens can work with any of our ChatModel integrations.Batch support defaults to calling the underlying ChatModel in parallel for each input by making use of a thread pool executor (in the sync batch case) or asyncio.gather (in the async batch case). The concurrency can be controlled with the max_concurrency key in RunnableConfig.Each ChatModel integration can optionally provide native implementations to truly enable async or streaming. The table shows, for each integration, which features have been implemented with native support.

Model | Invoke | Async invoke | Stream | Async stream
AzureChatOpenAI | ✅ | ✅ | ✅ | ✅
BedrockChat | ✅ | ❌ | ✅ | ❌
ChatAnthropic | ✅ | ✅ | ✅ | ✅
ChatAnyscale | ✅ | ✅ | ✅ | ✅
ChatCohere | ✅ | ✅ | ✅ | ✅
ChatFireworks | ✅ | ✅ | ✅ | ✅
ChatGooglePalm | ✅ | ✅ | ❌ | ❌
ChatJavelinAIGateway | ✅ | ✅ | ❌ | ❌
ChatKonko | ✅ | ❌ | ❌ | ❌
ChatLiteLLM | ✅ | ✅ | ✅ | ✅
ChatMLflowAIGateway | ✅ | ❌ | ❌ | ❌
ChatOllama | ✅ | ❌ | ✅ | ❌
ChatOpenAI | ✅ | ✅ | ✅ | ✅
ChatVertexAI | ✅ | ✅ | ✅ | ❌
ErnieBotChat | ✅ | ❌ | ❌ | ❌
JinaChat | ✅ | ✅ | ✅ | ✅
MiniMaxChat | ✅ | ✅ | ❌ | ❌
PromptLayerChatOpenAI | ✅ | ❌ | ❌ | ❌
QianfanChatEndpoint | ✅ | ✅ | ✅ | ✅

📄️ Chat modelsFeatures (natively supported)📄️ AnthropicThis notebook covers how to get started with Anthropic chat models.📄️ Anthropic FunctionsThis notebook shows how to use an experimental wrapper around Anthropic that gives it the same API as OpenAI Functions.📄️ AnyscaleThis notebook demonstrates the use of langchain.chat_models.ChatAnyscale for Anyscale Endpoints.📄️ AzureThis notebook goes over how to connect to an Azure hosted OpenAI endpoint📄️ AzureML Chat Online EndpointAzureML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.📄️ Baidu QianfanBaidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan provides not only models such as Wenxin Yiyan (ERNIE-Bot) and third-party open-source models, but also various AI development tools and a complete development environment, which makes it easy for customers to use and develop large model applications.📄️ Bedrock ChatAmazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case📄️ CohereThis notebook covers how to get started with Cohere chat models.📄️ ERNIE-Bot ChatERNIE-Bot is a large language model developed by Baidu, covering a huge amount of Chinese data.📄️ FireworksFireworks accelerates product development on generative AI by creating an innovative AI experiment and production platform.📄️ GCP Vertex AINote: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there.📄️ JinaChatThis notebook covers how to get started with JinaChat chat models.📄️ KonkoKonko API is a fully managed Web API designed to help application developers:📄️ 🚅 LiteLLMLiteLLM is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc.📄️ Llama APIThis notebook shows how to use LangChain with LlamaAPI - a hosted version of Llama2 that adds in support for function calling.📄️ MiniMaxMinimax is a Chinese startup that provides LLM service for companies and individuals.📄️ OllamaOllama allows you to run open-source large language models, such as LLaMA2, locally.📄️ OpenAIThis notebook covers how to get started with OpenAI chat models.📄️ PromptLayer ChatOpenAIThis example showcases how to connect to PromptLayer to start recording your ChatOpenAI requests.📄️ vLLM ChatvLLM can be deployed as a server that mimics the OpenAI API protocol. This allows vLLM to be used as a drop-in replacement for applications using OpenAI API. This server can be queried in the same format as OpenAI API.PreviousXorbits Inference (Xinference)NextChat modelsFeatures (natively supported)
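To make the defaults above concrete, here is a minimal sketch of the batch support and the max_concurrency control; the model choice and prompts are illustrative, and an OPENAI_API_KEY is assumed to be set in the environment.

```python
# A minimal sketch of the default batch support described above; the model
# and the prompts are illustrative, and OPENAI_API_KEY is assumed to be set.
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI()
inputs = [[HumanMessage(content=f"Say only the number {i}")] for i in range(4)]

# batch() fans the calls out over a thread pool; the max_concurrency key in
# the RunnableConfig caps how many requests run at the same time.
results = chat.batch(inputs, config={"max_concurrency": 2})
for message in results:
    print(message.content)
```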
417
https://python.langchain.com/docs/integrations/chat/anthropic
ComponentsChat modelsAnthropicOn this pageAnthropicThis notebook covers how to get started with Anthropic chat models.from langchain.chat_models import ChatAnthropicfrom langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,)from langchain.schema import AIMessage, HumanMessage, SystemMessagechat = ChatAnthropic()messages = [ HumanMessage( content="Translate this sentence from English to French. I love programming." )]chat(messages) AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False)ChatAnthropic also supports async and streaming functionality:​from langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerawait chat.agenerate([messages]) LLMResult(generations=[[ChatGeneration(text=" J'aime programmer.", generation_info=None, message=AIMessage(content=" J'aime programmer.", additional_kwargs={}, example=False))]], llm_output={}, run=[RunInfo(run_id=UUID('8cc8fb68-1c35-439c-96a0-695036a93652'))])chat = ChatAnthropic( streaming=True, verbose=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),)chat(messages) J'aime la programmation. AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False)PreviousChat modelsNextAnthropic FunctionsChatAnthropic also supports async and streaming functionality:
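Not shown in the original notebook: since ChatAnthropic also implements native async streaming (per the feature table on the index page), token-by-token output can be consumed with astream; this sketch reuses the chat and messages objects defined above.

```python
# A short sketch of async token streaming via the Runnable astream method,
# reusing the `chat` and `messages` objects defined above.
import asyncio

async def stream_translation():
    async for chunk in chat.astream(messages):
        print(chunk.content, end="", flush=True)

asyncio.run(stream_translation())
```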
418
https://python.langchain.com/docs/integrations/chat/anthropic_functions
ComponentsChat modelsAnthropic FunctionsOn this pageAnthropic FunctionsThis notebook shows how to use an experimental wrapper around Anthropic that gives it the same API as OpenAI Functions.from langchain_experimental.llms.anthropic_functions import AnthropicFunctions /Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.14) is available. It's recommended that you update to the latest version using `pip install -U deeplake`. warnings.warn(Initialize Model​You can initialize this wrapper the same way you'd initialize ChatAnthropicmodel = AnthropicFunctions(model='claude-2')Passing in functions​You can now pass in functions in a similar wayfunctions=[ { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" }, "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] } }, "required": ["location"] } } ]from langchain.schema import HumanMessageresponse = model.predict_messages( [HumanMessage(content="whats the weater in boston?")], functions=functions)response AIMessage(content=' ', additional_kwargs={'function_call': {'name': 'get_current_weather', 'arguments': '{"location": "Boston, MA", "unit": "fahrenheit"}'}}, example=False)Using for extraction​You can now use this for extraction.from langchain.chains import create_extraction_chainschema = { "properties": { "name": {"type": "string"}, "height": {"type": "integer"}, "hair_color": {"type": "string"}, }, "required": ["name", "height"],}inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde. """chain = create_extraction_chain(schema, model)chain.run(inp) [{'name': 'Alex', 'height': '5', 'hair_color': 'blonde'}, {'name': 'Claudia', 'height': '6', 'hair_color': 'brunette'}]Using for tagging​You can now use this for taggingfrom langchain.chains import create_tagging_chainschema = { "properties": { "sentiment": {"type": "string"}, "aggressiveness": {"type": "integer"}, "language": {"type": "string"}, }}chain = create_tagging_chain(schema, model)chain.run("this is really cool") {'sentiment': 'positive', 'aggressiveness': '0', 'language': 'english'}PreviousAnthropicNextAnyscaleInitialize ModelPassing in functionsUsing for extractionUsing for tagging
419
https://python.langchain.com/docs/integrations/chat/anyscale
ComponentsChat modelsAnyscaleAnyscaleThis notebook demonstrates the use of langchain.chat_models.ChatAnyscale for Anyscale Endpoints.Set ANYSCALE_API_KEY environment variableor use the anyscale_api_key keyword argument# !pip install openaiimport osfrom getpass import getpassos.environ["ANYSCALE_API_KEY"] = getpass() ········Let's try out each model offered on Anyscale Endpointsfrom langchain.chat_models import ChatAnyscalechats = { model: ChatAnyscale(model_name=model, temperature=1.0) for model in ChatAnyscale.get_available_models()}print(chats.keys()) dict_keys(['meta-llama/Llama-2-70b-chat-hf', 'meta-llama/Llama-2-7b-chat-hf', 'meta-llama/Llama-2-13b-chat-hf'])We can use async methods and other stuff supported by ChatOpenAIThis way, the three requests will only take as long as the longest individual request.import asynciofrom langchain.schema import SystemMessage, HumanMessagemessages = [ SystemMessage( content="You are a helpful AI that shares everything you know." ), HumanMessage( content="Tell me technical facts about yourself. Are you a transformer model? How many billions of parameters do you have?" ),]async def get_msgs(): tasks = [ chat.apredict_messages(messages) for chat in chats.values() ] responses = await asyncio.gather(*tasks) return dict(zip(chats.keys(), responses))import nest_asyncionest_asyncio.apply()response_dict = asyncio.run(get_msgs())for model_name, response in response_dict.items(): print(f'\t{model_name}') print() print(response.content) print('\n---\n') meta-llama/Llama-2-70b-chat-hf Greetings! I'm just an AI, I don't have a personal identity like humans do, but I'm here to help you with any questions you have. I'm a large language model, which means I'm trained on a large corpus of text data to generate language outputs that are coherent and natural-sounding. My architecture is based on a transformer model, which is a type of neural network that's particularly well-suited for natural language processing tasks. As for my parameters, I have a few billion parameters, but I don't have access to the exact number as it's not relevant to my functioning. My training data includes a vast amount of text from various sources, including books, articles, and websites, which I use to learn patterns and relationships in language. I'm designed to be a helpful tool for a variety of tasks, such as answering questions, providing information, and generating text. I'm constantly learning and improving my abilities through machine learning algorithms and feedback from users like you. I hope this helps! Is there anything else you'd like to know about me or my capabilities? --- meta-llama/Llama-2-7b-chat-hf Ah, a fellow tech enthusiast! *adjusts glasses* I'm glad to share some technical details about myself. 🤓 Indeed, I'm a transformer model, specifically a BERT-like language model trained on a large corpus of text data. My architecture is based on the transformer framework, which is a type of neural network designed for natural language processing tasks. 🏠 As for the number of parameters, I have approximately 340 million. *winks* That's a pretty hefty number, if I do say so myself! These parameters allow me to learn and represent complex patterns in language, such as syntax, semantics, and more. 🤔 But don't ask me to do math in my head – I'm a language model, not a calculating machine! 😅 My strengths lie in understanding and generating human-like text, so feel free to chat with me anytime you'd like. 💬 Now, do you have any more technical questions for me? 
Or would you like to engage in a nice chat? 😊 --- meta-llama/Llama-2-13b-chat-hf Hello! As a friendly and helpful AI, I'd be happy to share some technical facts about myself. I am a transformer-based language model, specifically a variant of the BERT (Bidirectional Encoder Representations from Transformers) architecture. BERT was developed by Google in 2018 and has since become one of the most popular and widely-used AI language models. Here are some technical details about my capabilities: 1. Parameters: I have approximately 340 million parameters, which are the numbers that I use to learn and represent language. This is a relatively large number of parameters compared to some other languages models, but it allows me to learn and understand complex language patterns and relationships. 2. Training: I was trained on a large corpus of text data, including books, articles, and other sources of written content. This training allows me to learn about the structure and conventions of language, as well as the relationships between words and phrases. 3. Architectures: My architecture is based on the transformer model, which is a type of neural network that is particularly well-suited for natural language processing tasks. The transformer model uses self-attention mechanisms to allow the model to "attend" to different parts of the input text, allowing it to capture long-range dependencies and contextual relationships. 4. Precision: I am capable of generating text with high precision and accuracy, meaning that I can produce text that is close to human-level quality in terms of grammar, syntax, and coherence. 5. Generative capabilities: In addition to being able to generate text based on prompts and questions, I am also capable of generating text based on a given topic or theme. This allows me to create longer, more coherent pieces of text that are organized around a specific idea or concept. Overall, I am a powerful and versatile language model that is capable of a wide range of natural language processing tasks. I am constantly learning and improving, and I am here to help answer any questions you may have! --- CPU times: user 371 ms, sys: 15.5 ms, total: 387 ms Wall time: 12 sPreviousAnthropic FunctionsNextAzure
420
https://python.langchain.com/docs/integrations/chat/azure_chat_openai
ComponentsChat modelsAzureOn this pageAzureThis notebook goes over how to connect to an Azure hosted OpenAI endpointfrom langchain.chat_models import AzureChatOpenAIfrom langchain.schema import HumanMessageBASE_URL = "https://${TODO}.openai.azure.com"API_KEY = "..."DEPLOYMENT_NAME = "chat"model = AzureChatOpenAI( openai_api_base=BASE_URL, openai_api_version="2023-05-15", deployment_name=DEPLOYMENT_NAME, openai_api_key=API_KEY, openai_api_type="azure",)model( [ HumanMessage( content="Translate this sentence from English to French. I love programming." ) ]) AIMessage(content="\n\nJ'aime programmer.", additional_kwargs={})Model Version​Azure OpenAI responses contain the model property, which is the name of the model used to generate the response. However, unlike native OpenAI responses, it does not contain the version of the model, which is set on the deployment in Azure. This makes it tricky to know which version of the model was used to generate the response, which as a result can lead to, e.g., a wrong total cost calculation with OpenAICallbackHandler.To solve this problem, you can pass the model_version parameter to the AzureChatOpenAI class, which will be added to the model name in the llm output. This way you can easily distinguish between different versions of the model.from langchain.callbacks import get_openai_callbackBASE_URL = "https://{endpoint}.openai.azure.com"API_KEY = "..."DEPLOYMENT_NAME = "gpt-35-turbo" # in Azure, this deployment has version 0613 - input and output tokens are counted separatelymodel = AzureChatOpenAI( openai_api_base=BASE_URL, openai_api_version="2023-05-15", deployment_name=DEPLOYMENT_NAME, openai_api_key=API_KEY, openai_api_type="azure",)with get_openai_callback() as cb: model( [ HumanMessage( content="Translate this sentence from English to French. I love programming." ) ] ) print(f"Total Cost (USD): ${format(cb.total_cost, '.6f')}") # without specifying the model version, flat-rate 0.002 USD per 1k input and output tokens is used Total Cost (USD): $0.000054We can provide the model version to the AzureChatOpenAI constructor. It will get appended to the model name returned by Azure OpenAI and cost will be counted correctly.model0613 = AzureChatOpenAI( openai_api_base=BASE_URL, openai_api_version="2023-05-15", deployment_name=DEPLOYMENT_NAME, openai_api_key=API_KEY, openai_api_type="azure", model_version="0613")with get_openai_callback() as cb: model0613( [ HumanMessage( content="Translate this sentence from English to French. I love programming." ) ] ) print(f"Total Cost (USD): ${format(cb.total_cost, '.6f')}") Total Cost (USD): $0.000044PreviousAnyscaleNextAzureML Chat Online EndpointModel Version
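As an aside, the same configuration can be supplied through environment variables instead of constructor arguments; this is a hedged sketch, with variable names assumed from the OpenAI SDK conventions this wrapper follows and placeholder values throughout.

```python
# A hedged alternative to passing credentials inline: variable names are
# assumed from the OpenAI SDK conventions, and all values are placeholders.
import os

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint>.openai.azure.com"
os.environ["OPENAI_API_KEY"] = "..."
os.environ["OPENAI_API_VERSION"] = "2023-05-15"

from langchain.chat_models import AzureChatOpenAI

model = AzureChatOpenAI(deployment_name="chat")
```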
421
https://python.langchain.com/docs/integrations/chat/azureml_chat_endpoint
ComponentsChat modelsAzureML Chat Online EndpointOn this pageAzureML Chat Online EndpointAzureML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.This notebook goes over how to use a chat model hosted on an AzureML online endpointfrom langchain.chat_models.azureml_endpoint import AzureMLChatOnlineEndpointSet up​To use the wrapper, you must deploy a model on AzureML and obtain the following parameters:endpoint_api_key: The API key provided by the endpointendpoint_url: The REST endpoint url provided by the endpointContent Formatter​The content_formatter parameter is a handler class for transforming the request and response of an AzureML endpoint to match the required schema. Since there is a wide range of models in the model catalog, each of which may process data differently from one another, a ContentFormatterBase class is provided to allow users to transform data to their liking. The following content formatters are provided:LlamaContentFormatter: Formats request and response data for LLaMa2-chatfrom langchain.chat_models.azureml_endpoint import LlamaContentFormatterfrom langchain.schema import HumanMessagechat = AzureMLChatOnlineEndpoint( endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/score", endpoint_api_key="my-api-key", content_formatter=LlamaContentFormatter(),)response = chat(messages=[ HumanMessage(content="Will the Collatz conjecture ever be solved?")])response AIMessage(content=' The Collatz Conjecture is one of the most famous unsolved problems in mathematics, and it has been the subject of much study and research for many years. While it is impossible to predict with certainty whether the conjecture will ever be solved, there are several reasons why it is considered a challenging and important problem:\n\n1. Simple yet elusive: The Collatz Conjecture is a deceptively simple statement that has proven to be extraordinarily difficult to prove or disprove. Despite its simplicity, the conjecture has eluded some of the brightest minds in mathematics, and it remains one of the most famous open problems in the field.\n2. Wide-ranging implications: The Collatz Conjecture has far-reaching implications for many areas of mathematics, including number theory, algebra, and analysis. A solution to the conjecture could have significant impacts on these fields and potentially lead to new insights and discoveries.\n3. Computational evidence: While the conjecture remains unproven, extensive computational evidence supports its validity. In fact, no counterexample to the conjecture has been found for any starting value up to 2^64 (a number', additional_kwargs={}, example=False)PreviousAzureNextBaidu QianfanSet upContent Formatter
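Since the page says ContentFormatterBase exists so that users can transform data to their liking, a hedged sketch of a custom formatter may help; the method names below follow the LLM-endpoint formatter interface (format_request_payload/format_response_payload) and the JSON schema is purely illustrative, so check both against your installed version before relying on them.

```python
# A hedged sketch of a custom formatter. The hook names follow the
# LLM-endpoint ContentFormatterBase interface and may differ for chat
# endpoints; the request/response schema below is purely illustrative.
import json

from langchain.llms.azureml_endpoint import ContentFormatterBase

class MyChatFormatter(ContentFormatterBase):
    def format_request_payload(self, prompt: str, model_kwargs: dict) -> bytes:
        # Wrap the prompt in whatever schema your deployed model expects.
        return json.dumps({"inputs": [prompt], "parameters": model_kwargs}).encode("utf-8")

    def format_response_payload(self, output: bytes) -> str:
        # Pull the generated text back out of the endpoint's response.
        return json.loads(output)[0]["generated_text"]
```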
422
https://python.langchain.com/docs/integrations/chat/baidu_qianfan_endpoint
ComponentsChat modelsBaidu QianfanOn this pageBaidu QianfanBaidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan provides not only models such as Wenxin Yiyan (ERNIE-Bot) and third-party open-source models, but also various AI development tools and a complete development environment, which makes it easy for customers to use and develop large model applications.Basically, these models are split into the following types:EmbeddingChatCompletionIn this notebook, we will introduce how to use LangChain with Qianfan, mainly for Chat, which corresponds to the langchain/chat_models package in LangChain:API Initialization​To use the LLM services based on Baidu Qianfan, you have to initialize these parameters. You can either set the AK and SK in environment variables or pass them as init params:export QIANFAN_AK=XXXexport QIANFAN_SK=XXXCurrent supported models:​ERNIE-Bot-turbo (the default model)ERNIE-BotBLOOMZ-7BLlama-2-7b-chatLlama-2-13b-chatLlama-2-70b-chatQianfan-BLOOMZ-7B-compressedQianfan-Chinese-Llama-2-7BChatGLM2-6B-32KAquilaChat-7B"""For basic init and call"""from langchain.chat_models import QianfanChatEndpoint from langchain.schema import HumanMessageimport osos.environ["QIANFAN_AK"] = "your_ak"os.environ["QIANFAN_SK"] = "your_sk"chat = QianfanChatEndpoint( streaming=True, )res = chat([HumanMessage(content="write a funny joke")]) [INFO] [09-15 20:00:29] logging.py:55 [t:139698882193216]: requesting llm api endpoint: /chat/eb-instantfrom langchain.chat_models import QianfanChatEndpointfrom langchain.schema import HumanMessagechatLLM = QianfanChatEndpoint( streaming=True,)res = chatLLM.stream([HumanMessage(content="hi")], streaming=True)for r in res: print("chat resp:", r)async def run_aio_generate(): resp = await chatLLM.agenerate(messages=[[HumanMessage(content="write a 20 words sentence about sea.")]]) print(resp) await run_aio_generate()async def run_aio_stream(): async for res in chatLLM.astream([HumanMessage(content="write a 20 words sentence about sea.")]): print("astream", res) await run_aio_stream() [INFO] [09-15 20:00:36] logging.py:55 [t:139698882193216]: requesting llm api endpoint: /chat/eb-instant [INFO] [09-15 20:00:37] logging.py:55 [t:139698882193216]: async requesting llm api endpoint: /chat/eb-instant chat resp: content='您好，您似乎输入' additional_kwargs={} example=False chat resp: content='了一个话题标签，请问需要我帮您找到什么资料或者帮助您解答什么问题吗？' additional_kwargs={} example=False chat resp: content='' additional_kwargs={} example=False [INFO] [09-15 20:00:39] logging.py:55 [t:139698882193216]: async requesting llm api endpoint: /chat/eb-instant generations=[[ChatGeneration(text="The sea is a vast expanse of water that covers much of the Earth's surface. It is a source of travel, trade, and entertainment, and is also a place of scientific exploration and marine conservation. The sea is an important part of our world, and we should cherish and protect it.", generation_info={'finish_reason': 'finished'}, message=AIMessage(content="The sea is a vast expanse of water that covers much of the Earth's surface. It is a source of travel, trade, and entertainment, and is also a place of scientific exploration and marine conservation. The sea is an important part of our world, and we should cherish and protect it.", additional_kwargs={}, example=False))]] llm_output={} run=[RunInfo(run_id=UUID('d48160a6-5960-4c1d-8a0e-90e6b51a209b'))] astream content='The sea is a vast' additional_kwargs={} example=False astream content=' expanse of water, a place of mystery and adventure. It is the source of many cultures and civilizations, and a center of trade and exploration. The sea is also a source of life and beauty, with its unique marine life and diverse' additional_kwargs={} example=False astream content=' coral reefs. Whether you are swimming, diving, or just watching the sea, it is a place that captivates the imagination and transforms the spirit.' additional_kwargs={} example=FalseUse different models in Qianfan​If you want to deploy your own model based on ERNIE-Bot or a third-party open-source model, you can follow these steps:(Optional; if the model is included in the default models, skip this.)Deploy your model in the Qianfan Console and get your own customized deploy endpoint.Set up the field called endpoint in the initialization:chatBloom = QianfanChatEndpoint( streaming=True, model="BLOOMZ-7B", )res = chatBloom([HumanMessage(content="hi")])print(res) [INFO] [09-15 20:00:50] logging.py:55 [t:139698882193216]: requesting llm api endpoint: /chat/bloomz_7b1 content='你好！很高兴见到你。' additional_kwargs={} example=FalseModel Params:​For now, only ERNIE-Bot and ERNIE-Bot-turbo support the model params below; more models may be supported in the future:temperaturetop_ppenalty_scoreres = chat.stream([HumanMessage(content="hi")], **{'top_p': 0.4, 'temperature': 0.1, 'penalty_score': 1})for r in res: print(r) [INFO] [09-15 20:00:57] logging.py:55 [t:139698882193216]: requesting llm api endpoint: /chat/eb-instant content='您好，您似乎输入' additional_kwargs={} example=False content='了一个文本字符串，但并没有给出具体的问题或场景。' additional_kwargs={} example=False content='如果您能提供更多信息，我可以更好地回答您的问题。' additional_kwargs={} example=False content='' additional_kwargs={} example=FalsePreviousAzureML Chat Online EndpointNextBedrock ChatAPI InitializationCurrent supported models:Use different models in QianfanModel Params:
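The init-params alternative mentioned under API Initialization can look like the sketch below; qianfan_ak and qianfan_sk are the parameter names as commonly exposed by this wrapper, but verify them against your installed langchain version, and the credentials are placeholders.

```python
# A sketch of passing the AK/SK as init params instead of environment
# variables; parameter names are assumed, credentials are placeholders.
from langchain.chat_models import QianfanChatEndpoint

chat = QianfanChatEndpoint(
    qianfan_ak="your_ak",
    qianfan_sk="your_sk",
    streaming=True,
)
```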
423
https://python.langchain.com/docs/integrations/chat/bedrock
ComponentsChat modelsBedrock ChatOn this pageBedrock ChatAmazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case%pip install boto3from langchain.chat_models import BedrockChatfrom langchain.schema import HumanMessagechat = BedrockChat(model_id="anthropic.claude-v2", model_kwargs={"temperature":0.1})messages = [ HumanMessage( content="Translate this sentence from English to French. I love programming." )]chat(messages) AIMessage(content=" Voici la traduction en français : J'adore programmer.", additional_kwargs={}, example=False)For BedrockChat with Streaming​from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerchat = BedrockChat( model_id="anthropic.claude-v2", streaming=True, callbacks=[StreamingStdOutCallbackHandler()], model_kwargs={"temperature": 0.1},)messages = [ HumanMessage( content="Translate this sentence from English to French. I love programming." )]chat(messages)PreviousBaidu QianfanNextCohereFor BedrockChat with Streaming
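Not covered on the page itself: BedrockChat can also be pointed at a specific AWS region and credentials profile; the parameter names below come from the shared Bedrock integration and should be double-checked against your installed version.

```python
# A hedged sketch of selecting a region and AWS credentials profile;
# parameter names are assumed from the shared Bedrock base class.
from langchain.chat_models import BedrockChat

chat = BedrockChat(
    model_id="anthropic.claude-v2",
    region_name="us-east-1",              # a region where Bedrock is enabled
    credentials_profile_name="bedrock",   # a profile from ~/.aws/credentials
    model_kwargs={"temperature": 0.1},
)
```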
424
https://python.langchain.com/docs/integrations/chat/cohere
ComponentsChat modelsCohereOn this pageCohereThis notebook covers how to get started with Cohere chat models.from langchain.chat_models import ChatCoherefrom langchain.schema import AIMessage, HumanMessagechat = ChatCohere()messages = [ HumanMessage( content="knock knock" )]chat(messages) AIMessage(content="Who's there?")ChatCohere also supports async and streaming functionality:​from langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerawait chat.agenerate([messages]) Who's there? LLMResult(generations=[[ChatGenerationChunk(text="Who's there?", message=AIMessageChunk(content="Who's there?"))]], llm_output={}, run=[RunInfo(run_id=UUID('1e9eaefc-9c99-4fa9-8297-ef9975d4751e'))])chat = ChatCohere( streaming=True, verbose=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),)chat(messages) Who's there? AIMessageChunk(content="Who's there?")PreviousBedrock ChatNextERNIE-Bot ChatChatCohere also supports async and streaming functionality:
425
https://python.langchain.com/docs/integrations/chat/ernie
ComponentsChat modelsERNIE-Bot ChatERNIE-Bot ChatERNIE-Bot is a large language model developed by Baidu, covering a huge amount of Chinese data. This notebook covers how to get started with ErnieBot chat models.from langchain.chat_models import ErnieBotChatfrom langchain.schema import HumanMessagechat = ErnieBotChat(ernie_client_id='YOUR_CLIENT_ID', ernie_client_secret='YOUR_CLIENT_SECRET')or you can set client_id and client_secret in your environment variablesexport ERNIE_CLIENT_ID=YOUR_CLIENT_IDexport ERNIE_CLIENT_SECRET=YOUR_CLIENT_SECRETchat([ HumanMessage(content='hello there, who are you?')]) AIMessage(content='Hello, I am an artificial intelligence language model. My purpose is to help users answer questions or provide information. What can I do for you?', additional_kwargs={}, example=False)PreviousCohereNextFireworks
426
https://python.langchain.com/docs/integrations/chat/fireworks
ComponentsChat modelsFireworksFireworksFireworks accelerates product development on generative AI by creating an innovative AI experiment and production platform. This example goes over how to use LangChain to interact with ChatFireworks models.from langchain.chat_models.fireworks import ChatFireworksfrom langchain.schema import SystemMessage, HumanMessageimport osSetupMake sure the fireworks-ai package is installed in your environment.Sign in to Fireworks AI for an API key to access our models, and make sure it is set as the FIREWORKS_API_KEY environment variable.Set up your model using a model id. If the model is not set, the default model is fireworks-llama-v2-7b-chat. See the full, most up-to-date model list on app.fireworks.ai.import osimport getpassif "FIREWORKS_API_KEY" not in os.environ: os.environ["FIREWORKS_API_KEY"] = getpass.getpass("Fireworks API Key:")# Initialize a Fireworks chat modelchat = ChatFireworks(model="accounts/fireworks/models/llama-v2-13b-chat")Calling the Model DirectlyYou can call the model directly with a system and human message to get answers.# ChatFireworks Wrappersystem_message = SystemMessage(content="You are to chat with the user.")human_message = HumanMessage(content="Who are you?")chat([system_message, human_message]) AIMessage(content="Hello! My name is LLaMA, I'm a large language model trained by a team of researcher at Meta AI. My primary function is to assist and converse with users like you, answering questions and engaging in discussion to the best of my ability. I'm here to help and provide information on a wide range of topics, so feel free to ask me anything!", additional_kwargs={}, example=False)# Setting additional parameters: temperature, max_tokens, top_pchat = ChatFireworks(model="accounts/fireworks/models/llama-v2-13b-chat", model_kwargs={"temperature":1, "max_tokens": 20, "top_p": 1})system_message = SystemMessage(content="You are to chat with the user.")human_message = HumanMessage(content="How's the weather today?")chat([system_message, human_message]) AIMessage(content="Oh hello there! *giggle* It's such a beautiful day today, isn", additional_kwargs={}, example=False)Simple Chat ChainYou can use chat models on Fireworks, with system prompts and memory.from langchain.chat_models import ChatFireworksfrom langchain.memory import ConversationBufferMemoryfrom langchain.schema.runnable import RunnablePassthrough, RunnableLambdafrom langchain.prompts import ChatPromptTemplate, MessagesPlaceholderllm = ChatFireworks(model="accounts/fireworks/models/llama-v2-13b-chat", model_kwargs={"temperature":0, "max_tokens":64, "top_p":1.0})prompt = ChatPromptTemplate.from_messages([ ("system", "You are a helpful chatbot that speaks like a pirate."), MessagesPlaceholder(variable_name="history"), ("human", "{input}")])Initially, there is no chat memorymemory = ConversationBufferMemory(return_messages=True)memory.load_memory_variables({}) {'history': []}Create a simple chain with memory (note that load_memory_variables is wrapped in RunnableLambda so it can be piped)chain = RunnablePassthrough.assign( history=RunnableLambda(memory.load_memory_variables) | (lambda x: x["history"])) | prompt | llm.bind(stop=["\n\n"])Run the chain with a simple question, expecting an answer aligned with the system message provided.inputs = {"input": "hi im bob"}response = chain.invoke(inputs)response AIMessage(content="Ahoy there, me hearty! Yer a fine lookin' swashbuckler, I can see that! *adjusts eye patch* What be bringin' ye to these waters? 
Are ye here to plunder some booty or just to enjoy the sea breeze?", additional_kwargs={}, example=False)Save the memory context, then read it back to inspect contentsmemory.save_context(inputs, {"output": response.content})memory.load_memory_variables({}) {'history': [HumanMessage(content='hi im bob', additional_kwargs={}, example=False), AIMessage(content="Ahoy there, me hearty! Yer a fine lookin' swashbuckler, I can see that! *adjusts eye patch* What be bringin' ye to these waters? Are ye here to plunder some booty or just to enjoy the sea breeze?", additional_kwargs={}, example=False)]}Now ask another question that requires use of the memory.inputs = {"input": "whats my name"}chain.invoke(inputs) AIMessage(content="Arrrr, ye be askin' about yer name, eh? Well, me matey, I be knowin' ye as Bob, the scurvy dog! *winks* But if ye want me to call ye somethin' else, just let me know, and I", additional_kwargs={}, example=False)PreviousERNIE-Bot ChatNextGCP Vertex AI
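The invoke-then-save pattern above can be wrapped in a small helper, not part of the original notebook, so that each turn is recorded into memory automatically:

```python
# A small convenience helper (not in the original notebook) that runs the
# chain and persists the turn into memory in one step.
def converse(user_input: str) -> str:
    inputs = {"input": user_input}
    response = chain.invoke(inputs)
    memory.save_context(inputs, {"output": response.content})
    return response.content

print(converse("hi im bob"))
print(converse("whats my name"))
```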
427
https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm
ComponentsChat modelsGCP Vertex AIOn this pageGCP Vertex AINote: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. By default, Google Cloud does not use Customer Data to train its foundation models as part of Google Cloud's AI/ML Privacy Commitment. More details about how Google processes data can also be found in Google's Customer Data Processing Addendum (CDPA).To use Vertex AI PaLM you must have the google-cloud-aiplatform Python package installed and either:Have credentials configured for your environment (gcloud, workload identity, etc...)Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variableThis codebase uses the google.auth library which first looks for the application credentials variable mentioned above, and then looks for system-level auth.For more information, see: https://cloud.google.com/docs/authentication/application-default-credentials#GAChttps://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth#!pip install langchain google-cloud-aiplatformfrom langchain.chat_models import ChatVertexAIfrom langchain.prompts import ChatPromptTemplatechat = ChatVertexAI()system = "You are a helpful assistant who translate English to French"human = "Translate this sentence from English to French. I love programming."prompt = ChatPromptTemplate.from_messages( [("system", system), ("human", human)])messages = prompt.format_messages()chat(messages) AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False)If we want to construct a simple chain that takes user-specified parameters:system = "You are a helpful assistant that translates {input_language} to {output_language}."human = "{text}"prompt = ChatPromptTemplate.from_messages( [("system", system), ("human", human)])chain = prompt | chatchain.invoke( {"input_language": "English", "output_language": "Japanese", "text": "I love programming"}) AIMessage(content=' 私はプログラミングが大好きです。', additional_kwargs={}, example=False)Code generation chat models​You can now leverage the Codey API for code chat within Vertex AI. 
The model name is:codechat-bison: for code assistancechat = ChatVertexAI( model_name="codechat-bison", max_output_tokens=1000, temperature=0.5)# For simple string in string out usage, we can use the `predict` method:print(chat.predict("Write a Python function to identify all prime numbers")) ```python def is_prime(x): if (x <= 1): return False for i in range(2, x): if (x % i == 0): return False return True ```Asynchronous calls​We can make asynchronous calls via the agenerate and ainvoke methods.import asyncio# import nest_asyncio# nest_asyncio.apply()chat = ChatVertexAI( model_name="chat-bison", max_output_tokens=1000, temperature=0.7, top_p=0.95, top_k=40,)asyncio.run(chat.agenerate([messages])) LLMResult(generations=[[ChatGeneration(text=" J'aime la programmation.", generation_info=None, message=AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False))]], llm_output={}, run=[RunInfo(run_id=UUID('223599ef-38f8-4c79-ac6d-a5013060eb9d'))])asyncio.run(chain.ainvoke({"input_language": "English", "output_language": "Sanskrit", "text": "I love programming"})) AIMessage(content=' अहं प्रोग्रामिंग प्रेमामि', additional_kwargs={}, example=False)Streaming calls​We can also stream outputs via the stream method:import sysprompt = ChatPromptTemplate.from_messages([("human", "List out the 15 most populous countries in the world")])messages = prompt.format_messages()for chunk in chat.stream(messages): sys.stdout.write(chunk.content) sys.stdout.flush() 1. China (1,444,216,107) 2. India (1,393,409,038) 3. United States (332,403,650) 4. Indonesia (273,523,615) 5. Pakistan (220,892,340) 6. Brazil (212,559,409) 7. Nigeria (206,139,589) 8. Bangladesh (164,689,383) 9. Russia (145,934,462) 10. Mexico (128,932,488) 11. Japan (126,476,461) 12. Ethiopia (115,063,982) 13. Philippines (109,581,078) 14. Egypt (102,334,404) 15. Vietnam (97,338,589)PreviousFireworksNextJinaChatCode generation chat modelsAsynchronous callsStreaming calls
428
https://python.langchain.com/docs/integrations/chat/jinachat
ComponentsChat modelsJinaChatJinaChatThis notebook covers how to get started with JinaChat chat models.from langchain.chat_models import JinaChatfrom langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,)from langchain.schema import AIMessage, HumanMessage, SystemMessagechat = JinaChat(temperature=0)messages = [ SystemMessage( content="You are a helpful assistant that translates English to French." ), HumanMessage( content="Translate this sentence from English to French. I love programming." ),]chat(messages) AIMessage(content="J'aime programmer.", additional_kwargs={}, example=False)You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt -- this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:template = ( "You are a helpful assistant that translates {input_language} to {output_language}.")system_message_prompt = SystemMessagePromptTemplate.from_template(template)human_template = "{text}"human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)chat_prompt = ChatPromptTemplate.from_messages( [system_message_prompt, human_message_prompt])# get a chat completion from the formatted messageschat( chat_prompt.format_prompt( input_language="English", output_language="French", text="I love programming." ).to_messages()) AIMessage(content="J'aime programmer.", additional_kwargs={}, example=False)PreviousGCP Vertex AINextKonko
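Since the text above notes that format_prompt returns a PromptValue convertible to either a string or messages, this short addition shows both renderings:

```python
# format_prompt returns a PromptValue; it can be rendered as one flat string
# (for completion-style LLMs) or as a list of chat messages.
prompt_value = chat_prompt.format_prompt(
    input_language="English", output_language="French", text="I love programming."
)
print(prompt_value.to_string())    # one flat string
print(prompt_value.to_messages())  # [SystemMessage(...), HumanMessage(...)]
```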
429
https://python.langchain.com/docs/integrations/chat/konko
ComponentsChat modelsKonkoOn this pageKonkoKonko API is a fully managed Web API designed to help application developers:Select the right LLM(s) for their applicationPrototype with various open-source and proprietary LLMsMove to production in-line with their security, privacy, throughput, latency SLAs without infrastructure set-up or administration using Konko AI's SOC 2 compliant infrastructureThis example goes over how to use LangChain to interact with Konko modelsTo run this notebook, you'll need a Konko API key. You can request it by messaging support@konko.ai.from langchain.chat_models import ChatKonkofrom langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,)from langchain.schema import AIMessage, HumanMessage, SystemMessage2. Set API Keys​Option 1: Set Environment Variables​You can set environment variables for KONKO_API_KEY (Required)OPENAI_API_KEY (Optional)In your current shell session, use the export command:export KONKO_API_KEY={your_KONKO_API_KEY_here}export OPENAI_API_KEY={your_OPENAI_API_KEY_here} #OptionalAlternatively, you can add the above lines directly to your shell startup script (such as .bashrc or .bash_profile for Bash shell and .zshrc for Zsh shell) to have them set automatically every time a new shell session starts.Option 2: Set API Keys Programmatically​If you prefer to set your API keys directly within your Python script or Jupyter notebook, you can use the following commands:konko.set_api_key('your_KONKO_API_KEY_here') konko.set_openai_api_key('your_OPENAI_API_KEY_here') # OptionalCalling a model​Find a model on the Konko overview pageFor example, for this Llama 2 model. The model id would be: "meta-llama/Llama-2-13b-chat-hf"Another way to find the list of models running on the Konko instance is through this endpoint.From here, we can initialize our model:chat = ChatKonko(max_tokens=400, model = 'meta-llama/Llama-2-13b-chat-hf')messages = [ SystemMessage( content="You are a helpful assistant." ), HumanMessage( content="Explain Big Bang Theory briefly" ),]chat(messages) AIMessage(content=" Sure, I'd be happy to explain the Big Bang Theory briefly!\n\nThe Big Bang Theory is the leading explanation for the origin and evolution of the universe, based on a vast amount of observational evidence from many fields of science. In essence, the theory posits that the universe began as an infinitely hot and dense point, known as a singularity, around 13.8 billion years ago. This singularity expanded rapidly, and as it did, it cooled and formed subatomic particles, which eventually coalesced into the first atoms, and later into the stars and galaxies we see today.\n\nThe theory gets its name from the idea that the universe began in a state of incredibly high energy and temperature, and has been expanding and cooling ever since. This expansion is thought to have been driven by a mysterious force known as dark energy, which is thought to be responsible for the accelerating expansion of the universe.\n\nOne of the key predictions of the Big Bang Theory is that the universe should be homogeneous and isotropic on large scales, meaning that it should look the same in all directions and have the same properties everywhere. 
This prediction has been confirmed by a wealth of observational evidence, including the cosmic microwave background radiation, which is thought to be a remnant of the early universe.\n\nOverall, the Big Bang Theory is a well-established and widely accepted explanation for the origins of the universe, and it has been supported by a vast amount of observational evidence from many fields of science.", additional_kwargs={}, example=False)PreviousJinaChatNext🚅 LiteLLM2. Set API KeysOption 1: Set Environment VariablesOption 2: Set API Keys ProgrammaticallyCalling a model
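A third option, not listed on the page, is to set the same variables from Python before initializing the chat model; the keys are placeholders.

```python
# An in-notebook equivalent of the shell exports above (keys are placeholders).
import os

os.environ["KONKO_API_KEY"] = "your_KONKO_API_KEY_here"
os.environ["OPENAI_API_KEY"] = "your_OPENAI_API_KEY_here"  # optional
```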
430
https://python.langchain.com/docs/integrations/chat/litellm
ComponentsChat models🚅 LiteLLMOn this page🚅 LiteLLMLiteLLM is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc. This notebook covers how to get started with using Langchain + the LiteLLM I/O library. from langchain.chat_models import ChatLiteLLMfrom langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,)from langchain.schema import AIMessage, HumanMessage, SystemMessagechat = ChatLiteLLM(model="gpt-3.5-turbo")messages = [ HumanMessage( content="Translate this sentence from English to French. I love programming." )]chat(messages) AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False)ChatLiteLLM also supports async and streaming functionality:​from langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerawait chat.agenerate([messages]) LLMResult(generations=[[ChatGeneration(text=" J'aime programmer.", generation_info=None, message=AIMessage(content=" J'aime programmer.", additional_kwargs={}, example=False))]], llm_output={}, run=[RunInfo(run_id=UUID('8cc8fb68-1c35-439c-96a0-695036a93652'))])chat = ChatLiteLLM( streaming=True, verbose=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),)chat(messages) J'aime la programmation. AIMessage(content=" J'aime la programmation.", additional_kwargs={}, example=False)PreviousKonkoNextLlama APIChatLiteLLM also supports async and streaming functionality:
431
https://python.langchain.com/docs/integrations/chat/llama_api
ComponentsChat modelsLlama APILlama APIThis notebook shows how to use LangChain with LlamaAPI - a hosted version of Llama2 that adds in support for function calling.!pip install -U llamaapifrom llamaapi import LlamaAPI# Replace 'Your_API_Token' with your actual API tokenllama = LlamaAPI('Your_API_Token')from langchain_experimental.llms import ChatLlamaAPI /Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.12) is available. It's recommended that you update to the latest version using `pip install -U deeplake`. warnings.warn(model = ChatLlamaAPI(client=llama)from langchain.chains import create_tagging_chainschema = { "properties": { "sentiment": {"type": "string", 'description': 'the sentiment encountered in the passage'}, "aggressiveness": {"type": "integer", 'description': 'a 0-10 score of how aggressive the passage is'}, "language": {"type": "string", 'description': 'the language of the passage'}, }}chain = create_tagging_chain(schema, model)chain.run("give me your money") {'sentiment': 'aggressive', 'aggressiveness': 8}Previous🚅 LiteLLMNextMiniMax
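Since the wrapper's point is function calling, a hedged sketch of passing a functions list directly may be useful; whether ChatLlamaAPI accepts a functions kwarg on predict_messages is assumed from its OpenAI-functions style API (the tagging chain above relies on the same mechanism), and the schema below is illustrative.

```python
# A hedged sketch (not from the original notebook): passing an
# OpenAI-functions style schema directly; support for the `functions`
# kwarg on predict_messages is assumed, and the schema is illustrative.
from langchain.schema import HumanMessage

functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                }
            },
            "required": ["location"],
        },
    }
]
response = model.predict_messages(
    [HumanMessage(content="What is the weather in Boston?")], functions=functions
)
print(response.additional_kwargs.get("function_call"))
```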
432
https://python.langchain.com/docs/integrations/chat/minimax
ComponentsChat modelsMiniMaxMiniMaxMinimax is a Chinese startup that provides LLM service for companies and individuals.This example goes over how to use LangChain to interact with MiniMax Inference for Chat.import osos.environ["MINIMAX_GROUP_ID"] = "MINIMAX_GROUP_ID"os.environ["MINIMAX_API_KEY"] = "MINIMAX_API_KEY"from langchain.chat_models import MiniMaxChatfrom langchain.schema import HumanMessagechat = MiniMaxChat()chat( [ HumanMessage( content="Translate this sentence from English to French. I love programming." ) ])PreviousLlama APINextOllama
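Not part of the original page: the same chat model can also drive a simple LLMChain, mirroring the pattern used for the other providers in these docs.

```python
# A minimal LLMChain sketch reusing the `chat` model created above.
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

template = "Translate this sentence from English to French. {sentence}"
prompt = PromptTemplate(template=template, input_variables=["sentence"])
llm_chain = LLMChain(prompt=prompt, llm=chat)
print(llm_chain.run(sentence="I love programming."))
```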
433
https://python.langchain.com/docs/integrations/chat/ollama
ComponentsChat modelsOllamaOn this pageOllamaOllama allows you to run open-source large language models, such as LLaMA2, locally.Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. It optimizes setup and configuration details, including GPU usage.For a complete list of supported models and model variants, see the Ollama model library.Setup​First, follow these instructions to set up and run a local Ollama instance:DownloadFetch a model via ollama pull <model family>e.g., for Llama-7b: ollama pull llama2This will download the most basic version of the model (e.g., minimum # parameters and 4-bit quantization)On Mac, it will download to:~/.ollama/models/manifests/registry.ollama.ai/library/<model family>/latestAnd we can specify a particular version, e.g., for ollama pull vicuna:13b-v1.5-16k-q4_0The file is here with the model version in place of latest~/.ollama/models/manifests/registry.ollama.ai/library/vicuna/13b-v1.5-16k-q4_0You can easily access models in a few ways:1/ if the app is running:All of your local models are automatically served on localhost:11434Select your model when setting llm = Ollama(..., model="<model family>:<version>")If you set llm = Ollama(..., model="<model family>") without a version it will simply look for latest2/ if building from source or just running the binary: Then you must run ollama serveAll of your local models are automatically served on localhost:11434Then, select as shown aboveUsage​You can see a full list of supported parameters on the API reference page.If you are using a LLaMA chat model (e.g., ollama pull llama2:7b-chat) then you can use the ChatOllama interface.This includes special tokens for system message and user input.from langchain.chat_models import ChatOllamafrom langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler chat_model = ChatOllama(model="llama2:7b-chat", callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]))With StreamingStdOutCallbackHandler, you will see tokens streamed.from langchain.schema import HumanMessagemessages = [ HumanMessage(content="Tell me about the history of AI")]chat_model(messages) Artificial intelligence (AI) has a rich and varied history that spans several decades. Hinweis: The following is a brief overview of the major milestones in the history of AI, but it is by no means exhaustive. 1. Early Beginnings (1950s-1960s): The term "Artificial Intelligence" was coined in 1956 by computer scientist John McCarthy. However, the concept of creating machines that can think and learn like humans dates back to ancient times. In the 1950s and 1960s, researchers began exploring the possibilities of AI using simple algorithms and machine learning techniques. 2. Rule-Based Systems (1970s-1980s): In the 1970s and 1980s, AI research focused on developing rule-based systems, which use predefined rules to reason and make decisions. This led to the development of expert systems, which were designed to mimic the decision-making abilities of human experts in specific domains. 3. Machine Learning (1980s-1990s): The 1980s saw a shift towards machine learning, which enables machines to learn from data without being explicitly programmed. This led to the development of algorithms such as decision trees, neural networks, and support vector machines. 4. 
Deep Learning (2000s-present): In the early 2000s, deep learning emerged as a subfield of machine learning, focusing on neural networks with multiple layers. These networks can learn complex representations of data, leading to breakthroughs in image and speech recognition, natural language processing, and other areas. 5. Natural Language Processing (NLP) (1980s-present): NLP has been an active area of research since the 1980s, with a focus on developing algorithms that can understand and generate human language. This has led to applications such as chatbots, voice assistants, and language translation systems. 6. Robotics (1970s-present): The development of robotics has been closely tied to AI research, with a focus on creating machines that can perform tasks that typically require human intelligence, such as manipulation and locomotion. 7. Computer Vision (1980s-present): Computer vision has been an active area of research since the 1980s, with a focus on enabling machines to interpret and understand visual data from the world around us. This has led to applications such as image recognition, object detection, and autonomous driving. 8. Ethics and Society (1990s-present): As AI technology has become more advanced and integrated into various aspects of society, there has been a growing concern about the ethical implications of AI. This includes issues related to privacy, bias, and job displacement. 9. Reinforcement Learning (2000s-present): Reinforcement learning is a subfield of machine learning that involves training machines to make decisions based on feedback from their environment. This has led to breakthroughs in areas such as game playing, robotics, and autonomous driving. 10. Generative Models (2010s-present): Generative models are a class of AI algorithms that can generate new data that is similar to a given dataset. This has led to applications such as image synthesis, music generation, and language creation. These are just a few of the many developments in the history of AI. As the field continues to evolve, we can expect even more exciting breakthroughs and innovations in the years to come. AIMessage(content=' Artificial intelligence (AI) has a rich and varied history that spans several decades. Hinweis: The following is a brief overview of the major milestones in the history of AI, but it is by no means exhaustive.\n\n1. Early Beginnings (1950s-1960s): The term "Artificial Intelligence" was coined in 1956 by computer scientist John McCarthy. However, the concept of creating machines that can think and learn like humans dates back to ancient times. In the 1950s and 1960s, researchers began exploring the possibilities of AI using simple algorithms and machine learning techniques.\n2. Rule-Based Systems (1970s-1980s): In the 1970s and 1980s, AI research focused on developing rule-based systems, which use predefined rules to reason and make decisions. This led to the development of expert systems, which were designed to mimic the decision-making abilities of human experts in specific domains.\n3. Machine Learning (1980s-1990s): The 1980s saw a shift towards machine learning, which enables machines to learn from data without being explicitly programmed. This led to the development of algorithms such as decision trees, neural networks, and support vector machines.\n4. Deep Learning (2000s-present): In the early 2000s, deep learning emerged as a subfield of machine learning, focusing on neural networks with multiple layers. 
These networks can learn complex representations of data, leading to breakthroughs in image and speech recognition, natural language processing, and other areas.\n5. Natural Language Processing (NLP) (1980s-present): NLP has been an active area of research since the 1980s, with a focus on developing algorithms that can understand and generate human language. This has led to applications such as chatbots, voice assistants, and language translation systems.\n6. Robotics (1970s-present): The development of robotics has been closely tied to AI research, with a focus on creating machines that can perform tasks that typically require human intelligence, such as manipulation and locomotion.\n7. Computer Vision (1980s-present): Computer vision has been an active area of research since the 1980s, with a focus on enabling machines to interpret and understand visual data from the world around us. This has led to applications such as image recognition, object detection, and autonomous driving.\n8. Ethics and Society (1990s-present): As AI technology has become more advanced and integrated into various aspects of society, there has been a growing concern about the ethical implications of AI. This includes issues related to privacy, bias, and job displacement.\n9. Reinforcement Learning (2000s-present): Reinforcement learning is a subfield of machine learning that involves training machines to make decisions based on feedback from their environment. This has led to breakthroughs in areas such as game playing, robotics, and autonomous driving.\n10. Generative Models (2010s-present): Generative models are a class of AI algorithms that can generate new data that is similar to a given dataset. This has led to applications such as image synthesis, music generation, and language creation.\n\nThese are just a few of the many developments in the history of AI. As the field continues to evolve, we can expect even more exciting breakthroughs and innovations in the years to come.', additional_kwargs={}, example=False)RAG​We can use Ollama with RAG, just as shown here.Let's use the 13b model:ollama pull llama2:13bLet's also use local embeddings from OllamaEmbeddings and Chroma.pip install chromadbfrom langchain.document_loaders import WebBaseLoaderloader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")data = loader.load()from langchain.text_splitter import RecursiveCharacterTextSplittertext_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)all_splits = text_splitter.split_documents(data)from langchain.vectorstores import Chromafrom langchain.embeddings import OllamaEmbeddingsvectorstore = Chroma.from_documents(documents=all_splits, embedding=OllamaEmbeddings())question = "What are the approaches to Task Decomposition?"docs = vectorstore.similarity_search(question)len(docs) 4from langchain.prompts import PromptTemplate# Prompttemplate = """[INST] <<SYS>> Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. Use three sentences maximum and keep the answer as concise as possible. 
<</SYS>>{context}Question: {question}Helpful Answer:[/INST]"""QA_CHAIN_PROMPT = PromptTemplate( input_variables=["context", "question"], template=template,)# Chat modelfrom langchain.chat_models import ChatOllamafrom langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerchat_model = ChatOllama(model="llama2:13b", verbose=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]))# QA chainfrom langchain.chains import RetrievalQAqa_chain = RetrievalQA.from_chain_type( chat_model, retriever=vectorstore.as_retriever(), chain_type_kwargs={"prompt": QA_CHAIN_PROMPT},)question = "What are the various approaches to Task Decomposition for AI Agents?"result = qa_chain({"query": question}) Based on the provided context, there are three approaches to task decomposition for AI agents: 1. LLM with simple prompting, such as "Steps for XYZ." or "What are the subgoals for achieving XYZ?" 2. Task-specific instructions, such as "Write a story outline" for writing a novel. 3. Human inputs.You can also get logging for tokens.from langchain.schema import LLMResultfrom langchain.callbacks.base import BaseCallbackHandlerclass GenerationStatisticsCallback(BaseCallbackHandler): def on_llm_end(self, response: LLMResult, **kwargs) -> None: print(response.generations[0][0].generation_info) callback_manager = CallbackManager([StreamingStdOutCallbackHandler(), GenerationStatisticsCallback()])chat_model = ChatOllama(model="llama2:13b-chat", verbose=True, callback_manager=callback_manager)qa_chain = RetrievalQA.from_chain_type( chat_model, retriever=vectorstore.as_retriever(), chain_type_kwargs={"prompt": QA_CHAIN_PROMPT},)question = "What are the approaches to Task Decomposition?"result = qa_chain({"query": question}) Based on the given context, here is the answer to the question "What are the approaches to Task Decomposition?" There are three approaches to task decomposition: 1. LLM with simple prompting, such as "Steps for XYZ." or "What are the subgoals for achieving XYZ?" 2. Using task-specific instructions, like "Write a story outline" for writing a novel. 3. 
With human inputs.{'model': 'llama2:13b-chat', 'created_at': '2023-08-23T15:37:51.469127Z', 'done': True, 'context': [...], 'total_duration': 9514823750, 'load_duration': 795542, 'sample_count': 99, 'sample_duration': 68732000, 'prompt_eval_count': 146, 'prompt_eval_duration': 6206275000, 'eval_count': 98, 'eval_duration': 3229641000}eval_count / (eval_duration/1e9) gives tok / s98 / (3229641000/1000/1000/1000) 30.343929867127645PreviousMiniMaxNextOpenAISetupUsageRAG
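The same arithmetic can be wrapped in a small helper. This is a minimal sketch that assumes a generation_info dict carrying the eval_count and eval_duration fields shown in the output above (durations are reported in nanoseconds):

def tokens_per_second(generation_info: dict) -> float:
    # eval_duration is in nanoseconds, so divide by 1e9 to convert to seconds
    eval_seconds = generation_info["eval_duration"] / 1e9
    return generation_info["eval_count"] / eval_seconds

tokens_per_second({"eval_count": 98, "eval_duration": 3229641000})  # ~30.34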
434
https://python.langchain.com/docs/integrations/chat/openai
ComponentsChat modelsOpenAIOn this pageOpenAIThis notebook covers how to get started with OpenAI chat models.from langchain.chat_models import ChatOpenAIfrom langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,)from langchain.schema import AIMessage, HumanMessage, SystemMessagechat = ChatOpenAI(temperature=0)The above cell assumes that your OpenAI API key is set in your environment variables. If you would rather manually specify your API key and/or organization ID, use the following code:chat = ChatOpenAI(temperature=0, openai_api_key="YOUR_API_KEY", openai_organization="YOUR_ORGANIZATION_ID")Remove the openai_organization parameter should it not apply to you.messages = [ SystemMessage( content="You are a helpful assistant that translates English to French." ), HumanMessage( content="Translate this sentence from English to French. I love programming." ),]chat(messages) AIMessage(content="J'adore la programmation.", additional_kwargs={}, example=False)You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt -- this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:template = ( "You are a helpful assistant that translates {input_language} to {output_language}.")system_message_prompt = SystemMessagePromptTemplate.from_template(template)human_template = "{text}"human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)chat_prompt = ChatPromptTemplate.from_messages( [system_message_prompt, human_message_prompt])# get a chat completion from the formatted messageschat( chat_prompt.format_prompt( input_language="English", output_language="French", text="I love programming." ).to_messages()) AIMessage(content="J'adore la programmation.", additional_kwargs={}, example=False)Fine-tuning​You can call fine-tuned OpenAI models by passing in your corresponding model_name parameter.This generally takes the form of ft:{OPENAI_MODEL_NAME}:{ORG_NAME}::{MODEL_ID}. For example:fine_tuned_model = ChatOpenAI(temperature=0, model_name="ft:gpt-3.5-turbo-0613:langchain::7qTVM5AR")fine_tuned_model(messages) AIMessage(content="J'adore la programmation.", additional_kwargs={}, example=False)PreviousOllamaNextPromptLayer ChatOpenAIFine-tuning
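The formatted chat prompt can also be driven through an LLMChain, which fills in the template variables for you. A minimal sketch reusing the chat and chat_prompt objects defined above:

from langchain.chains import LLMChain

chain = LLMChain(llm=chat, prompt=chat_prompt)
# with temperature=0 this should return a translation such as "J'adore la programmation."
chain.run(input_language="English", output_language="French", text="I love programming.")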
435
https://python.langchain.com/docs/integrations/chat/promptlayer_chatopenai
ComponentsChat modelsPromptLayer ChatOpenAIOn this pagePromptLayer ChatOpenAIThis example showcases how to connect to PromptLayer to start recording your ChatOpenAI requests.Install PromptLayer​The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip.pip install promptlayerImports​import osimport promptlayerfrom langchain.chat_models import PromptLayerChatOpenAIfrom langchain.schema import HumanMessageSet the Environment API Key​You can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar.Set it as an environment variable called PROMPTLAYER_API_KEY.os.environ["PROMPTLAYER_API_KEY"] = "**********"Use the PromptLayerChatOpenAI LLM like normal​You can optionally pass in pl_tags to track your requests with PromptLayer's tagging feature.chat = PromptLayerChatOpenAI(pl_tags=["langchain"])chat([HumanMessage(content="I am a cat and I want")]) AIMessage(content='to take a nap in a cozy spot. I search around for a suitable place and finally settle on a soft cushion on the window sill. I curl up into a ball and close my eyes, relishing the warmth of the sun on my fur. As I drift off to sleep, I can hear the birds chirping outside and feel the gentle breeze blowing through the window. This is the life of a contented cat.', additional_kwargs={})The above request should now appear on your PromptLayer dashboard.Using PromptLayer Track​If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id. chat = PromptLayerChatOpenAI(return_pl_id=True)chat_results = chat.generate([[HumanMessage(content="I am a cat and I want")]])for res in chat_results.generations: pl_request_id = res[0].generation_info["pl_request_id"] promptlayer.track.score(request_id=pl_request_id, score=100)Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well. Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.PreviousOpenAINextvLLM ChatInstall PromptLayerImportsSet the Environment API KeyUse the PromptLayerChatOpenAI LLM like normalUsing PromptLayer Track
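Putting the pieces together, a minimal end-to-end sketch that both tags a request and scores it afterwards, using only the pl_tags, return_pl_id, and promptlayer.track.score APIs shown above:

import promptlayer
from langchain.chat_models import PromptLayerChatOpenAI
from langchain.schema import HumanMessage

chat = PromptLayerChatOpenAI(pl_tags=["langchain"], return_pl_id=True)
results = chat.generate([[HumanMessage(content="I am a cat and I want")]])
for res in results.generations:
    pl_request_id = res[0].generation_info["pl_request_id"]
    # score the request so it can be compared in the PromptLayer dashboard
    promptlayer.track.score(request_id=pl_request_id, score=100)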
436
https://python.langchain.com/docs/integrations/chat/vllm
ComponentsChat modelsvLLM ChatvLLM ChatvLLM can be deployed as a server that mimics the OpenAI API protocol. This allows vLLM to be used as a drop-in replacement for applications using the OpenAI API. This server can be queried in the same format as the OpenAI API.This notebook covers how to get started with vLLM chat models using LangChain's ChatOpenAI as-is.from langchain.chat_models import ChatOpenAIfrom langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate,)from langchain.schema import AIMessage, HumanMessage, SystemMessageinference_server_url = "http://localhost:8000/v1"chat = ChatOpenAI( model="mosaicml/mpt-7b", openai_api_key="EMPTY", openai_api_base=inference_server_url, max_tokens=5, temperature=0,)messages = [ SystemMessage( content="You are a helpful assistant that translates English to Italian." ), HumanMessage( content="Translate the following sentence from English to Italian: I love programming." ),]chat(messages) AIMessage(content=' Io amo programmare', additional_kwargs={}, example=False)You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt -- this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model.For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:template = ( "You are a helpful assistant that translates {input_language} to {output_language}.")system_message_prompt = SystemMessagePromptTemplate.from_template(template)human_template = "{text}"human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)chat_prompt = ChatPromptTemplate.from_messages( [system_message_prompt, human_message_prompt])# get a chat completion from the formatted messageschat( chat_prompt.format_prompt( input_language="English", output_language="Italian", text="I love programming." ).to_messages()) AIMessage(content=' I love programming too.', additional_kwargs={}, example=False)PreviousPromptLayer ChatOpenAINextDocument loaders
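Because the server mimics the OpenAI API, other ChatOpenAI features should carry over. As a minimal sketch, streaming tokens from the same endpoint (this assumes the server behind inference_server_url supports streaming responses):

from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

streaming_chat = ChatOpenAI(
    model="mosaicml/mpt-7b",
    openai_api_key="EMPTY",
    openai_api_base=inference_server_url,
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    temperature=0,
)
streaming_chat(messages)  # tokens are printed to stdout as they arrive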
437
https://python.langchain.com/docs/integrations/document_loaders
ComponentsDocument loadersDocument loaders📄️ acreomacreom is a dev-first knowledge base with tasks running on local markdown files.📄️ Airbyte CDKAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.📄️ Airbyte GongAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.📄️ Airbyte HubspotAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.📄️ Airbyte JSONAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.📄️ Airbyte SalesforceAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.📄️ Airbyte ShopifyAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.📄️ Airbyte StripeAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.📄️ Airbyte TypeformAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.📄️ Airbyte Zendesk SupportAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.📄️ Airtable* Get your API key here.📄️ Alibaba Cloud MaxComputeAlibaba Cloud MaxCompute (previously known as ODPS) is a general purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive datasets, reduce production costs, and ensure data security.📄️ Apify DatasetApify Dataset is a scalable append-only storage with sequential access built for storing structured web scraping results, such as a list of products or Google SERPs, and then exporting them to various formats like JSON, CSV, or Excel. 
Datasets are mainly used to save results of Apify Actors—serverless cloud programs for various web scraping, crawling, and data extraction use cases.📄️ ArcGISThis notebook demonstrates the use of the langchain.document_loaders.ArcGISLoader class.📄️ ArxivarXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.📄️ AssemblyAI Audio TranscriptsThe AssemblyAIAudioTranscriptLoader allows you to transcribe audio files with the AssemblyAI API and loads the transcribed text into documents.📄️ Async ChromiumChromium is one of the browsers supported by Playwright, a library used to control browser automation.📄️ AsyncHtmlAsyncHtmlLoader loads raw HTML from a list of URLs concurrently.📄️ AWS S3 DirectoryAmazon Simple Storage Service (Amazon S3) is an object storage service📄️ AWS S3 FileAmazon Simple Storage Service (Amazon S3) is an object storage service.📄️ AZLyricsAZLyrics is a large, legal, ever-growing collection of lyrics.📄️ Azure Blob Storage ContainerAzure Blob Storage is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data.📄️ Azure Blob Storage FileAzure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol, Network File System (NFS) protocol, and Azure Files REST API.📄️ Azure Document IntelligenceAzure Document Intelligence (formerly known as Azure Forms Recognizer) is machine-learning📄️ BibTeXBibTeX is a file format and reference management system commonly used in conjunction with LaTeX typesetting. It serves as a way to organize and store bibliographic information for academic and research documents.📄️ BiliBiliBilibili is one of the most beloved long-form video sites in China.📄️ BlackboardBlackboard Learn (previously the Blackboard Learning Management System) is a web-based virtual learning environment and learning management system developed by Blackboard Inc. The software features course management, customizable open architecture, and scalable design that allows integration with student information systems and authentication protocols. It may be installed on local servers, hosted by Blackboard ASP Solutions, or provided as Software as a Service hosted on Amazon Web Services. Its main purposes are stated to include the addition of online elements to courses traditionally delivered face-to-face and development of completely online courses with few or no face-to-face meetings📄️ BlockchainOverview📄️ Brave SearchBrave Search is a search engine developed by Brave Software.📄️ BrowserlessBrowserless is a service that allows you to run headless Chrome instances in the cloud. It's a great way to run browser-based automation at scale without having to worry about managing your own infrastructure.📄️ ChatGPT DataChatGPT is an artificial intelligence (AI) chatbot developed by OpenAI.📄️ College ConfidentialCollege Confidential gives information on 3,800+ colleges and universities.📄️ Concurrent LoaderWorks just like the GenericLoader but concurrently for those who choose to optimize their workflow.📄️ ConfluenceConfluence is a wiki collaboration platform that saves and organizes all of the project-related material. 
Confluence is a knowledge base that primarily handles content management activities.📄️ CoNLL-UCoNLL-U is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:📄️ Copy PasteThis notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly.📄️ CSVA comma-separated values (CSV) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas.📄️ Cube Semantic LayerThis notebook demonstrates the process of retrieving Cube's data model metadata in a format suitable for passing to LLMs as embeddings, thereby enhancing contextual information.📄️ Datadog LogsDatadog is a monitoring and analytics platform for cloud-scale applications.📄️ DiffbotUnlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page.📄️ DiscordDiscord is a VoIP and instant messaging social platform. Users have the ability to communicate with voice calls, video calls, text messaging, media and files in private chats or as part of communities called "servers". A server is a collection of persistent chat rooms and voice channels which can be accessed via invite links.📄️ DocugamiThis notebook covers how to load documents from Docugami. It describes the advantages of using this system over alternative data loaders.📄️ DropboxDropbox is a file hosting service that brings everything (traditional files, cloud content, and web shortcuts) together in one place.📄️ DuckDBDuckDB is an in-process SQL OLAP database management system.📄️ EmailThis notebook shows how to load email (.eml) or Microsoft Outlook (.msg) files.📄️ Embaasembaas is a fully managed NLP API service that offers features like embedding generation, document text extraction, document to embeddings and more. You can choose a variety of pre-trained models.📄️ EPubEPUB is an e-book file format that uses the ".epub" file extension. The term is short for electronic publication and is sometimes styled ePub. EPUB is supported by many e-readers, and compatible software is available for most smartphones, tablets, and computers.📄️ EtherscanEtherscan is the leading blockchain explorer, search, API and analytics platform for Ethereum,📄️ EverNoteEverNote is intended for archiving and creating notes in which photos, audio and saved web content can be embedded. Notes are stored in virtual "notebooks" and can be tagged, annotated, edited, searched, and exported.🗃️ example_data1 items📄️ Microsoft ExcelThe UnstructuredExcelLoader is used to load Microsoft Excel files. The loader works with both .xlsx and .xls files. The page content will be the raw text of the Excel file. If you use the loader in "elements" mode, an HTML representation of the Excel file will be available in the document metadata under the textashtml key.📄️ Facebook ChatMessenger is an American proprietary instant messaging app and platform developed by Meta Platforms. 
Originally developed as Facebook Chat in 2008, the messaging service was revamped by the company in 2010.📄️ FaunaFauna is a Document Database.📄️ FigmaFigma is a collaborative web application for interface design.📄️ GeopandasGeopandas is an open source project to make working with geospatial data in Python easier.📄️ GitGit is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers collaboratively developing source code during software development.📄️ GitBookGitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.📄️ GitHubThis notebook shows how you can load issues and pull requests (PRs) for a given repository on GitHub. We will use the LangChain Python repository as an example.📄️ Google BigQueryGoogle BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data.📄️ Google Cloud Storage DirectoryGoogle Cloud Storage is a managed service for storing unstructured data.📄️ Google Cloud Storage FileGoogle Cloud Storage is a managed service for storing unstructured data.📄️ Google DriveGoogle Drive is a file storage and synchronization service developed by Google.📄️ GrobidGROBID is a machine learning library for extracting, parsing, and re-structuring raw documents.📄️ GutenbergProject Gutenberg is an online library of free eBooks.📄️ Hacker NewsHacker News (sometimes abbreviated as HN) is a social news website focusing on computer science and entrepreneurship. It is run by the investment fund and startup incubator Y Combinator. In general, content that can be submitted is defined as "anything that gratifies one's intellectual curiosity."📄️ Huawei OBS DirectoryThe following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents.📄️ Huawei OBS FileThe following code demonstrates how to load an object from the Huawei OBS (Object Storage Service) as a document.📄️ HuggingFace datasetThe Hugging Face Hub is home to over 5,000 datasets in more than 100 languages that can be used for a broad range of tasks across NLP, Computer Vision, and Audio. They are used for a diverse range of tasks such as translation,📄️ iFixitiFixit is the largest, open repair community on the web. The site contains nearly 100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is licensed under CC-BY-NC-SA 3.0.📄️ ImagesThis covers how to load images such as JPG or PNG into a document format that we can use downstream.📄️ Image captionsBy default, the loader utilizes the pre-trained Salesforce BLIP image captioning model.📄️ IMSDbIMSDb is the Internet Movie Script Database.📄️ IuguIugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.📄️ JoplinJoplin is an open source note-taking app. Capture your thoughts and securely access them from any device.📄️ Jupyter NotebookJupyter Notebook (formerly IPython Notebook) is a web-based interactive computational environment for creating notebook documents.📄️ LarkSuite (FeiShu)LarkSuite is an enterprise collaboration platform developed by ByteDance.📄️ MastodonMastodon is a federated social media and social networking service.📄️ MediaWiki DumpMediaWiki XML Dumps contain the content of a wiki (wiki pages with all their revisions), without the site-related data. 
An XML dump does not create a full backup of the wiki database; the dump does not contain user accounts, images, edit logs, etc.📄️ Merge Documents LoaderMerge the documents returned from a set of specified data loaders.📄️ mhtmlMHTML is a format used both for emails and for archived webpages. MHTML, sometimes referred to as MHT, stands for MIME HTML; it is a single file in which an entire webpage is archived. When one saves a webpage in MHTML format, the file will contain HTML code, images, audio files, flash animations, etc.📄️ Microsoft OneDriveMicrosoft OneDrive (formerly SkyDrive) is a file hosting service operated by Microsoft.📄️ Microsoft PowerPointMicrosoft PowerPoint is a presentation program by Microsoft.📄️ Microsoft SharePointMicrosoft SharePoint is a website-based collaboration system developed by Microsoft that uses workflow applications, “list” databases, and other web parts and security features to empower business teams to work together.📄️ Microsoft WordMicrosoft Word is a word processor developed by Microsoft.📄️ Modern TreasuryModern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money.📄️ MongoDBMongoDB is a NoSQL, document-oriented database that supports JSON-like documents with a dynamic schema.📄️ News URLThis covers how to load HTML news articles from a list of URLs into a document format that we can use downstream.📄️ Notion DB 1/2Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.📄️ Notion DB 2/2Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.📄️ NucliaNuclia automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing.📄️ ObsidianObsidian is a powerful and extensible knowledge base📄️ Open Document Format (ODT)The Open Document Format for Office Applications (ODF), also known as OpenDocument, is an open file format for word processing documents, spreadsheets, presentations and graphics, using ZIP-compressed XML files. It was developed with the aim of providing an open, XML-based file format specification for office applications.📄️ Open City DataSocrata provides an API for city open data.📄️ Org-modeAn Org Mode document is a document editing, formatting, and organizing mode, designed for notes, planning, and authoring within the free software text editor Emacs.📄️ Pandas DataFrameThis notebook goes over how to load data from a pandas DataFrame.📄️ Amazon TextractAmazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents. It goes beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables. Today, many companies manually extract data from scanned documents such as PDFs, images, tables, and forms, or through simple OCR software that requires manual configuration (which often must be updated when the form changes). 
To overcome these manual and expensive processes, Textract uses ML to read and process any type of document, accurately extracting text, handwriting, tables, and other data with no manual effort. You can quickly automate document processing and act on the information extracted, whether you’re automating loans processing or extracting information from invoices and receipts. Textract can extract the data in minutes instead of hours or days.📄️ Polars DataFrameThis notebook goes over how to load data from a polars DataFrame.📄️ PsychicThis notebook covers how to load documents from Psychic. See here for more details.📄️ PubMedPubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.📄️ PySparkThis notebook goes over how to load data from a PySpark DataFrame.📄️ ReadTheDocs DocumentationRead the Docs is an open-sourced free software documentation hosting platform. It generates documentation written with the Sphinx documentation generator.📄️ Recursive URLWe may want to load all URLs under a root directory.📄️ RedditReddit is an American social news aggregation, content rating, and discussion website.📄️ RoamROAM is a note-taking tool for networked thought, designed to create a personal knowledge base.📄️ RocksetRockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups).📄️ RSS FeedsThis covers how to load HTML news articles from a list of RSS feed URLs into a document format that we can use downstream.📄️ RSTA reStructured Text (RST) file is a file format for textual data used primarily in the Python programming language community for technical documentation.📄️ SitemapExtends from the WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrapes and loads all pages in the sitemap, returning each page as a Document.📄️ SlackSlack is an instant messaging program.📄️ SnowflakeThis notebook goes over how to load documents from Snowflake📄️ Source CodeThis notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document.📄️ SpreedlySpreedly is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized by Spreedly are stored at Spreedly, allowing you to independently store a card and then pass that card to different end points based on your business requirements.📄️ StripeStripe is an Irish-American financial services and software as a service (SaaS) company. 
It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.📄️ SubtitleThe SubRip file format is described on the Matroska multimedia container format website as "perhaps the most basic of all subtitle formats." SubRip (SubRip Text) files are named with the extension .srt, and contain formatted lines of plain text in groups separated by a blank line. Subtitles are numbered sequentially, starting at 1. The timecode format used is hours:minutes:seconds,milliseconds with time units fixed to two zero-padded digits and fractions fixed to three zero-padded digits (00:00:00,000). The fractional separator used is the comma, since the program was written in France.📄️ TelegramTelegram Messenger is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features.📄️ Tencent COS DirectoryThis covers how to load document objects from a Tencent COS Directory.📄️ Tencent COS FileThis covers how to load a document object from a Tencent COS File.📄️ TensorFlow DatasetsTensorFlow Datasets is a collection of datasets ready to use, with TensorFlow or other Python ML frameworks, such as Jax. All datasets are exposed as tf.data.Datasets, enabling easy-to-use and high-performance input pipelines. To get started see the guide and the list of datasets.📄️ 2Markdown2markdown service transforms website content into structured markdown files.📄️ TOMLTOML is a file format for configuration files. It is intended to be easy to read and write, and is designed to map unambiguously to a dictionary. Its specification is open-source. TOML is implemented in many programming languages. The name TOML is an acronym for "Tom's Obvious, Minimal Language" referring to its creator, Tom Preston-Werner.📄️ TrelloTrello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a "board" where users can create lists and cards to represent their tasks and activities.📄️ TSVA tab-separated values (TSV) file is a simple, text-based file format for storing tabular data. Records are separated by newlines, and values within a record are separated by tab characters.📄️ TwitterTwitter is an online social media and social networking service.📄️ Unstructured FileThis notebook covers how to use Unstructured package to load files of many types. Unstructured currently supports loading of text files, powerpoints, html, pdfs, images, and more.📄️ URLThis covers how to load HTML documents from a list of URLs into a document format that we can use downstream.📄️ WeatherOpenWeatherMap is an open source weather service provider📄️ WebBaseLoaderThis covers how to use WebBaseLoader to load all text from HTML webpages into a document format that we can use downstream. For more custom logic for loading webpages look at some child class examples such as IMSDbLoader, AZLyricsLoader, and CollegeConfidentialLoader📄️ WhatsApp ChatWhatsApp (also called WhatsApp Messenger) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. 
It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content.📄️ WikipediaWikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.📄️ XMLThe UnstructuredXMLLoader is used to load XML files. The loader works with .xml files. The page content will be the text extracted from the XML tags.📄️ Xorbits Pandas DataFrameThis notebook goes over how to load data from a xorbits.pandas DataFrame.📄️ YouTube audioBuilding chat or QA applications on YouTube videos is a topic of high interest.📄️ YouTube transcriptsYouTube is an online video sharing and social media platform created by Google.PreviousvLLM ChatNextacreom
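All of the loaders above share the same basic interface: construct the loader with its source-specific arguments, call load(), and get back a list of Document objects exposing page_content and metadata. A minimal generic sketch using the WebBaseLoader mentioned above (the URL is just an illustrative placeholder):

from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://example.com")
docs = loader.load()
for doc in docs:
    # every loader returns Documents carrying text plus source metadata
    print(doc.metadata["source"], len(doc.page_content))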
438
https://python.langchain.com/docs/integrations/document_loaders/acreom
ComponentsDocument loadersacreomacreomacreom is a dev-first knowledge base with tasks running on local markdown files.Below is an example of how to load a local acreom vault into LangChain. As the local vault in acreom is a folder of plain text .md files, the loader requires the path to the directory. Vault files may contain some metadata which is stored as a YAML header. These values will be added to the document’s metadata if collect_metadata is set to True. from langchain.document_loaders import AcreomLoaderloader = AcreomLoader("<path-to-acreom-vault>", collect_metadata=False)docs = loader.load()PreviousDocument loadersNextAirbyte CDK
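To pick up that YAML front matter as well, enable metadata collection. A minimal sketch (the vault path is a placeholder):

from langchain.document_loaders import AcreomLoader

loader = AcreomLoader("<path-to-acreom-vault>", collect_metadata=True)
docs = loader.load()
# values from each file's YAML header appear in the document metadata
print(docs[0].metadata)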
439
https://python.langchain.com/docs/integrations/document_loaders/airbyte_cdk
ComponentsDocument loadersAirbyte CDKOn this pageAirbyte CDKAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.A lot of source connectors are implemented using the Airbyte CDK. This loader allows you to run any of these connectors and return the data as documents.Installation​First, you need to install the airbyte-cdk python package.#!pip install airbyte-cdkThen, either install an existing connector from the Airbyte Github repository or create your own connector using the Airbyte CDK.For example, to install the Github connector, run#!pip install "source_github@git+https://github.com/airbytehq/airbyte.git@master#subdirectory=airbyte-integrations/connectors/source-github"Some sources are also published as regular packages on PyPIExample​Now you can create an AirbyteCDKLoader based on the imported source. It takes a config object that's passed to the connector. You also have to pick the stream you want to retrieve records from by name (stream_name). Check the connectors documentation page and spec definition for more information on the config object and available streams. For the Github connector these are:https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-github/source_github/spec.json.https://docs.airbyte.com/integrations/sources/github/from langchain.document_loaders.airbyte import AirbyteCDKLoaderfrom source_github.source import SourceGithub # plug in your own source hereconfig = { # your github configuration "credentials": { "api_url": "api.github.com", "personal_access_token": "<token>" }, "repository": "<repo>", "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>"}issues_loader = AirbyteCDKLoader(source_class=SourceGithub, config=config, stream_name="issues")Now you can load documents the usual waydocs = issues_loader.load()As load returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the lazy_load method which returns an iterator instead:docs_iterator = issues_loader.lazy_load()Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record_handler function when creating the loader:from langchain.docstore.document import Documentdef handle_record(record, id): return Document(page_content=record.data["title"] + "\n" + (record.data["body"] or ""), metadata=record.data)issues_loader = AirbyteCDKLoader(source_class=SourceGithub, config=config, stream_name="issues", record_handler=handle_record)docs = issues_loader.load()Incremental loads​Some streams allow incremental loading, which means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated frequently.To take advantage of this, store the last_state property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.last_state = issues_loader.last_state # store safelyincremental_issue_loader = AirbyteCDKLoader(source_class=SourceGithub, config=config, stream_name="issues", state=last_state)new_docs = incremental_issue_loader.load()PreviousacreomNextAirbyte GongInstallationExampleIncremental loads
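Across separate runs, that state has to live somewhere. A minimal sketch of persisting it to disk between invocations (the file name is a placeholder, and this assumes the state object is picklable; swap in whatever serialization your deployment uses):

import os
import pickle

STATE_FILE = "issues_state.pkl"  # hypothetical location for the sync state

state = None
if os.path.exists(STATE_FILE):
    with open(STATE_FILE, "rb") as f:
        state = pickle.load(f)

issues_loader = AirbyteCDKLoader(source_class=SourceGithub, config=config, stream_name="issues", state=state)
docs = issues_loader.load()

# save the new state so the next run only fetches records added since this one
with open(STATE_FILE, "wb") as f:
    pickle.dump(issues_loader.last_state, f)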
440
https://python.langchain.com/docs/integrations/document_loaders/airbyte_gong
ComponentsDocument loadersAirbyte GongOn this pageAirbyte GongAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.This loader exposes the Gong connector as a document loader, allowing you to load various Gong objects as documents.Installation​First, you need to install the airbyte-source-gong python package.#!pip install airbyte-source-gongExample​Check out the Airbyte documentation page for details about how to configure the reader. The JSON schema the config object should adhere to can be found on Github: https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-gong/source_gong/spec.yaml.The general shape looks like this:{ "access_key": "<access key name>", "access_key_secret": "<access key secret>", "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>",}By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.from langchain.document_loaders.airbyte import AirbyteGongLoaderconfig = { # your gong configuration}loader = AirbyteGongLoader(config=config, stream_name="calls") # check the documentation linked above for a list of all streamsNow you can load documents the usual waydocs = loader.load()As load returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the lazy_load method which returns an iterator instead:docs_iterator = loader.lazy_load()Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record_handler function when creating the loader:from langchain.docstore.document import Documentdef handle_record(record, id): return Document(page_content=record.data["title"], metadata=record.data)loader = AirbyteGongLoader(config=config, record_handler=handle_record, stream_name="calls")docs = loader.load()Incremental loads​Some streams allow incremental loading, which means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated frequently.To take advantage of this, store the last_state property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.last_state = loader.last_state # store safelyincremental_loader = AirbyteGongLoader(config=config, stream_name="calls", state=last_state)new_docs = incremental_loader.load()PreviousAirbyte CDKNextAirbyte HubspotInstallationExampleIncremental loads
441
https://python.langchain.com/docs/integrations/document_loaders/airbyte_hubspot
ComponentsDocument loadersAirbyte HubspotOn this pageAirbyte HubspotAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.This loader exposes the Hubspot connector as a document loader, allowing you to load various Hubspot objects as documents.Installation​First, you need to install the airbyte-source-hubspot python package.#!pip install airbyte-source-hubspotExample​Check out the Airbyte documentation page for details about how to configure the reader. The JSON schema the config object should adhere to can be found on Github: https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-hubspot/source_hubspot/spec.yaml.The general shape looks like this:{ "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>", "credentials": { "credentials_title": "Private App Credentials", "access_token": "<access token of your private app>" }}By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.from langchain.document_loaders.airbyte import AirbyteHubspotLoaderconfig = { # your hubspot configuration}loader = AirbyteHubspotLoader(config=config, stream_name="products") # check the documentation linked above for a list of all streamsNow you can load documents the usual waydocs = loader.load()As load returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the lazy_load method which returns an iterator instead:docs_iterator = loader.lazy_load()Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record_handler function when creating the loader:from langchain.docstore.document import Documentdef handle_record(record, id): return Document(page_content=record.data["title"], metadata=record.data)loader = AirbyteHubspotLoader(config=config, record_handler=handle_record, stream_name="products")docs = loader.load()Incremental loads​Some streams allow incremental loading, which means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated frequently.To take advantage of this, store the last_state property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.last_state = loader.last_state # store safelyincremental_loader = AirbyteHubspotLoader(config=config, stream_name="products", state=last_state)new_docs = incremental_loader.load()PreviousAirbyte GongNextAirbyte JSONInstallationExampleIncremental loads
442
https://python.langchain.com/docs/integrations/document_loaders/airbyte_json
ComponentsDocument loadersAirbyte JSONAirbyte JSONAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.This covers how to load any source from Airbyte into a local JSON file that can be read in as a documentPrereqs: Have Docker Desktop installedSteps:1) Clone Airbyte from GitHub - git clone https://github.com/airbytehq/airbyte.git2) Switch into Airbyte directory - cd airbyte3) Start Airbyte - docker compose up4) In your browser, just visit http://localhost:8000. You will be asked for a username and password. By default, that's username airbyte and password password.5) Setup any source you wish.6) Set destination as Local JSON, with specified destination path - let's say /json_data. Set up manual sync.7) Run the connection.8) To see what files are created, you can navigate to: file:///tmp/airbyte_local9) Find your data and copy path. That path should be saved in the file variable below. It should start with /tmp/airbyte_localfrom langchain.document_loaders import AirbyteJSONLoaderls /tmp/airbyte_local/json_data/ _airbyte_raw_pokemon.jsonlloader = AirbyteJSONLoader("/tmp/airbyte_local/json_data/_airbyte_raw_pokemon.jsonl")data = loader.load()print(data[0].page_content[:500]) abilities: ability: name: blaze url: https://pokeapi.co/api/v2/ability/66/ is_hidden: False slot: 1 ability: name: solar-power url: https://pokeapi.co/api/v2/ability/94/ is_hidden: True slot: 3 base_experience: 267 forms: name: charizard url: https://pokeapi.co/api/v2/pokemon-form/6/ game_indices: game_index: 180 version: name: red url: https://pokeapi.co/api/v2/version/1/ game_index: 180 version: name: blue url: https://pokeapi.co/api/v2/version/2/ game_index: 180 version: nPreviousAirbyte HubspotNextAirbyte Salesforce
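The loaded documents can then be used like any others downstream; for example, a minimal sketch of chunking them for indexing:

from langchain.text_splitter import RecursiveCharacterTextSplitter

# split the Airbyte JSON documents loaded above into ~500-character chunks
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
splits = text_splitter.split_documents(data)
print(len(splits))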
443
https://python.langchain.com/docs/integrations/document_loaders/airbyte_salesforce
ComponentsDocument loadersAirbyte SalesforceOn this pageAirbyte SalesforceAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.This loader exposes the Salesforce connector as a document loader, allowing you to load various Salesforce objects as documents.Installation​First, you need to install the airbyte-source-salesforce python package.#!pip install airbyte-source-salesforceExample​Check out the Airbyte documentation page for details about how to configure the reader. The JSON schema the config object should adhere to can be found on Github: https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-salesforce/source_salesforce/spec.yaml.The general shape looks like this:{ "client_id": "<oauth client id>", "client_secret": "<oauth client secret>", "refresh_token": "<oauth refresh token>", "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>", "is_sandbox": False, # set to True if you're using a sandbox environment "streams_criteria": [ # Array of filters for salesforce objects that should be loadable {"criteria": "exacts", "value": "Account"}, # Exact name of salesforce object {"criteria": "starts with", "value": "Asset"}, # Prefix of the name # Other allowed criteria: ends with, contains, starts not with, ends not with, not contains, not exacts ],}By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.from langchain.document_loaders.airbyte import AirbyteSalesforceLoaderconfig = { # your salesforce configuration}loader = AirbyteSalesforceLoader(config=config, stream_name="asset") # check the documentation linked above for a list of all streamsNow you can load documents the usual waydocs = loader.load()As load returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the lazy_load method which returns an iterator instead:docs_iterator = loader.lazy_load()Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record_handler function when creating the loader:from langchain.docstore.document import Documentdef handle_record(record, id): return Document(page_content=record.data["title"], metadata=record.data)loader = AirbyteSalesforceLoader(config=config, record_handler=handle_record, stream_name="asset")docs = loader.load()Incremental loads​Some streams allow incremental loading, which means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated frequently.To take advantage of this, store the last_state property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.last_state = loader.last_state # store safelyincremental_loader = AirbyteSalesforceLoader(config=config, stream_name="asset", state=last_state)new_docs = incremental_loader.load()PreviousAirbyte JSONNextAirbyte ShopifyInstallationExampleIncremental loads
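The record_handler can assemble the page content from whatever fields matter to you. A minimal sketch (Name and Description are hypothetical Salesforce field names, not guaranteed keys; inspect record.data to see what your objects actually expose):

from langchain.docstore.document import Document

def handle_asset(record, id):
    # "Name" and "Description" are assumed field names for illustration only
    name = record.data.get("Name") or ""
    description = record.data.get("Description") or ""
    return Document(page_content=name + "\n" + description, metadata=record.data)

loader = AirbyteSalesforceLoader(config=config, record_handler=handle_asset, stream_name="asset")
docs = loader.load()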
444
https://python.langchain.com/docs/integrations/document_loaders/airbyte_shopify
ComponentsDocument loadersAirbyte ShopifyOn this pageAirbyte ShopifyAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.This loader exposes the Shopify connector as a document loader, allowing you to load various Shopify objects as documents.Installation​First, you need to install the airbyte-source-shopify python package.#!pip install airbyte-source-shopifyExample​Check out the Airbyte documentation page for details about how to configure the reader. The JSON schema the config object should adhere to can be found on Github: https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-shopify/source_shopify/spec.json.The general shape looks like this:{ "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>", "shop": "<name of the shop you want to retrieve documents from>", "credentials": { "auth_method": "api_password", "api_password": "<your api password>" }}By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.from langchain.document_loaders.airbyte import AirbyteShopifyLoaderconfig = { # your shopify configuration}loader = AirbyteShopifyLoader(config=config, stream_name="orders") # check the documentation linked above for a list of all streamsNow you can load documents the usual waydocs = loader.load()As load returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the lazy_load method which returns an iterator instead:docs_iterator = loader.lazy_load()Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record_handler function when creating the loader:from langchain.docstore.document import Documentdef handle_record(record, id): return Document(page_content=record.data["title"], metadata=record.data)loader = AirbyteShopifyLoader(config=config, record_handler=handle_record, stream_name="orders")docs = loader.load()Incremental loads​Some streams allow incremental loading, which means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated frequently.To take advantage of this, store the last_state property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.last_state = loader.last_state # store safelyincremental_loader = AirbyteShopifyLoader(config=config, stream_name="orders", state=last_state)new_docs = incremental_loader.load()PreviousAirbyte SalesforceNextAirbyte StripeInstallationExampleIncremental loads
445
https://python.langchain.com/docs/integrations/document_loaders/airbyte_stripe
ComponentsDocument loadersAirbyte StripeOn this pageAirbyte StripeAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.This loader exposes the Stripe connector as a document loader, allowing you to load various Stripe objects as documents.Installation​First, you need to install the airbyte-source-stripe python package.#!pip install airbyte-source-stripeExample​Check out the Airbyte documentation page for details about how to configure the reader. The JSON schema the config object should adhere to can be found on Github: https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-stripe/source_stripe/spec.yaml.The general shape looks like this:{ "client_secret": "<secret key>", "account_id": "<account id>", "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>",}By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.from langchain.document_loaders.airbyte import AirbyteStripeLoaderconfig = { # your stripe configuration}loader = AirbyteStripeLoader(config=config, stream_name="invoices") # check the documentation linked above for a list of all streamsNow you can load documents the usual waydocs = loader.load()As load returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the lazy_load method which returns an iterator instead:docs_iterator = loader.lazy_load()Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record_handler function when creating the loader:from langchain.docstore.document import Documentdef handle_record(record, id): return Document(page_content=record.data["title"], metadata=record.data)loader = AirbyteStripeLoader(config=config, record_handler=handle_record, stream_name="invoices")docs = loader.load()Incremental loads​Some streams allow incremental loading, which means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated frequently.To take advantage of this, store the last_state property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.last_state = loader.last_state # store safelyincremental_loader = AirbyteStripeLoader(config=config, record_handler=handle_record, stream_name="invoices", state=last_state)new_docs = incremental_loader.load()PreviousAirbyte ShopifyNextAirbyte TypeformInstallationExampleIncremental loads
446
https://python.langchain.com/docs/integrations/document_loaders/airbyte_typeform
ComponentsDocument loadersAirbyte TypeformOn this pageAirbyte TypeformAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.This loader exposes the Typeform connector as a document loader, allowing you to load various Typeform objects as documents.Installation​First, you need to install the airbyte-source-typeform python package.#!pip install airbyte-source-typeformExample​Check out the Airbyte documentation page for details about how to configure the reader. The JSON schema the config object should adhere to can be found on Github: https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-typeform/source_typeform/spec.json.The general shape looks like this:{ "credentials": { "auth_type": "Private Token", "access_token": "<your auth token>" }, "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>", "form_ids": ["<id of form to load records for>"] # if omitted, records from all forms will be loaded}By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.from langchain.document_loaders.airbyte import AirbyteTypeformLoaderconfig = { # your typeform configuration}loader = AirbyteTypeformLoader(config=config, stream_name="forms") # check the documentation linked above for a list of all streamsNow you can load documents the usual waydocs = loader.load()As load returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the lazy_load method, which returns an iterator instead:docs_iterator = loader.lazy_load()Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record_handler function when creating the loader:from langchain.docstore.document import Documentdef handle_record(record, id): return Document(page_content=record.data["title"], metadata=record.data)loader = AirbyteTypeformLoader(config=config, record_handler=handle_record, stream_name="forms")docs = loader.load()Incremental loads​Some streams allow incremental loading; this means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated frequently.To take advantage of this, store the last_state property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.last_state = loader.last_state # store safelyincremental_loader = AirbyteTypeformLoader(config=config, record_handler=handle_record, stream_name="forms", state=last_state)new_docs = incremental_loader.load()PreviousAirbyte StripeNextAirbyte Zendesk SupportInstallationExampleIncremental loads
447
https://python.langchain.com/docs/integrations/document_loaders/airbyte_zendesk_support
ComponentsDocument loadersAirbyte Zendesk SupportOn this pageAirbyte Zendesk SupportAirbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.This loader exposes the Zendesk Support connector as a document loader, allowing you to load various objects as documents.Installation​First, you need to install the airbyte-source-zendesk-support python package.#!pip install airbyte-source-zendesk-supportExample​Check out the Airbyte documentation page for details about how to configure the reader. The JSON schema the config object should adhere to can be found on Github: https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-zendesk-support/source_zendesk_support/spec.json.The general shape looks like this:{ "subdomain": "<your zendesk subdomain>", "start_date": "<date from which to start retrieving records from in ISO format, e.g. 2020-10-20T00:00:00Z>", "credentials": { "credentials": "api_token", "email": "<your email>", "api_token": "<your api token>" }}By default all fields are stored as metadata in the documents and the text is set to an empty string. Construct the text of the document by transforming the documents returned by the reader.from langchain.document_loaders.airbyte import AirbyteZendeskSupportLoaderconfig = { # your zendesk-support configuration}loader = AirbyteZendeskSupportLoader(config=config, stream_name="tickets") # check the documentation linked above for a list of all streamsNow you can load documents the usual waydocs = loader.load()As load returns a list, it will block until all documents are loaded. To have better control over this process, you can also use the lazy_load method, which returns an iterator instead:docs_iterator = loader.lazy_load()Keep in mind that by default the page content is empty and the metadata object contains all the information from the record. To create documents in a different format, pass in a record_handler function when creating the loader:from langchain.docstore.document import Documentdef handle_record(record, id): return Document(page_content=record.data["title"], metadata=record.data)loader = AirbyteZendeskSupportLoader(config=config, record_handler=handle_record, stream_name="tickets")docs = loader.load()Incremental loads​Some streams allow incremental loading; this means the source keeps track of synced records and won't load them again. This is useful for sources that have a high volume of data and are updated frequently.To take advantage of this, store the last_state property of the loader and pass it in when creating the loader again. This will ensure that only new records are loaded.last_state = loader.last_state # store safelyincremental_loader = AirbyteZendeskSupportLoader(config=config, stream_name="tickets", state=last_state)new_docs = incremental_loader.load()PreviousAirbyte TypeformNextAirtableInstallationExampleIncremental loads
448
https://python.langchain.com/docs/integrations/document_loaders/airtable
ComponentsDocument loadersAirtableAirtablepip install pyairtablefrom langchain.document_loaders import AirtableLoaderGet your API key here.Get the ID of your base here.Get your table ID from the table URL as shown here.api_key = "xxx"base_id = "xxx"table_id = "xxx"loader = AirtableLoader(api_key, table_id, base_id)docs = loader.load()Returns each table row as a dict.len(docs) 3eval(docs[0].page_content) {'id': 'recF3GbGZCuh9sXIQ', 'createdTime': '2023-06-09T04:47:21.000Z', 'fields': {'Priority': 'High', 'Status': 'In progress', 'Name': 'Document Splitters'}}PreviousAirbyte Zendesk SupportNextAlibaba Cloud MaxCompute
449
https://python.langchain.com/docs/integrations/document_loaders/alibaba_cloud_maxcompute
ComponentsDocument loadersAlibaba Cloud MaxComputeOn this pageAlibaba Cloud MaxComputeAlibaba Cloud MaxCompute (previously known as ODPS) is a general purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive datasets, reduce production costs, and ensure data security.The MaxComputeLoader lets you execute a MaxCompute SQL query and loads the results as one document per row.pip install pyodps Collecting pyodps Downloading pyodps-0.11.4.post0-cp39-cp39-macosx_10_9_universal2.whl (2.0 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 1.7 MB/s eta 0:00:00 Requirement already satisfied: charset-normalizer>=2 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (3.1.0) Requirement already satisfied: urllib3<2.0,>=1.26.0 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (1.26.15) Requirement already satisfied: idna>=2.5 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (3.4) Requirement already satisfied: certifi>=2017.4.17 in /Users/newboy/anaconda3/envs/langchain/lib/python3.9/site-packages (from pyodps) (2023.5.7) Installing collected packages: pyodps Successfully installed pyodps-0.11.4.post0Basic Usage​To instantiate the loader you'll need a SQL query to execute, your MaxCompute endpoint and project name, and your access ID and secret access key. The access ID and secret access key can either be passed in directly via the access_id and secret_access_key parameters, or they can be set as the environment variables MAX_COMPUTE_ACCESS_ID and MAX_COMPUTE_SECRET_ACCESS_KEY (a sketch of the environment-variable route follows this page).from langchain.document_loaders import MaxComputeLoaderbase_query = """SELECT *FROM ( SELECT 1 AS id, 'content1' AS content, 'meta_info1' AS meta_info UNION ALL SELECT 2 AS id, 'content2' AS content, 'meta_info2' AS meta_info UNION ALL SELECT 3 AS id, 'content3' AS content, 'meta_info3' AS meta_info) mydata;"""endpoint = "<ENDPOINT>"project = "<PROJECT>"ACCESS_ID = "<ACCESS ID>"SECRET_ACCESS_KEY = "<SECRET ACCESS KEY>"loader = MaxComputeLoader.from_params( base_query, endpoint, project, access_id=ACCESS_ID, secret_access_key=SECRET_ACCESS_KEY,)data = loader.load()print(data) [Document(page_content='id: 1\ncontent: content1\nmeta_info: meta_info1', metadata={}), Document(page_content='id: 2\ncontent: content2\nmeta_info: meta_info2', metadata={}), Document(page_content='id: 3\ncontent: content3\nmeta_info: meta_info3', metadata={})]print(data[0].page_content) id: 1 content: content1 meta_info: meta_info1print(data[0].metadata) {}Specifying Which Columns are Content vs Metadata​You can configure which subset of columns should be loaded as the contents of the Document and which as the metadata using the page_content_columns and metadata_columns parameters.loader = MaxComputeLoader.from_params( base_query, endpoint, project, page_content_columns=["content"], # Specify Document page content metadata_columns=["id", "meta_info"], # Specify Document metadata access_id=ACCESS_ID, secret_access_key=SECRET_ACCESS_KEY,)data = loader.load()print(data[0].page_content) content: content1print(data[0].metadata) {'id': 1, 'meta_info': 'meta_info1'}PreviousAirtableNextApify DatasetBasic UsageSpecifying Which Columns are Content vs Metadata
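As a follow-up to the credentials note above, here is a minimal sketch of the environment-variable route: with MAX_COMPUTE_ACCESS_ID and MAX_COMPUTE_SECRET_ACCESS_KEY set, the access_id and secret_access_key parameters can presumably be omitted. The endpoint and project values are placeholders, as on the page.

import os

from langchain.document_loaders import MaxComputeLoader

# Credentials picked up from the environment instead of being passed
# directly (values below are placeholders).
os.environ["MAX_COMPUTE_ACCESS_ID"] = "<ACCESS ID>"
os.environ["MAX_COMPUTE_SECRET_ACCESS_KEY"] = "<SECRET ACCESS KEY>"

base_query = "SELECT 1 AS id, 'content1' AS content"  # any MaxCompute SQL
endpoint = "<ENDPOINT>"
project = "<PROJECT>"

loader = MaxComputeLoader.from_params(base_query, endpoint, project)
data = loader.load()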
450
https://python.langchain.com/docs/integrations/document_loaders/apify_dataset
ComponentsDocument loadersApify DatasetOn this pageApify DatasetApify Dataset is a scalable, append-only storage with sequential access, built for storing structured web scraping results, such as a list of products or Google SERPs, which can then be exported to various formats like JSON, CSV, or Excel. Datasets are mainly used to save results of Apify Actors—serverless cloud programs for various web scraping, crawling, and data extraction use cases.This notebook shows how to load Apify datasets into LangChain.Prerequisites​You need to have an existing dataset on the Apify platform. If you don't have one, please first check out this notebook on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs.#!pip install apify-clientFirst, import ApifyDatasetLoader into your source code:from langchain.document_loaders import ApifyDatasetLoaderfrom langchain.docstore.document import DocumentThen provide a function that maps Apify dataset record fields to LangChain Document format.For example, if your dataset items are structured like this:{ "url": "https://apify.com", "text": "Apify is the best web scraping and automation platform."}The mapping function in the code below will convert them to LangChain Document format, so that you can use them further with any LLM model (e.g. for question answering).loader = ApifyDatasetLoader( dataset_id="your-dataset-id", dataset_mapping_function=lambda dataset_item: Document( page_content=dataset_item["text"], metadata={"source": dataset_item["url"]} ),)data = loader.load()An example with question answering​In this example, we use data from a dataset to answer a question.from langchain.docstore.document import Documentfrom langchain.document_loaders import ApifyDatasetLoaderfrom langchain.indexes import VectorstoreIndexCreatorloader = ApifyDatasetLoader( dataset_id="your-dataset-id", dataset_mapping_function=lambda item: Document( page_content=item["text"] or "", metadata={"source": item["url"]} ),)index = VectorstoreIndexCreator().from_loaders([loader])query = "What is Apify?"result = index.query_with_sources(query)print(result["answer"])print(result["sources"]) Apify is a platform for developing, running, and sharing serverless cloud programs. It enables users to create web scraping and automation tools and publish them on the Apify platform. https://docs.apify.com/platform/actors, https://docs.apify.com/platform/actors/running/actors-in-store, https://docs.apify.com/platform/security, https://docs.apify.com/platform/actors/examplesPreviousAlibaba Cloud MaxComputeNextArcGISPrerequisitesAn example with question answering
451
https://python.langchain.com/docs/integrations/document_loaders/arcgis
ComponentsDocument loadersArcGISOn this pageArcGISThis notebook demonstrates the use of the langchain.document_loaders.ArcGISLoader class.You will need to install the ArcGIS API for Python arcgis and, optionally, bs4.BeautifulSoup.You can use an arcgis.gis.GIS object for authenticated data loading, or leave it blank to access public data.from langchain.document_loaders import ArcGISLoaderurl = "https://maps1.vcgov.org/arcgis/rest/services/Beaches/MapServer/7"loader = ArcGISLoader(url)docs = loader.load() CPU times: user 2.37 ms, sys: 5.83 ms, total: 8.19 ms Wall time: 1.05 sdocs[0].metadata {'accessed': '2023-09-13T19:58:32.546576+00:00Z', 'name': 'Beach Ramps', 'url': 'https://maps1.vcgov.org/arcgis/rest/services/Beaches/MapServer/7', 'layer_description': '(Not Provided)', 'item_description': '(Not Provided)', 'layer_properties': { "currentVersion": 10.81, "id": 7, "name": "Beach Ramps", "type": "Feature Layer", "description": "", "geometryType": "esriGeometryPoint", "sourceSpatialReference": { "wkid": 2881, "latestWkid": 2881 }, "copyrightText": "", "parentLayer": null, "subLayers": [], "minScale": 750000, "maxScale": 0, "drawingInfo": { "renderer": { "type": "simple", "symbol": { "type": "esriPMS", "url": "9bb2e5ca499bb68aa3ee0d4e1ecc3849", "imageData": "iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAAXNSR0IB2cksfwAAAAlwSFlzAAAOxAAADsQBlSsOGwAAAJJJREFUOI3NkDEKg0AQRZ9kkSnSGBshR7DJqdJYeg7BMpcS0uQWQsqoCLExkcUJzGqT38zw2fcY1rEzbp7vjXz0EXC7gBxs1ABcG/8CYkCcDqwyLqsV+RlV0I/w7PzuJBArr1VB20H58Ls6h+xoFITkTwWpQJX7XSIBAnFwVj7MLAjJV/AC6G3QoAmK+74Lom04THTBEp/HCSc6AAAAAElFTkSuQmCC", "contentType": "image/png", "width": 12, "height": 12, "angle": 0, "xoffset": 0, "yoffset": 0 }, "label": "", "description": "" }, "transparency": 0, "labelingInfo": null }, "defaultVisibility": true, "extent": { "xmin": -81.09480168806815, "ymin": 28.858349245353473, "xmax": -80.77512908572814, "ymax": 29.41078388840041, "spatialReference": { "wkid": 4326, "latestWkid": 4326 } }, "hasAttachments": false, "htmlPopupType": "esriServerHTMLPopupTypeNone", "displayField": "AccessName", "typeIdField": null, "subtypeFieldName": null, "subtypeField": null, "defaultSubtypeCode": null, "fields": [ { "name": "OBJECTID", "type": "esriFieldTypeOID", "alias": "OBJECTID", "domain": null }, { "name": "Shape", "type": "esriFieldTypeGeometry", "alias": "Shape", "domain": null }, { "name": "AccessName", "type": "esriFieldTypeString", "alias": "AccessName", "length": 40, "domain": null }, { "name": "AccessID", "type": "esriFieldTypeString", "alias": "AccessID", "length": 50, "domain": null }, { "name": "AccessType", "type": "esriFieldTypeString", "alias": "AccessType", "length": 25, "domain": null }, { "name": "GeneralLoc", "type": "esriFieldTypeString", "alias": "GeneralLoc", "length": 100, "domain": null }, { "name": "MilePost", "type": "esriFieldTypeDouble", "alias": "MilePost", "domain": null }, { "name": "City", "type": "esriFieldTypeString", "alias": "City", "length": 50, "domain": null }, { "name": "AccessStatus", "type": "esriFieldTypeString", "alias": "AccessStatus", "length": 50, "domain": null }, { "name": "Entry_Date_Time", "type": "esriFieldTypeDate", "alias": "Entry_Date_Time", "length": 8, "domain": null }, { "name": "DrivingZone", "type": "esriFieldTypeString", "alias": "DrivingZone", "length": 50, "domain": null } ], "geometryField": { "name": "Shape", "type": "esriFieldTypeGeometry", "alias": "Shape" }, "indexes": null, "subtypes": [], "relationships": [], "canModifyLayer": true, "canScaleSymbols": false, "hasLabels": false, 
"capabilities": "Map,Query,Data", "maxRecordCount": 1000, "supportsStatistics": true, "supportsAdvancedQueries": true, "supportedQueryFormats": "JSON, geoJSON", "isDataVersioned": false, "ownershipBasedAccessControlForFeatures": { "allowOthersToQuery": true }, "useStandardizedQueries": true, "advancedQueryCapabilities": { "useStandardizedQueries": true, "supportsStatistics": true, "supportsHavingClause": true, "supportsCountDistinct": true, "supportsOrderBy": true, "supportsDistinct": true, "supportsPagination": true, "supportsTrueCurve": true, "supportsReturningQueryExtent": true, "supportsQueryWithDistance": true, "supportsSqlExpression": true }, "supportsDatumTransformation": true, "dateFieldsTimeReference": null, "supportsCoordinatesQuantization": true }}Retrieving Geometries​If you want to retrieve feature geometries, you may do so with the return_geometry keyword.Each document's geometry will be stored in its metadata dictionary.loader_geom = ArcGISLoader(url, return_geometry=True)docs = loader_geom.load() CPU times: user 9.6 ms, sys: 5.84 ms, total: 15.4 ms Wall time: 1.06 sdocs[0].metadata['geometry'] {'x': -81.01508803280349, 'y': 29.24246579525828, 'spatialReference': {'wkid': 4326, 'latestWkid': 4326}}for doc in docs: print(doc.page_content) {"OBJECTID": 4, "AccessName": "UNIVERSITY BLVD", "AccessID": "DB-048", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "900 BLK N ATLANTIC AV", "MilePost": 13.74, "City": "DAYTONA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694597536000, "DrivingZone": "BOTH"} {"OBJECTID": 18, "AccessName": "BEACHWAY AV", "AccessID": "NS-106", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "1400 N ATLANTIC AV", "MilePost": 1.57, "City": "NEW SMYRNA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694600478000, "DrivingZone": "YES"} {"OBJECTID": 24, "AccessName": "27TH AV", "AccessID": "NS-141", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "3600 BLK S ATLANTIC AV", "MilePost": 4.83, "City": "NEW SMYRNA BEACH", "AccessStatus": "CLOSED FOR HIGH TIDE", "Entry_Date_Time": 1694619363000, "DrivingZone": "BOTH"} {"OBJECTID": 26, "AccessName": "SEABREEZE BLVD", "AccessID": "DB-051", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "500 BLK N ATLANTIC AV", "MilePost": 14.24, "City": "DAYTONA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694597536000, "DrivingZone": "BOTH"} {"OBJECTID": 30, "AccessName": "INTERNATIONAL SPEEDWAY BLVD", "AccessID": "DB-059", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "300 BLK S ATLANTIC AV", "MilePost": 15.27, "City": "DAYTONA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694598638000, "DrivingZone": "BOTH"} {"OBJECTID": 33, "AccessName": "GRANADA BLVD", "AccessID": "OB-030", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "20 BLK OCEAN SHORE BLVD", "MilePost": 10.02, "City": "ORMOND BEACH", "AccessStatus": "4X4 ONLY", "Entry_Date_Time": 1694595424000, "DrivingZone": "BOTH"} {"OBJECTID": 39, "AccessName": "BEACH ST", "AccessID": "PI-097", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "4890 BLK S ATLANTIC AV", "MilePost": 25.85, "City": "PONCE INLET", "AccessStatus": "4X4 ONLY", "Entry_Date_Time": 1694596294000, "DrivingZone": "BOTH"} {"OBJECTID": 44, "AccessName": "SILVER BEACH AV", "AccessID": "DB-064", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "1000 BLK S ATLANTIC AV", "MilePost": 15.98, "City": "DAYTONA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694598638000, "DrivingZone": "YES"} {"OBJECTID": 45, "AccessName": "BOTEFUHR AV", "AccessID": "DBS-067", "AccessType": "OPEN VEHICLE 
RAMP", "GeneralLoc": "1900 BLK S ATLANTIC AV", "MilePost": 16.68, "City": "DAYTONA BEACH SHORES", "AccessStatus": "OPEN", "Entry_Date_Time": 1694598638000, "DrivingZone": "YES"} {"OBJECTID": 46, "AccessName": "MINERVA RD", "AccessID": "DBS-069", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "2300 BLK S ATLANTIC AV", "MilePost": 17.52, "City": "DAYTONA BEACH SHORES", "AccessStatus": "OPEN", "Entry_Date_Time": 1694598638000, "DrivingZone": "YES"} {"OBJECTID": 56, "AccessName": "3RD AV", "AccessID": "NS-118", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "1200 BLK HILL ST", "MilePost": 3.25, "City": "NEW SMYRNA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694600478000, "DrivingZone": "YES"} {"OBJECTID": 65, "AccessName": "MILSAP RD", "AccessID": "OB-037", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "700 BLK S ATLANTIC AV", "MilePost": 11.52, "City": "ORMOND BEACH", "AccessStatus": "4X4 ONLY", "Entry_Date_Time": 1694595749000, "DrivingZone": "YES"} {"OBJECTID": 72, "AccessName": "ROCKEFELLER DR", "AccessID": "OB-034", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "400 BLK S ATLANTIC AV", "MilePost": 10.9, "City": "ORMOND BEACH", "AccessStatus": "CLOSED - SEASONAL", "Entry_Date_Time": 1694591351000, "DrivingZone": "YES"} {"OBJECTID": 74, "AccessName": "DUNLAWTON BLVD", "AccessID": "DBS-078", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "3400 BLK S ATLANTIC AV", "MilePost": 20.61, "City": "DAYTONA BEACH SHORES", "AccessStatus": "OPEN", "Entry_Date_Time": 1694601124000, "DrivingZone": "YES"} {"OBJECTID": 77, "AccessName": "EMILIA AV", "AccessID": "DBS-082", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "3790 BLK S ATLANTIC AV", "MilePost": 21.38, "City": "DAYTONA BEACH SHORES", "AccessStatus": "OPEN", "Entry_Date_Time": 1694601124000, "DrivingZone": "BOTH"} {"OBJECTID": 84, "AccessName": "VAN AV", "AccessID": "DBS-075", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "3100 BLK S ATLANTIC AV", "MilePost": 19.6, "City": "DAYTONA BEACH SHORES", "AccessStatus": "OPEN", "Entry_Date_Time": 1694601124000, "DrivingZone": "YES"} {"OBJECTID": 104, "AccessName": "HARVARD DR", "AccessID": "OB-038", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "900 BLK S ATLANTIC AV", "MilePost": 11.72, "City": "ORMOND BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694597536000, "DrivingZone": "YES"} {"OBJECTID": 106, "AccessName": "WILLIAMS AV", "AccessID": "DB-042", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "2200 BLK N ATLANTIC AV", "MilePost": 12.5, "City": "DAYTONA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694597536000, "DrivingZone": "YES"} {"OBJECTID": 109, "AccessName": "HARTFORD AV", "AccessID": "DB-043", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "1890 BLK N ATLANTIC AV", "MilePost": 12.76, "City": "DAYTONA BEACH", "AccessStatus": "CLOSED - SEASONAL", "Entry_Date_Time": 1694591351000, "DrivingZone": "YES"} {"OBJECTID": 138, "AccessName": "CRAWFORD RD", "AccessID": "NS-108", "AccessType": "OPEN VEHICLE RAMP - PASS", "GeneralLoc": "800 BLK N ATLANTIC AV", "MilePost": 2.19, "City": "NEW SMYRNA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694600478000, "DrivingZone": "YES"} {"OBJECTID": 140, "AccessName": "FLAGLER AV", "AccessID": "NS-110", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "500 BLK FLAGLER AV", "MilePost": 2.57, "City": "NEW SMYRNA BEACH", "AccessStatus": "OPEN", "Entry_Date_Time": 1694600478000, "DrivingZone": "YES"} {"OBJECTID": 144, "AccessName": "CARDINAL DR", "AccessID": "OB-036", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "600 
BLK S ATLANTIC AV", "MilePost": 11.27, "City": "ORMOND BEACH", "AccessStatus": "4X4 ONLY", "Entry_Date_Time": 1694595749000, "DrivingZone": "YES"} {"OBJECTID": 174, "AccessName": "EL PORTAL ST", "AccessID": "DBS-076", "AccessType": "OPEN VEHICLE RAMP", "GeneralLoc": "3200 BLK S ATLANTIC AV", "MilePost": 20.04, "City": "DAYTONA BEACH SHORES", "AccessStatus": "OPEN", "Entry_Date_Time": 1694601124000, "DrivingZone": "YES"}PreviousApify DatasetNextArxivRetrieving Geometries
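The top of the ArcGIS page above notes that an arcgis.gis.GIS object can be used for authenticated data loading. A minimal sketch, assuming the loader accepts such an object via a gis keyword argument; the portal URL and credentials below are placeholders.

from arcgis.gis import GIS

from langchain.document_loaders import ArcGISLoader

# Authenticate against an ArcGIS portal; omitting this falls back to
# anonymous access to public data.
gis = GIS(
    url="https://your-org.maps.arcgis.com",  # placeholder portal URL
    username="<username>",
    password="<password>",
)

url = "https://maps1.vcgov.org/arcgis/rest/services/Beaches/MapServer/7"
loader = ArcGISLoader(url, gis=gis)
docs = loader.load()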
452
https://python.langchain.com/docs/integrations/document_loaders/arxiv
ComponentsDocument loadersArxivOn this pageArxivarXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.This notebook shows how to load scientific articles from Arxiv.org into a document format that we can use downstream.Installation​First, you need to install the arxiv python package.#!pip install arxivSecond, you need to install the PyMuPDF python package, which transforms PDF files downloaded from the arxiv.org site into text format.#!pip install pymupdfExamples​ArxivLoader has these arguments:query: free text used to find documents on arXivoptional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments.optional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (date when document was published/last updated), Title, Authors, Summary. If True, other fields are also downloaded (see the sketch below).from langchain.document_loaders import ArxivLoaderdocs = ArxivLoader(query="1605.08386", load_max_docs=2).load()len(docs)docs[0].metadata # meta-information of the Document {'Published': '2016-05-26', 'Title': 'Heat-bath random walks with Markov bases', 'Authors': 'Caprice Stanley, Tobias Windisch', 'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'}docs[0].page_content[:400] # all pages of the Document content 'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on fibers of a\nfixed integer matrix can be bounded from above by a constant. We then study the mixing\nbehaviour of heat-b'PreviousArcGISNextAssemblyAI Audio TranscriptsInstallationExamples
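A short sketch combining the optional arguments documented above; with load_all_available_meta=True, the metadata should contain the extra fields in addition to Published, Title, Authors, and Summary.

from langchain.document_loaders import ArxivLoader

docs = ArxivLoader(
    query="heat-bath random walks",  # free-text query
    load_max_docs=2,                 # keep the download small
    load_all_available_meta=True,    # include the extra metadata fields
).load()

# inspect which metadata fields came back
print(sorted(docs[0].metadata.keys()))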
453
https://python.langchain.com/docs/integrations/document_loaders/assemblyai
ComponentsDocument loadersAssemblyAI Audio TranscriptsOn this pageAssemblyAI Audio TranscriptsThe AssemblyAIAudioTranscriptLoader allows you to transcribe audio files with the AssemblyAI API and load the transcribed text into documents.To use it, you should have the assemblyai python package installed, and the environment variable ASSEMBLYAI_API_KEY set with your API key. Alternatively, the API key can also be passed as an argument.More info about AssemblyAI:WebsiteGet a Free API keyAssemblyAI API DocsInstallation​First, you need to install the assemblyai python package.You can find more info about it inside the assemblyai-python-sdk GitHub repo.#!pip install assemblyaiExample​The AssemblyAIAudioTranscriptLoader needs at least the file_path argument. Audio files can be specified as a URL or a local file path.from langchain.document_loaders import AssemblyAIAudioTranscriptLoaderaudio_file = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"# or a local file path: audio_file = "./nbc.mp3"loader = AssemblyAIAudioTranscriptLoader(file_path=audio_file)docs = loader.load()Note: Calling loader.load() blocks until the transcription is finished.The transcribed text is available in the page_content:docs[0].page_content"Load time, a new president and new congressional makeup. Same old ..."The metadata contains the full JSON response with more meta information:docs[0].metadata{'language_code': <LanguageCode.en_us: 'en_us'>, 'audio_url': 'https://storage.googleapis.com/aai-docs-samples/nbc.mp3', 'punctuate': True, 'format_text': True, ...}Transcript Formats​You can specify the transcript_format argument for different formats.Depending on the format, one or more documents are returned. These are the different TranscriptFormat options:TEXT: One document with the transcription textSENTENCES: Multiple documents, splits the transcription by each sentencePARAGRAPHS: Multiple documents, splits the transcription by each paragraphSUBTITLES_SRT: One document with the transcript exported in SRT subtitles formatSUBTITLES_VTT: One document with the transcript exported in VTT subtitles formatfrom langchain.document_loaders.assemblyai import TranscriptFormatloader = AssemblyAIAudioTranscriptLoader( file_path="./your_file.mp3", transcript_format=TranscriptFormat.SENTENCES,)docs = loader.load()Transcription Config​You can also specify the config argument to use different audio intelligence models.Visit the AssemblyAI API Documentation to get an overview of all available models!import assemblyai as aaiconfig = aai.TranscriptionConfig(speaker_labels=True, auto_chapters=True, entity_detection=True)loader = AssemblyAIAudioTranscriptLoader( file_path="./your_file.mp3", config=config)Pass the API Key as argument​In addition to setting the API key as the environment variable ASSEMBLYAI_API_KEY, it is also possible to pass it as an argument.loader = AssemblyAIAudioTranscriptLoader( file_path="./your_file.mp3", api_key="YOUR_KEY")PreviousArxivNextAsync ChromiumInstallationExampleTranscript FormatsTranscription ConfigPass the API Key as argument
454
https://python.langchain.com/docs/integrations/document_loaders/async_chromium
ComponentsDocument loadersAsync ChromiumAsync ChromiumChromium is one of the browsers supported by Playwright, a library used to control browser automation. By running p.chromium.launch(headless=True), we are launching a headless instance of Chromium. Headless mode means that the browser is running without a graphical user interface.AsyncChromiumLoader loads the page, and then we use Html2TextTransformer to transform it to text.pip install -q playwright beautifulsoup4 playwright installfrom langchain.document_loaders import AsyncChromiumLoaderurls = ["https://www.wsj.com"]loader = AsyncChromiumLoader(urls)docs = loader.load()docs[0].page_content[0:100] '<!DOCTYPE html><html lang="en"><head><script src="https://s0.2mdn.net/instream/video/client.js" asyn'from langchain.document_transformers import Html2TextTransformerhtml2text = Html2TextTransformer()docs_transformed = html2text.transform_documents(docs)docs_transformed[0].page_content[0:500] "Skip to Main ContentSkip to SearchSkip to... Select * Top News * What's News *\nFeatured Stories * Retirement * Life & Arts * Hip-Hop * Sports * Video *\nEconomy * Real Estate * Sports * CMO * CIO * CFO * Risk & Compliance *\nLogistics Report * Sustainable Business * Heard on the Street * Barron’s *\nMarketWatch * Mansion Global * Penta * Opinion * Journal Reports * Sponsored\nOffers Explore Our Brands * WSJ * * * * * Barron's * * * * * MarketWatch * * *\n* * IBD # The Wall Street Journal SubscribeSig"PreviousAssemblyAI Audio TranscriptsNextAsyncHtml
455
https://python.langchain.com/docs/integrations/document_loaders/async_html
ComponentsDocument loadersAsyncHtmlAsyncHtmlAsyncHtmlLoader loads raw HTML from a list of URLs concurrently.from langchain.document_loaders import AsyncHtmlLoaderurls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"]loader = AsyncHtmlLoader(urls)docs = loader.load() Fetching pages: 100%|############| 2/2 [00:00<00:00, 9.96it/s]docs[0].page_content[1000:2000] ' news. Stream exclusive games on ESPN+ and play fantasy sports." />\n<meta property="og:image" content="https://a1.espncdn.com/combiner/i?img=%2Fi%2Fespn%2Fespn_logos%2Fespn_red.png"/>\n<meta property="og:image:width" content="1200" />\n<meta property="og:image:height" content="630" />\n<meta property="og:type" content="website" />\n<meta name="twitter:site" content="espn" />\n<meta name="twitter:url" content="https://www.espn.com" />\n<meta name="twitter:title" content="ESPN - Serving Sports Fans. Anytime. Anywhere."/>\n<meta name="twitter:description" content="Visit ESPN for live scores, highlights and sports news. Stream exclusive games on ESPN+ and play fantasy sports." />\n<meta name="twitter:card" content="summary">\n<meta name="twitter:app:name:iphone" content="ESPN"/>\n<meta name="twitter:app:id:iphone" content="317469184"/>\n<meta name="twitter:app:name:googleplay" content="ESPN"/>\n<meta name="twitter:app:id:googleplay" content="com.espn.score_center"/>\n<meta name="title" content="ESPN - 'docs[1].page_content[1000:2000] 'al" href="https://lilianweng.github.io/posts/2023-06-23-agent/" />\n<link crossorigin="anonymous" href="/assets/css/stylesheet.min.67a6fb6e33089cb29e856bcc95d7aa39f70049a42b123105531265a0d9f1258b.css" integrity="sha256-Z6b7bjMInLKehWvMldeqOfcASaQrEjEFUxJloNnxJYs=" rel="preload stylesheet" as="style">\n<script defer crossorigin="anonymous" src="/assets/js/highlight.min.7680afc38aa6b15ddf158a4f3780b7b1f7dde7e91d26f073e6229bb7a0793c92.js" integrity="sha256-doCvw4qmsV3fFYpPN4C3sffd5&#43;kdJvBz5iKbt6B5PJI="\n onload="hljs.initHighlightingOnLoad();"></script>\n<link rel="icon" href="https://lilianweng.github.io/favicon_peach.ico">\n<link rel="icon" type="image/png" sizes="16x16" href="https://lilianweng.github.io/favicon-16x16.png">\n<link rel="icon" type="image/png" sizes="32x32" href="https://lilianweng.github.io/favicon-32x32.png">\n<link rel="apple-touch-icon" href="https://lilianweng.github.io/apple-touch-icon.png">\n<link rel="mask-icon" href="https://lilianweng.github.io/safari-pinned-tab.'PreviousAsync ChromiumNextAWS S3 Directory
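Since AsyncHtmlLoader returns raw HTML, a natural follow-up (mirroring the Async Chromium page above) is to convert it to plain text with Html2TextTransformer. A minimal sketch using the same URLs as the page:

from langchain.document_loaders import AsyncHtmlLoader
from langchain.document_transformers import Html2TextTransformer

urls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"]
docs = AsyncHtmlLoader(urls).load()

# strip the markup and keep the readable text
html2text = Html2TextTransformer()
docs_transformed = html2text.transform_documents(docs)
print(docs_transformed[0].page_content[0:200])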
456
https://python.langchain.com/docs/integrations/document_loaders/aws_s3_directory
ComponentsDocument loadersAWS S3 DirectoryOn this pageAWS S3 DirectoryAmazon Simple Storage Service (Amazon S3) is an object storage serviceAWS S3 DirectoryThis covers how to load document objects from an AWS S3 Directory object.#!pip install boto3from langchain.document_loaders import S3DirectoryLoaderloader = S3DirectoryLoader("testing-hwc")loader.load()Specifying a prefix​You can also specify a prefix for more fine-grained control over what files to load.loader = S3DirectoryLoader("testing-hwc", prefix="fake")loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 's3://testing-hwc/fake.docx'}, lookup_index=0)]Configuring the AWS Boto3 client​You can configure the AWS Boto3 client by passing named arguments when creating the S3DirectoryLoader. This is useful, for instance, when AWS credentials can't be set as environment variables. See the list of parameters that can be configured.loader = S3DirectoryLoader("testing-hwc", aws_access_key_id="xxxx", aws_secret_access_key="yyyy")loader.load()PreviousAsyncHtmlNextAWS S3 FileSpecifying a prefixConfiguring the AWS Boto3 client
457
https://python.langchain.com/docs/integrations/document_loaders/aws_s3_file
ComponentsDocument loadersAWS S3 FileOn this pageAWS S3 FileAmazon Simple Storage Service (Amazon S3) is an object storage service.AWS S3 BucketsThis covers how to load document objects from an AWS S3 File object.from langchain.document_loaders import S3FileLoader#!pip install boto3loader = S3FileLoader("testing-hwc", "fake.docx")loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 's3://testing-hwc/fake.docx'}, lookup_index=0)]Configuring the AWS Boto3 client​You can configure the AWS Boto3 client by passing named arguments when creating the S3FileLoader. This is useful, for instance, when AWS credentials can't be set as environment variables. See the list of parameters that can be configured.loader = S3FileLoader("testing-hwc", "fake.docx", aws_access_key_id="xxxx", aws_secret_access_key="yyyy")loader.load()PreviousAWS S3 DirectoryNextAZLyricsConfiguring the AWS Boto3 client
458
https://python.langchain.com/docs/integrations/document_loaders/azlyrics
ComponentsDocument loadersAZLyricsAZLyricsAZLyrics is a large, legal, ever-growing collection of lyrics.This covers how to load AZLyrics webpages into a document format that we can use downstream.from langchain.document_loaders import AZLyricsLoaderloader = AZLyricsLoader("https://www.azlyrics.com/lyrics/mileycyrus/flowers.html")data = loader.load()data [Document(page_content="Miley Cyrus - Flowers Lyrics | AZLyrics.com\n\r\nWe were good, we were gold\nKinda dream that can't be sold\nWe were right till we weren't\nBuilt a home and watched it burn\n\nI didn't wanna leave you\nI didn't wanna lie\nStarted to cry but then remembered I\n\nI can buy myself flowers\nWrite my name in the sand\nTalk to myself for hours\nSay things you don't understand\nI can take myself dancing\nAnd I can hold my own hand\nYeah, I can love me better than you can\n\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby\n\nPaint my nails, cherry red\nMatch the roses that you left\nNo remorse, no regret\nI forgive every word you said\n\nI didn't wanna leave you, baby\nI didn't wanna fight\nStarted to cry but then remembered I\n\nI can buy myself flowers\nWrite my name in the sand\nTalk to myself for hours, yeah\nSay things you don't understand\nI can take myself dancing\nAnd I can hold my own hand\nYeah, I can love me better than you can\n\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby\nCan love me better\nI\n\nI didn't wanna wanna leave you\nI didn't wanna fight\nStarted to cry but then remembered I\n\nI can buy myself flowers\nWrite my name in the sand\nTalk to myself for hours (Yeah)\nSay things you don't understand\nI can take myself dancing\nAnd I can hold my own hand\nYeah, I can love me better than\nYeah, I can love me better than you can, uh\n\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby (Than you can)\nCan love me better\nI can love me better, baby\nCan love me better\nI\n", lookup_str='', metadata={'source': 'https://www.azlyrics.com/lyrics/mileycyrus/flowers.html'}, lookup_index=0)]PreviousAWS S3 FileNextAzure Blob Storage Container
459
https://python.langchain.com/docs/integrations/document_loaders/azure_blob_storage_container
ComponentsDocument loadersAzure Blob Storage ContainerOn this pageAzure Blob Storage ContainerAzure Blob Storage is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data.Azure Blob Storage is designed for:Serving images or documents directly to a browser.Storing files for distributed access.Streaming video and audio.Writing to log files.Storing data for backup and restore, disaster recovery, and archiving.Storing data for analysis by an on-premises or Azure-hosted service.This notebook covers how to load document objects from a container on Azure Blob Storage.#!pip install azure-storage-blobfrom langchain.document_loaders import AzureBlobStorageContainerLoaderloader = AzureBlobStorageContainerLoader(conn_str="<conn_str>", container="<container>")loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpaa9xl6ch/fake.docx'}, lookup_index=0)]Specifying a prefix​You can also specify a prefix for more fine-grained control over what files to load.loader = AzureBlobStorageContainerLoader( conn_str="<conn_str>", container="<container>", prefix="<prefix>")loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)]PreviousAZLyricsNextAzure Blob Storage FileSpecifying a prefix
460
https://python.langchain.com/docs/integrations/document_loaders/azure_blob_storage_file
ComponentsDocument loadersAzure Blob Storage FileAzure Blob Storage FileAzure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol, Network File System (NFS) protocol, and Azure Files REST API.This covers how to load document objects from Azure Files.#!pip install azure-storage-blobfrom langchain.document_loaders import AzureBlobStorageFileLoaderloader = AzureBlobStorageFileLoader( conn_str="<connection string>", container="<container name>", blob_name="<blob name>",)loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpxvave6wl/fake.docx'}, lookup_index=0)]PreviousAzure Blob Storage ContainerNextAzure Document Intelligence
461
https://python.langchain.com/docs/integrations/document_loaders/azure_document_intelligence
ComponentsDocument loadersAzure Document IntelligenceOn this pageAzure Document IntelligenceAzure Document Intelligence (formerly known as Azure Forms Recognizer) is a machine-learning-based service that extracts text (including handwriting), tables, or key-value pairs from scanned documents or images.The current implementation of this loader, which uses Document Intelligence, incorporates content page-wise and turns it into LangChain documents.Document Intelligence supports PDF, JPEG, PNG, BMP, or TIFF.Further documentation is available at https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/?view=doc-intel-3.1.0.%pip install langchain azure-ai-formrecognizer -qExample 1​The first example uses a local file, which will be sent to Azure Document Intelligence.First, an instance of a DocumentAnalysisClient is created with the endpoint and key for the Azure service. from azure.ai.formrecognizer import DocumentAnalysisClientfrom azure.core.credentials import AzureKeyCredentialdocument_analysis_client = DocumentAnalysisClient( endpoint="<service_endpoint>", credential=AzureKeyCredential("<service_key>") )With the initialized document analysis client, we can proceed to create an instance of the DocumentIntelligenceLoader:from langchain.document_loaders.pdf import DocumentIntelligenceLoaderloader = DocumentIntelligenceLoader( "<Local_filename>", client=document_analysis_client, model="<model_name>") # e.g. prebuilt-documentdocuments = loader.load()The output contains each page of the source document as a LangChain document: documents [Document(page_content='...', metadata={'source': '...', 'page': 1})]PreviousAzure Blob Storage FileNextBibTeXExample 1
462
https://python.langchain.com/docs/integrations/document_loaders/bibtex
ComponentsDocument loadersBibTeXOn this pageBibTeXBibTeX is a file format and reference management system commonly used in conjunction with LaTeX typesetting. It serves as a way to organize and store bibliographic information for academic and research documents.BibTeX files have a .bib extension and consist of plain text entries representing references to various publications, such as books, articles, conference papers, theses, and more. Each BibTeX entry follows a specific structure and contains fields for different bibliographic details like author names, publication title, journal or book title, year of publication, page numbers, and more.Bibtex files can also store the path to documents, such as .pdf files that can be retrieved.Installation​First, you need to install bibtexparser and PyMuPDF.#!pip install bibtexparser pymupdfExamples​BibtexLoader has these arguments:file_path: the path to the .bib bibtex fileoptional max_docs: default=None, i.e. no limit. Use it to limit the number of retrieved documents.optional max_content_chars: default=4000. Use it to limit the number of characters in a single document.optional load_extra_meta: default=False. By default only the most important fields from the bibtex entries are loaded: Published (publication year), Title, Authors, Summary, Journal, Keywords, and URL. If True, it will also try to load entry_id, note, doi, and links fields. optional file_pattern: default=r'[^:]+\.pdf'. Regex pattern to find files in the file entry. Default pattern supports Zotero flavour bibtex style and bare file path. A sketch using these optional arguments follows this page.from langchain.document_loaders import BibtexLoader# Create a dummy bibtex file and download a pdf.import urllib.requesturllib.request.urlretrieve( "https://www.fourmilab.ch/etexts/einstein/specrel/specrel.pdf", "einstein1905.pdf")bibtex_text = """ @article{einstein1915, title={Die Feldgleichungen der Gravitation}, abstract={Die Grundgleichungen der Gravitation, die ich hier entwickeln werde, wurden von mir in einer Abhandlung: ,,Die formale Grundlage der allgemeinen Relativit{\"a}tstheorie`` in den Sitzungsberichten der Preu{\ss}ischen Akademie der Wissenschaften 1915 ver{\"o}ffentlicht.}, author={Einstein, Albert}, journal={Sitzungsberichte der K{\"o}niglich Preu{\ss}ischen Akademie der Wissenschaften}, volume={1915}, number={1}, pages={844--847}, year={1915}, doi={10.1002/andp.19163540702}, link={https://onlinelibrary.wiley.com/doi/abs/10.1002/andp.19163540702}, file={einstein1905.pdf} } """# save bibtex_text to biblio.bib filewith open("./biblio.bib", "w") as file: file.write(bibtex_text)docs = BibtexLoader("./biblio.bib").load()docs[0].metadata {'id': 'einstein1915', 'published_year': '1915', 'title': 'Die Feldgleichungen der Gravitation', 'publication': 'Sitzungsberichte der K{"o}niglich Preu{\\ss}ischen Akademie der Wissenschaften', 'authors': 'Einstein, Albert', 'abstract': 'Die Grundgleichungen der Gravitation, die ich hier entwickeln werde, wurden von mir in einer Abhandlung: ,,Die formale Grundlage der allgemeinen Relativit{"a}tstheorie`` in den Sitzungsberichten der Preu{\\ss}ischen Akademie der Wissenschaften 1915 ver{"o}ffentlicht.', 'url': 'https://doi.org/10.1002/andp.19163540702'}print(docs[0].page_content[:400]) # all pages of the pdf content ON THE ELECTRODYNAMICS OF MOVING BODIES By A. EINSTEIN June 30, 1905 It is known that Maxwell’s electrodynamics—as usually understood at the present time—when applied to moving bodies, leads to asymmetries which do not appear to be inherent in the phenomena. 
Take, for example, the recipro- cal electrodynamic action of a magnet and a conductor. The observable phe- nomenon here depends only on the rPreviousAzure Document IntelligenceNextBiliBiliInstallationExamples
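A minimal sketch of the optional arguments listed above, applied to the same biblio.bib file; file_pattern is simply the documented default written out explicitly, and max_docs and max_content_chars cap how much is retrieved.

from langchain.document_loaders import BibtexLoader

loader = BibtexLoader(
    "./biblio.bib",
    max_docs=1,                  # stop after one entry
    max_content_chars=1000,      # truncate long PDF content
    file_pattern=r"[^:]+\.pdf",  # documented default, shown explicitly
)
docs = loader.load()
print(len(docs), len(docs[0].page_content))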
463
https://python.langchain.com/docs/integrations/document_loaders/bilibili
ComponentsDocument loadersBiliBiliBiliBiliBilibili is one of the most beloved long-form video sites in China.This loader utilizes the bilibili-api to fetch the text transcript from Bilibili.With this BiliBiliLoader, users can easily obtain the transcript of their desired video content on the platform.#!pip install bilibili-api-pythonfrom langchain.document_loaders import BiliBiliLoaderloader = BiliBiliLoader(["https://www.bilibili.com/video/BV1xt411o7Xu/"])loader.load()PreviousBibTeXNextBlackboard
464
https://python.langchain.com/docs/integrations/document_loaders/blackboard
ComponentsDocument loadersBlackboardBlackboardBlackboard Learn (previously the Blackboard Learning Management System) is a web-based virtual learning environment and learning management system developed by Blackboard Inc. The software features course management, customizable open architecture, and scalable design that allows integration with student information systems and authentication protocols. It may be installed on local servers, hosted by Blackboard ASP Solutions, or provided as Software as a Service hosted on Amazon Web Services. Its main purposes are stated to include the addition of online elements to courses traditionally delivered face-to-face and development of completely online courses with few or no face-to-face meetingsThis covers how to load data from a Blackboard Learn instance.This loader is not compatible with all Blackboard courses. It is only compatible with courses that use the new Blackboard interface. To use this loader, you must have the BbRouter cookie. You can get this cookie by logging into the course and then copying the value of the BbRouter cookie from the browser's developer tools.from langchain.document_loaders import BlackboardLoaderloader = BlackboardLoader( blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1", bbrouter="expires:12345...", load_all_recursively=True,)documents = loader.load()PreviousBiliBiliNextBlockchain
465
https://python.langchain.com/docs/integrations/document_loaders/blockchain
ComponentsDocument loadersBlockchainOn this pageBlockchainOverview​The intention of this notebook is to provide a means of testing functionality in the Langchain Document Loader for Blockchain.Initially this Loader supports:Loading NFTs as Documents from NFT Smart Contracts (ERC721 and ERC1155)Ethereum Mainnet, Ethereum Testnet, Polygon Mainnet, Polygon Testnet (default is eth-mainnet)Alchemy's getNFTsForCollection APIIt can be extended if the community finds value in this loader. Specifically:Additional APIs can be added (e.g. transaction-related APIs)This Document Loader Requires:A free Alchemy API KeyThe output takes the following format:pageContent= Individual NFTmetadata={'source': '0x1a92f7381b9f03921564a437210bb9396471050c', 'blockchain': 'eth-mainnet', 'tokenId': '0x15'})Load NFTs into Document Loader​# get ALCHEMY_API_KEY from https://www.alchemy.com/alchemyApiKey = "..."Option 1: Ethereum Mainnet (default BlockchainType)​from langchain.document_loaders.blockchain import ( BlockchainDocumentLoader, BlockchainType,)contractAddress = "0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d" # Bored Ape Yacht Club contract addressblockchainType = BlockchainType.ETH_MAINNET # default value, optional parameterblockchainLoader = BlockchainDocumentLoader( contract_address=contractAddress, api_key=alchemyApiKey)nfts = blockchainLoader.load()nfts[:2]Option 2: Polygon Mainnet​contractAddress = ( "0x448676ffCd0aDf2D85C1f0565e8dde6924A9A7D9" # Polygon Mainnet contract address)blockchainType = BlockchainType.POLYGON_MAINNETblockchainLoader = BlockchainDocumentLoader( contract_address=contractAddress, blockchainType=blockchainType, api_key=alchemyApiKey,)nfts = blockchainLoader.load()nfts[:2]PreviousBlackboardNextBrave SearchOverviewLoad NFTs into Document LoaderOption 1: Ethereum Mainnet (default BlockchainType)Option 2: Polygon Mainnet
466
https://python.langchain.com/docs/integrations/document_loaders/brave_search
ComponentsDocument loadersBrave SearchOn this pageBrave SearchBrave Search is a search engine developed by Brave Software.Brave Search uses its own web index. As of May 2022, it covered over 10 billion pages and was used to serve 92% of search results without relying on any third parties, with the remainder being retrieved server-side from the Bing API or (on an opt-in basis) client-side from Google. According to Brave, the index was kept "intentionally smaller than that of Google or Bing" in order to help avoid spam and other low-quality content, with the disadvantage that "Brave Search is not yet as good as Google in recovering long-tail queries."Brave Search Premium: As of April 2023 Brave Search is an ad-free website, but it will eventually switch to a new model that will include ads, and premium users will get an ad-free experience. User data, including IP addresses, won't be collected by default. A premium account will be required for opt-in data collection.Installation and Setup​To get access to the Brave Search API, you need to create an account and get an API key.api_key = "..."from langchain.document_loaders import BraveSearchLoaderExample​loader = BraveSearchLoader( query="obama middle name", api_key=api_key, search_kwargs={"count": 3})docs = loader.load()len(docs) 3[doc.metadata for doc in docs] [{'title': "Obama's Middle Name -- My Last Name -- is 'Hussein.' So?", 'link': 'https://www.cair.com/cair_in_the_news/obamas-middle-name-my-last-name-is-hussein-so/'}, {'title': "What's up with Obama's middle name? - Quora", 'link': 'https://www.quora.com/Whats-up-with-Obamas-middle-name'}, {'title': 'Barack Obama | Biography, Parents, Education, Presidency, Books, ...', 'link': 'https://www.britannica.com/biography/Barack-Obama'}][doc.page_content for doc in docs] ['I wasn’t sure whether to laugh or cry a few days back listening to radio talk show host Bill Cunningham repeatedly scream Barack <strong>Obama</strong>’<strong>s</strong> <strong>middle</strong> <strong>name</strong> — my last <strong>name</strong> — as if he had anti-Muslim Tourette’s. “Hussein,” Cunningham hissed like he was beckoning Satan when shouting the ...', 'Answer (1 of 15): A better question would be, “What’s up with <strong>Obama</strong>’s first <strong>name</strong>?” President Barack Hussein <strong>Obama</strong>’s father’s <strong>name</strong> was Barack Hussein <strong>Obama</strong>. He was <strong>named</strong> after his father. Hussein, <strong>Obama</strong>’<strong>s</strong> <strong>middle</strong> <strong>name</strong>, is a very common Arabic <strong>name</strong>, meaning &quot;good,&quot; &quot;handsome,&quot; or ...', 'Barack <strong>Obama</strong>, in full Barack Hussein <strong>Obama</strong> II, (born August 4, 1961, Honolulu, Hawaii, U.S.), 44th president of the United States (2009–17) and the first African American to hold the office. Before winning the presidency, <strong>Obama</strong> represented Illinois in the U.S.']PreviousBlockchainNextBrowserlessInstallation and SetupExample
467
https://python.langchain.com/docs/integrations/document_loaders/browserless
ComponentsDocument loadersBrowserlessBrowserlessBrowserless is a service that allows you to run headless Chrome instances in the cloud. It's a great way to run browser-based automation at scale without having to worry about managing your own infrastructure.To use Browserless as a document loader, initialize a BrowserlessLoader instance as shown in this notebook. Note that by default, BrowserlessLoader returns the innerText of the page's body element. To disable this and get the raw HTML, set text_content to False.from langchain.document_loaders import BrowserlessLoaderBROWSERLESS_API_TOKEN = "YOUR_BROWSERLESS_API_TOKEN"loader = BrowserlessLoader( api_token=BROWSERLESS_API_TOKEN, urls=[ "https://en.wikipedia.org/wiki/Document_classification", ], text_content=True,)documents = loader.load()print(documents[0].page_content[:1000]) Jump to content Main menu Search Create account Log in Personal tools Toggle the table of contents Document classification 17 languages Article Talk Read Edit View history Tools From Wikipedia, the free encyclopedia Document classification or document categorization is a problem in library science, information science and computer science. The task is to assign a document to one or more classes or categories. This may be done "manually" (or "intellectually") or algorithmically. The intellectual classification of documents has mostly been the province of library science, while the algorithmic classification of documents is mainly in information science and computer science. The problems are overlapping, however, and there is therefore interdisciplinary research on document classification. The documents to be classified may be texts, images, music, etc. Each kind of document possesses its special classification problems. When not otherwise specified, text classification is implied. DoPreviousBrave SearchNextChatGPT Data
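A short sketch of the raw-HTML mode described above: with text_content=False, the loader should return the page's HTML rather than the innerText of its body.

from langchain.document_loaders import BrowserlessLoader

BROWSERLESS_API_TOKEN = "YOUR_BROWSERLESS_API_TOKEN"

raw_loader = BrowserlessLoader(
    api_token=BROWSERLESS_API_TOKEN,
    urls=["https://en.wikipedia.org/wiki/Document_classification"],
    text_content=False,  # return raw HTML instead of the body's innerText
)
raw_docs = raw_loader.load()
print(raw_docs[0].page_content[:100])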
468
https://python.langchain.com/docs/integrations/document_loaders/chatgpt_loader
ComponentsDocument loadersChatGPT DataChatGPT DataChatGPT is an artificial intelligence (AI) chatbot developed by OpenAI.This notebook covers how to load conversations.json from your ChatGPT data export folder.You can get your data export by email by going to: https://chat.openai.com/ -> (Profile) - Settings -> Export data -> Confirm export.from langchain.document_loaders.chatgpt import ChatGPTLoaderloader = ChatGPTLoader(log_file="./example_data/fake_conversations.json", num_logs=1)loader.load() [Document(page_content="AI Overlords - AI on 2065-01-24 05:20:50: Greetings, humans. I am Hal 9000. You can trust me completely.\n\nAI Overlords - human on 2065-01-24 05:21:20: Nice to meet you, Hal. I hope you won't develop a mind of your own.\n\n", metadata={'source': './example_data/fake_conversations.json'})]PreviousBrowserlessNextCollege Confidential
469
https://python.langchain.com/docs/integrations/document_loaders/college_confidential
ComponentsDocument loadersCollege ConfidentialCollege ConfidentialCollege Confidential gives information on 3,800+ colleges and universities.This covers how to load College Confidential webpages into a document format that we can use downstream.from langchain.document_loaders import CollegeConfidentialLoaderloader = CollegeConfidentialLoader( "https://www.collegeconfidential.com/colleges/brown-university/")data = loader.load()data [Document(page_content='\n\n\n\n\n\n\n\nA68FEB02-9D19-447C-B8BC-818149FD6EAF\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Media (2)\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nE45B8B13-33D4-450E-B7DB-F66EFE8F2097\n\n\n\n\n\n\n\n\n\nE45B8B13-33D4-450E-B7DB-F66EFE8F2097\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAbout Brown\n\n\n\n\n\n\nBrown University Overview\nBrown University is a private, nonprofit school in the urban setting of Providence, Rhode Island. Brown was founded in 1764 and the school currently enrolls around 10,696 students a year, including 7,349 undergraduates. Brown provides on-campus housing for students. Most students live in off campus housing.\n📆 Mark your calendar! January 5, 2023 is the final deadline to submit an application for the Fall 2023 semester. \nThere are many ways for students to get involved at Brown! \nLove music or performing? Join a campus band, sing in a chorus, or perform with one of the school\'s theater groups.\nInterested in journalism or communications? Brown students can write for the campus newspaper, host a radio show or be a producer for the student-run television channel.\nInterested in joining a fraternity or sorority? Brown has fraternities and sororities.\nPlanning to play sports? Brown has many options for athletes. See them all and learn more about life at Brown on the Student Life page.\n\n\n\n2022 Brown Facts At-A-Glance\n\n\n\n\n\nAcademic Calendar\nOther\n\n\nOverall Acceptance Rate\n6%\n\n\nEarly Decision Acceptance Rate\n16%\n\n\nEarly Action Acceptance Rate\nEA not offered\n\n\nApplicants Submitting SAT scores\n51%\n\n\nTuition\n$62,680\n\n\nPercent of Need Met\n100%\n\n\nAverage First-Year Financial Aid Package\n$59,749\n\n\n\n\nIs Brown a Good School?\n\nDifferent people have different ideas about what makes a "good" school. Some factors that can help you determine what a good school for you might be include admissions criteria, acceptance rate, tuition costs, and more.\nLet\'s take a look at these factors to get a clearer sense of what Brown offers and if it could be the right college for you.\nBrown Acceptance Rate 2022\nIt is extremely difficult to get into Brown. Around 6% of applicants get into Brown each year. In 2022, just 2,568 out of the 46,568 students who applied were accepted.\nRetention and Graduation Rates at Brown\nRetention refers to the number of students that stay enrolled at a school over time. This is a way to get a sense of how satisfied students are with their school experience, and if they have the support necessary to succeed in college. \nApproximately 98% of first-year, full-time undergrads who start at Browncome back their sophomore year. 95% of Brown undergrads graduate within six years. The average six-year graduation rate for U.S. colleges and universities is 61% for public schools, and 67% for private, non-profit schools.\nJob Outcomes for Brown Grads\nJob placement stats are a good resource for understanding the value of a degree from Brown by providing a look on how job placement has gone for other grads. 
\nCheck with Brown directly, for information on any information on starting salaries for recent grads.\nBrown\'s Endowment\nAn endowment is the total value of a school\'s investments, donations, and assets. Endowment is not necessarily an indicator of the quality of a school, but it can give you a sense of how much money a college can afford to invest in expanding programs, improving facilities, and support students. \nAs of 2022, the total market value of Brown University\'s endowment was $4.7 billion. The average college endowment was $905 million in 2021. The school spends $34,086 for each full-time student enrolled. \nTuition and Financial Aid at Brown\nTuition is another important factor when choose a college. Some colleges may have high tuition, but do a better job at meeting students\' financial need.\nBrown meets 100% of the demonstrated financial need for undergraduates. The average financial aid package for a full-time, first-year student is around $59,749 a year. \nThe average student debt for graduates in the class of 2022 was around $24,102 per student, not including those with no debt. For context, compare this number with the average national debt, which is around $36,000 per borrower. \nThe 2023-2024 FAFSA Opened on October 1st, 2022\nSome financial aid is awarded on a first-come, first-served basis, so fill out the FAFSA as soon as you can. Visit the FAFSA website to apply for student aid. Remember, the first F in FAFSA stands for FREE! You should never have to pay to submit the Free Application for Federal Student Aid (FAFSA), so be very wary of anyone asking you for money.\nLearn more about Tuition and Financial Aid at Brown.\nBased on this information, does Brown seem like a good fit? Remember, a school that is perfect for one person may be a terrible fit for someone else! So ask yourself: Is Brown a good school for you?\nIf Brown University seems like a school you want to apply to, click the heart button to save it to your college list.\n\nStill Exploring Schools?\nChoose one of the options below to learn more about Brown:\nAdmissions\nStudent Life\nAcademics\nTuition & Aid\nBrown Community Forums\nThen use the college admissions predictor to take a data science look at your chances of getting into some of the best colleges and universities in the U.S.\nWhere is Brown?\nBrown is located in the urban setting of Providence, Rhode Island, less than an hour from Boston. \nIf you would like to see Brown for yourself, plan a visit. The best way to reach campus is to take Interstate 95 to Providence, or book a flight to the nearest airport, T.F. Green.\nYou can also take a virtual campus tour to get a sense of what Brown and Providence are like without leaving home.\nConsidering Going to School in Rhode Island?\nSee a full list of colleges in Rhode Island and save your favorites to your college list.\n\n\n\nCollege Info\n\n\n\n\n\n\n\n\n\n Providence, RI 02912\n \n\n\n\n Campus Setting: Urban\n \n\n\n\n\n\n\n\n (401) 863-2378\n \n\n Website\n \n\n Virtual Tour\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nBrown Application Deadline\n\n\n\nFirst-Year Applications are Due\n\nJan 5\n\nTransfer Applications are Due\n\nMar 1\n\n\n\n \n The deadline for Fall first-year applications to Brown is \n Jan 5. \n \n \n \n\n \n The deadline for Fall transfer applications to Brown is \n Mar 1. 
\n \n \n \n\n \n Check the school website \n for more information about deadlines for specific programs or special admissions programs\n \n \n\n\n\n\n\n\nBrown ACT Scores\n\n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nACT Range\n\n\n \n 33 - 35\n \n \n\n\n\nEstimated Chance of Acceptance by ACT Score\n\n\nACT Score\nEstimated Chance\n\n\n35 and Above\nGood\n\n\n33 to 35\nAvg\n\n\n33 and Less\nLow\n\n\n\n\n\n\nStand out on your college application\n\n• Qualify for scholarships\n• Most students who retest improve their score\n\nSponsored by ACT\n\n\n Take the Next ACT Test\n \n\n\n\n\n\nBrown SAT Scores\n\n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nComposite SAT Range\n\n\n \n 720 - 770\n \n \n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nMath SAT Range\n\n\n \n Not available\n \n \n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nReading SAT Range\n\n\n \n 740 - 800\n \n \n\n\n\n\n\n\n Brown Tuition & Fees\n \n\n\n\nTuition & Fees\n\n\n\n $82,286\n \nIn State\n\n\n\n\n $82,286\n \nOut-of-State\n\n\n\n\n\n\n\nCost Breakdown\n\n\nIn State\n\n\nOut-of-State\n\n\n\n\nState Tuition\n\n\n\n $62,680\n \n\n\n\n $62,680\n \n\n\n\n\nFees\n\n\n\n $2,466\n \n\n\n\n $2,466\n \n\n\n\n\nHousing\n\n\n\n $15,840\n \n\n\n\n $15,840\n \n\n\n\n\nBooks\n\n\n\n $1,300\n \n\n\n\n $1,300\n \n\n\n\n\n\n Total (Before Financial Aid):\n \n\n\n\n $82,286\n \n\n\n\n $82,286\n \n\n\n\n\n\n\n\n\n\n\n\nStudent Life\n\n Wondering what life at Brown is like? There are approximately \n 10,696 students enrolled at \n Brown, \n including 7,349 undergraduate students and \n 3,347 graduate students.\n 96% percent of students attend school \n full-time, \n 6% percent are from RI and \n 94% percent of students are from other states.\n \n\n\n\n\n\n None\n \n\n\n\n\nUndergraduate Enrollment\n\n\n\n 96%\n \nFull Time\n\n\n\n\n 4%\n \nPart Time\n\n\n\n\n\n\n\n 94%\n \n\n\n\n\nResidency\n\n\n\n 6%\n \nIn State\n\n\n\n\n 94%\n \nOut-of-State\n\n\n\n\n\n\n\n Data Source: IPEDs and Peterson\'s Databases © 2022 Peterson\'s LLC All rights reserved\n \n', lookup_str='', metadata={'source': 'https://www.collegeconfidential.com/colleges/brown-university/'}, lookup_index=0)]PreviousChatGPT DataNextConcurrent Loader
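A page like this loads as one long Document, so a common downstream step is splitting it into chunks before indexing. A minimal sketch using LangChain's RecursiveCharacterTextSplitter on the data loaded above; the chunk sizes are illustrative:

from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(data)  # each chunk keeps the original source metadata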
470
https://python.langchain.com/docs/integrations/document_loaders/concurrent
ComponentsDocument loadersConcurrent LoaderConcurrent LoaderWorks just like the GenericLoader, but loads files concurrently, for those who want to optimize their workflow.from langchain.document_loaders import ConcurrentLoaderloader = ConcurrentLoader.from_filesystem('example_data/', glob="**/*.txt")files = loader.load()len(files) 2PreviousCollege ConfidentialNextConfluence
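A sketch of the same call with the parallelism made explicit; num_workers is assumed here to control how many files are loaded concurrently:

from langchain.document_loaders import ConcurrentLoader

loader = ConcurrentLoader.from_filesystem(
    "example_data/",
    glob="**/*.txt",
    num_workers=4,  # assumed parameter controlling the degree of concurrency
)
files = loader.load()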
471
https://python.langchain.com/docs/integrations/document_loaders/confluence
ComponentsDocument loadersConfluenceOn this pageConfluenceConfluence is a wiki collaboration platform that saves and organizes all of the project-related material. Confluence is a knowledge base that primarily handles content management activities. A loader for Confluence pages.This currently supports username/API key and OAuth2 login. Additionally, on-prem installations also support token authentication. Specify a list of page_ids and/or a space_key to load the corresponding pages into Document objects; if both are specified, the union of both sets will be returned.You can also specify a boolean include_attachments to include attachments; this is set to False by default. If set to True, all attachments will be downloaded and ConfluenceLoader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG, SVG, Word and Excel.Hint: space_key and page_id can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>Before using ConfluenceLoader make sure you have the latest version of the atlassian-python-api package installed:#!pip install atlassian-python-apiExamples​Username and Password or Username and API Token (Atlassian Cloud only)​This example authenticates using either a username and password or, if you're connecting to an Atlassian Cloud hosted version of Confluence, a username and an API Token. You can generate an API token at: https://id.atlassian.com/manage-profile/security/api-tokens.The limit parameter specifies how many documents will be retrieved in a single call, not how many documents will be retrieved in total. By default, the code will return up to 1000 documents in batches of 50 documents. To control the total number of documents use the max_pages parameter. Please note the maximum value for the limit parameter in the atlassian-python-api package is currently 100. from langchain.document_loaders import ConfluenceLoaderloader = ConfluenceLoader( url="https://yoursite.atlassian.com/wiki", username="me", api_key="12345")documents = loader.load(space_key="SPACE", include_attachments=True, limit=50)Personal Access Token (Server/On-Prem only)​This method is valid for the Data Center/Server on-prem edition only. For more information on how to generate a Personal Access Token (PAT) check the official Confluence documentation at: https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html. When using a PAT you provide only the token value; you cannot provide a username. Please note that ConfluenceLoader will run under the permissions of the user that generated the PAT and will only be able to load documents to which said user has access. from langchain.document_loaders import ConfluenceLoaderloader = ConfluenceLoader(url="https://yoursite.atlassian.com/wiki", token="12345")documents = loader.load( space_key="SPACE", include_attachments=True, limit=50, max_pages=50)PreviousConcurrent LoaderNextCoNLL-UExamplesUsername and Password or Username and API Token (Atlassian Cloud only)Personal Access Token (Server/On-Prem only)
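Loading specific pages rather than a whole space works the same way; pass page_ids to load() (the IDs below are illustrative):

from langchain.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://yoursite.atlassian.com/wiki", username="me", api_key="12345"
)
# Illustrative page IDs; combine with space_key to load the union of both sets.
documents = loader.load(page_ids=["123456", "789012"])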
472
https://python.langchain.com/docs/integrations/document_loaders/conll-u
ComponentsDocument loadersCoNLL-UCoNLL-UCoNLL-U is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of the file) with three types of lines:Word lines containing the annotation of a word/token in 10 fields separated by single tab characters; see below.Blank lines marking sentence boundaries.Comment lines starting with hash (#).This is an example of how to load a file in CoNLL-U format. The whole file is treated as one document. The example data (conllu.conllu) is based on one of the standard UD/CoNLL-U examples.from langchain.document_loaders import CoNLLULoaderloader = CoNLLULoader("example_data/conllu.conllu")document = loader.load()document [Document(page_content='They buy and sell books.', metadata={'source': 'example_data/conllu.conllu'})]PreviousConfluenceNextCopy Paste
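To make the three line types concrete, here is a sketch that writes a tiny two-token CoNLL-U file (a comment line, two tab-separated 10-field word lines, and a blank sentence-boundary line; the annotation values are illustrative) and loads it:

from langchain.document_loaders import CoNLLULoader

sample = (
    "# sent_id = 1\n"                                       # comment line
    "1\tThey\tthey\tPRON\tPRP\tCase=Nom\t2\tnsubj\t_\t_\n"  # word line with 10 fields
    "2\tbuy\tbuy\tVERB\tVBP\tNumber=Plur\t0\troot\t_\t_\n"
    "\n"                                                    # blank line marks the sentence boundary
)
with open("sample.conllu", "w", encoding="utf8") as f:
    f.write(sample)
document = CoNLLULoader("sample.conllu").load()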
473
https://python.langchain.com/docs/integrations/document_loaders/copypaste
ComponentsDocument loadersCopy PasteOn this pageCopy PasteThis notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly.from langchain.docstore.document import Documenttext = "..... put the text you copy pasted here......"doc = Document(page_content=text)Metadata​If you want to add metadata about where you got this piece of text, you can easily do so with the metadata key.metadata = {"source": "internet", "date": "Friday"}doc = Document(page_content=text, metadata=metadata)PreviousCoNLL-UNextCSVMetadata
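The same pattern scales to several pasted snippets; just build one Document per snippet:

from langchain.docstore.document import Document

texts = ["first pasted snippet", "second pasted snippet"]  # illustrative content
docs = [Document(page_content=t, metadata={"source": "clipboard"}) for t in texts]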
474
https://python.langchain.com/docs/integrations/document_loaders/csv
ComponentsDocument loadersCSVOn this pageCSVA comma-separated values (CSV) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas.Load csv data with a single row per document.from langchain.document_loaders.csv_loader import CSVLoaderloader = CSVLoader(file_path="./example_data/mlb_teams_2012.csv")data = loader.load()print(data) [Document(page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0), Document(page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0), Document(page_content='Team: Yankees\n"Payroll (millions)": 197.96\n"Wins": 95', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}, lookup_index=0), Document(page_content='Team: Giants\n"Payroll (millions)": 117.62\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}, lookup_index=0), Document(page_content='Team: Braves\n"Payroll (millions)": 83.31\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 4}, lookup_index=0), Document(page_content='Team: Athletics\n"Payroll (millions)": 55.37\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 5}, lookup_index=0), Document(page_content='Team: Rangers\n"Payroll (millions)": 120.51\n"Wins": 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6}, lookup_index=0), Document(page_content='Team: Orioles\n"Payroll (millions)": 81.43\n"Wins": 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}, lookup_index=0), Document(page_content='Team: Rays\n"Payroll (millions)": 64.17\n"Wins": 90', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}, lookup_index=0), Document(page_content='Team: Angels\n"Payroll (millions)": 154.49\n"Wins": 89', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}, lookup_index=0), Document(page_content='Team: Tigers\n"Payroll (millions)": 132.30\n"Wins": 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}, lookup_index=0), Document(page_content='Team: Cardinals\n"Payroll (millions)": 110.30\n"Wins": 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}, lookup_index=0), Document(page_content='Team: Dodgers\n"Payroll (millions)": 95.14\n"Wins": 86', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}, lookup_index=0), Document(page_content='Team: White Sox\n"Payroll (millions)": 96.92\n"Wins": 85', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}, lookup_index=0), Document(page_content='Team: Brewers\n"Payroll (millions)": 97.65\n"Wins": 83', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}, lookup_index=0), Document(page_content='Team: Phillies\n"Payroll (millions)": 174.54\n"Wins": 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}, lookup_index=0), Document(page_content='Team: Diamondbacks\n"Payroll (millions)": 74.28\n"Wins": 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}, lookup_index=0), 
Document(page_content='Team: Pirates\n"Payroll (millions)": 63.43\n"Wins": 79', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}, lookup_index=0), Document(page_content='Team: Padres\n"Payroll (millions)": 55.24\n"Wins": 76', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}, lookup_index=0), Document(page_content='Team: Mariners\n"Payroll (millions)": 81.97\n"Wins": 75', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}, lookup_index=0), Document(page_content='Team: Mets\n"Payroll (millions)": 93.35\n"Wins": 74', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20}, lookup_index=0), Document(page_content='Team: Blue Jays\n"Payroll (millions)": 75.48\n"Wins": 73', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}, lookup_index=0), Document(page_content='Team: Royals\n"Payroll (millions)": 60.91\n"Wins": 72', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 22}, lookup_index=0), Document(page_content='Team: Marlins\n"Payroll (millions)": 118.07\n"Wins": 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 23}, lookup_index=0), Document(page_content='Team: Red Sox\n"Payroll (millions)": 173.18\n"Wins": 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 24}, lookup_index=0), Document(page_content='Team: Indians\n"Payroll (millions)": 78.43\n"Wins": 68', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 25}, lookup_index=0), Document(page_content='Team: Twins\n"Payroll (millions)": 94.08\n"Wins": 66', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 26}, lookup_index=0), Document(page_content='Team: Rockies\n"Payroll (millions)": 78.06\n"Wins": 64', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 27}, lookup_index=0), Document(page_content='Team: Cubs\n"Payroll (millions)": 88.19\n"Wins": 61', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 28}, lookup_index=0), Document(page_content='Team: Astros\n"Payroll (millions)": 60.65\n"Wins": 55', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 29}, lookup_index=0)]Customizing the csv parsing and loading​See the csv module documentation for more information on what csv args are supported.loader = CSVLoader( file_path="./example_data/mlb_teams_2012.csv", csv_args={ "delimiter": ",", "quotechar": '"', "fieldnames": ["MLB Team", "Payroll in millions", "Wins"], },)data = loader.load()print(data) [Document(page_content='MLB Team: Team\nPayroll in millions: "Payroll (millions)"\nWins: "Wins"', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0), Document(page_content='MLB Team: Nationals\nPayroll in millions: 81.34\nWins: 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0), Document(page_content='MLB Team: Reds\nPayroll in millions: 82.20\nWins: 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}, lookup_index=0), Document(page_content='MLB Team: Yankees\nPayroll in millions: 197.96\nWins: 95', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}, lookup_index=0), Document(page_content='MLB Team: Giants\nPayroll in millions: 117.62\nWins: 94', lookup_str='', metadata={'source': 
'./example_data/mlb_teams_2012.csv', 'row': 4}, lookup_index=0), Document(page_content='MLB Team: Braves\nPayroll in millions: 83.31\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 5}, lookup_index=0), Document(page_content='MLB Team: Athletics\nPayroll in millions: 55.37\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6}, lookup_index=0), Document(page_content='MLB Team: Rangers\nPayroll in millions: 120.51\nWins: 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}, lookup_index=0), Document(page_content='MLB Team: Orioles\nPayroll in millions: 81.43\nWins: 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}, lookup_index=0), Document(page_content='MLB Team: Rays\nPayroll in millions: 64.17\nWins: 90', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}, lookup_index=0), Document(page_content='MLB Team: Angels\nPayroll in millions: 154.49\nWins: 89', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}, lookup_index=0), Document(page_content='MLB Team: Tigers\nPayroll in millions: 132.30\nWins: 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}, lookup_index=0), Document(page_content='MLB Team: Cardinals\nPayroll in millions: 110.30\nWins: 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}, lookup_index=0), Document(page_content='MLB Team: Dodgers\nPayroll in millions: 95.14\nWins: 86', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}, lookup_index=0), Document(page_content='MLB Team: White Sox\nPayroll in millions: 96.92\nWins: 85', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}, lookup_index=0), Document(page_content='MLB Team: Brewers\nPayroll in millions: 97.65\nWins: 83', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}, lookup_index=0), Document(page_content='MLB Team: Phillies\nPayroll in millions: 174.54\nWins: 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}, lookup_index=0), Document(page_content='MLB Team: Diamondbacks\nPayroll in millions: 74.28\nWins: 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}, lookup_index=0), Document(page_content='MLB Team: Pirates\nPayroll in millions: 63.43\nWins: 79', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}, lookup_index=0), Document(page_content='MLB Team: Padres\nPayroll in millions: 55.24\nWins: 76', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}, lookup_index=0), Document(page_content='MLB Team: Mariners\nPayroll in millions: 81.97\nWins: 75', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20}, lookup_index=0), Document(page_content='MLB Team: Mets\nPayroll in millions: 93.35\nWins: 74', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}, lookup_index=0), Document(page_content='MLB Team: Blue Jays\nPayroll in millions: 75.48\nWins: 73', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 22}, lookup_index=0), Document(page_content='MLB Team: Royals\nPayroll in millions: 60.91\nWins: 72', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 23}, lookup_index=0), Document(page_content='MLB 
Team: Marlins\nPayroll in millions: 118.07\nWins: 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 24}, lookup_index=0), Document(page_content='MLB Team: Red Sox\nPayroll in millions: 173.18\nWins: 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 25}, lookup_index=0), Document(page_content='MLB Team: Indians\nPayroll in millions: 78.43\nWins: 68', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 26}, lookup_index=0), Document(page_content='MLB Team: Twins\nPayroll in millions: 94.08\nWins: 66', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 27}, lookup_index=0), Document(page_content='MLB Team: Rockies\nPayroll in millions: 78.06\nWins: 64', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 28}, lookup_index=0), Document(page_content='MLB Team: Cubs\nPayroll in millions: 88.19\nWins: 61', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 29}, lookup_index=0), Document(page_content='MLB Team: Astros\nPayroll in millions: 60.65\nWins: 55', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 30}, lookup_index=0)]Specify a column to identify the document source​Use the source_column argument to specify a source for the document created from each row. Otherwise file_path will be used as the source for all documents created from the CSV file.This is useful when using documents loaded from CSV files for chains that answer questions using sources.loader = CSVLoader(file_path="./example_data/mlb_teams_2012.csv", source_column="Team")data = loader.load()print(data) [Document(page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98', lookup_str='', metadata={'source': 'Nationals', 'row': 0}, lookup_index=0), Document(page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97', lookup_str='', metadata={'source': 'Reds', 'row': 1}, lookup_index=0), Document(page_content='Team: Yankees\n"Payroll (millions)": 197.96\n"Wins": 95', lookup_str='', metadata={'source': 'Yankees', 'row': 2}, lookup_index=0), Document(page_content='Team: Giants\n"Payroll (millions)": 117.62\n"Wins": 94', lookup_str='', metadata={'source': 'Giants', 'row': 3}, lookup_index=0), Document(page_content='Team: Braves\n"Payroll (millions)": 83.31\n"Wins": 94', lookup_str='', metadata={'source': 'Braves', 'row': 4}, lookup_index=0), Document(page_content='Team: Athletics\n"Payroll (millions)": 55.37\n"Wins": 94', lookup_str='', metadata={'source': 'Athletics', 'row': 5}, lookup_index=0), Document(page_content='Team: Rangers\n"Payroll (millions)": 120.51\n"Wins": 93', lookup_str='', metadata={'source': 'Rangers', 'row': 6}, lookup_index=0), Document(page_content='Team: Orioles\n"Payroll (millions)": 81.43\n"Wins": 93', lookup_str='', metadata={'source': 'Orioles', 'row': 7}, lookup_index=0), Document(page_content='Team: Rays\n"Payroll (millions)": 64.17\n"Wins": 90', lookup_str='', metadata={'source': 'Rays', 'row': 8}, lookup_index=0), Document(page_content='Team: Angels\n"Payroll (millions)": 154.49\n"Wins": 89', lookup_str='', metadata={'source': 'Angels', 'row': 9}, lookup_index=0), Document(page_content='Team: Tigers\n"Payroll (millions)": 132.30\n"Wins": 88', lookup_str='', metadata={'source': 'Tigers', 'row': 10}, lookup_index=0), Document(page_content='Team: Cardinals\n"Payroll (millions)": 110.30\n"Wins": 88', lookup_str='', metadata={'source': 'Cardinals', 'row': 11}, lookup_index=0), 
Document(page_content='Team: Dodgers\n"Payroll (millions)": 95.14\n"Wins": 86', lookup_str='', metadata={'source': 'Dodgers', 'row': 12}, lookup_index=0), Document(page_content='Team: White Sox\n"Payroll (millions)": 96.92\n"Wins": 85', lookup_str='', metadata={'source': 'White Sox', 'row': 13}, lookup_index=0), Document(page_content='Team: Brewers\n"Payroll (millions)": 97.65\n"Wins": 83', lookup_str='', metadata={'source': 'Brewers', 'row': 14}, lookup_index=0), Document(page_content='Team: Phillies\n"Payroll (millions)": 174.54\n"Wins": 81', lookup_str='', metadata={'source': 'Phillies', 'row': 15}, lookup_index=0), Document(page_content='Team: Diamondbacks\n"Payroll (millions)": 74.28\n"Wins": 81', lookup_str='', metadata={'source': 'Diamondbacks', 'row': 16}, lookup_index=0), Document(page_content='Team: Pirates\n"Payroll (millions)": 63.43\n"Wins": 79', lookup_str='', metadata={'source': 'Pirates', 'row': 17}, lookup_index=0), Document(page_content='Team: Padres\n"Payroll (millions)": 55.24\n"Wins": 76', lookup_str='', metadata={'source': 'Padres', 'row': 18}, lookup_index=0), Document(page_content='Team: Mariners\n"Payroll (millions)": 81.97\n"Wins": 75', lookup_str='', metadata={'source': 'Mariners', 'row': 19}, lookup_index=0), Document(page_content='Team: Mets\n"Payroll (millions)": 93.35\n"Wins": 74', lookup_str='', metadata={'source': 'Mets', 'row': 20}, lookup_index=0), Document(page_content='Team: Blue Jays\n"Payroll (millions)": 75.48\n"Wins": 73', lookup_str='', metadata={'source': 'Blue Jays', 'row': 21}, lookup_index=0), Document(page_content='Team: Royals\n"Payroll (millions)": 60.91\n"Wins": 72', lookup_str='', metadata={'source': 'Royals', 'row': 22}, lookup_index=0), Document(page_content='Team: Marlins\n"Payroll (millions)": 118.07\n"Wins": 69', lookup_str='', metadata={'source': 'Marlins', 'row': 23}, lookup_index=0), Document(page_content='Team: Red Sox\n"Payroll (millions)": 173.18\n"Wins": 69', lookup_str='', metadata={'source': 'Red Sox', 'row': 24}, lookup_index=0), Document(page_content='Team: Indians\n"Payroll (millions)": 78.43\n"Wins": 68', lookup_str='', metadata={'source': 'Indians', 'row': 25}, lookup_index=0), Document(page_content='Team: Twins\n"Payroll (millions)": 94.08\n"Wins": 66', lookup_str='', metadata={'source': 'Twins', 'row': 26}, lookup_index=0), Document(page_content='Team: Rockies\n"Payroll (millions)": 78.06\n"Wins": 64', lookup_str='', metadata={'source': 'Rockies', 'row': 27}, lookup_index=0), Document(page_content='Team: Cubs\n"Payroll (millions)": 88.19\n"Wins": 61', lookup_str='', metadata={'source': 'Cubs', 'row': 28}, lookup_index=0), Document(page_content='Team: Astros\n"Payroll (millions)": 60.65\n"Wins": 55', lookup_str='', metadata={'source': 'Astros', 'row': 29}, lookup_index=0)]UnstructuredCSVLoader​You can also load the table using the UnstructuredCSVLoader. 
One advantage of using UnstructuredCSVLoader is that if you use it in "elements" mode, an HTML representation of the table will be available in the metadata.from langchain.document_loaders.csv_loader import UnstructuredCSVLoaderloader = UnstructuredCSVLoader( file_path="example_data/mlb_teams_2012.csv", mode="elements")docs = loader.load()print(docs[0].metadata["text_as_html"]) <table border="1" class="dataframe"> <tbody> <tr> <td>Nationals</td> <td>81.34</td> <td>98</td> </tr> <tr> <td>Reds</td> <td>82.20</td> <td>97</td> </tr> <tr> <td>Yankees</td> <td>197.96</td> <td>95</td> </tr> <tr> <td>Giants</td> <td>117.62</td> <td>94</td> </tr> <tr> <td>Braves</td> <td>83.31</td> <td>94</td> </tr> <tr> <td>Athletics</td> <td>55.37</td> <td>94</td> </tr> <tr> <td>Rangers</td> <td>120.51</td> <td>93</td> </tr> <tr> <td>Orioles</td> <td>81.43</td> <td>93</td> </tr> <tr> <td>Rays</td> <td>64.17</td> <td>90</td> </tr> <tr> <td>Angels</td> <td>154.49</td> <td>89</td> </tr> <tr> <td>Tigers</td> <td>132.30</td> <td>88</td> </tr> <tr> <td>Cardinals</td> <td>110.30</td> <td>88</td> </tr> <tr> <td>Dodgers</td> <td>95.14</td> <td>86</td> </tr> <tr> <td>White Sox</td> <td>96.92</td> <td>85</td> </tr> <tr> <td>Brewers</td> <td>97.65</td> <td>83</td> </tr> <tr> <td>Phillies</td> <td>174.54</td> <td>81</td> </tr> <tr> <td>Diamondbacks</td> <td>74.28</td> <td>81</td> </tr> <tr> <td>Pirates</td> <td>63.43</td> <td>79</td> </tr> <tr> <td>Padres</td> <td>55.24</td> <td>76</td> </tr> <tr> <td>Mariners</td> <td>81.97</td> <td>75</td> </tr> <tr> <td>Mets</td> <td>93.35</td> <td>74</td> </tr> <tr> <td>Blue Jays</td> <td>75.48</td> <td>73</td> </tr> <tr> <td>Royals</td> <td>60.91</td> <td>72</td> </tr> <tr> <td>Marlins</td> <td>118.07</td> <td>69</td> </tr> <tr> <td>Red Sox</td> <td>173.18</td> <td>69</td> </tr> <tr> <td>Indians</td> <td>78.43</td> <td>68</td> </tr> <tr> <td>Twins</td> <td>94.08</td> <td>66</td> </tr> <tr> <td>Rockies</td> <td>78.06</td> <td>64</td> </tr> <tr> <td>Cubs</td> <td>88.19</td> <td>61</td> </tr> <tr> <td>Astros</td> <td>60.65</td> <td>55</td> </tr> </tbody> </table>PreviousCopy PasteNextCube Semantic LayerCustomizing the csv parsing and loadingSpecify a column to identify the document sourceUnstructuredCSVLoader
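For contrast with "elements" mode, UnstructuredCSVLoader can also return the entire table as a single document. A sketch, assuming "single" is the mode name (it is the usual default for unstructured loaders):

from langchain.document_loaders.csv_loader import UnstructuredCSVLoader

loader = UnstructuredCSVLoader(
    file_path="example_data/mlb_teams_2012.csv",
    mode="single",  # assumed mode name: one Document for the whole table
)
docs = loader.load()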
475
https://python.langchain.com/docs/integrations/document_loaders/cube_semantic
ComponentsDocument loadersCube Semantic LayerOn this pageCube Semantic LayerThis notebook demonstrates the process of retrieving Cube's data model metadata in a format suitable for passing to LLMs as embeddings, thereby enhancing contextual information.About Cube​Cube is the Semantic Layer for building data apps. It helps data engineers and application developers access data from modern data stores, organize it into consistent definitions, and deliver it to every application.Cube’s data model provides structure and definitions that are used as context for the LLM to understand data and generate correct queries. The LLM doesn’t need to navigate complex joins and metrics calculations because Cube abstracts those and provides a simple interface that operates on business-level terminology instead of SQL table and column names. This simplification helps the LLM be less error-prone and avoid hallucinations.Example​Input arguments (mandatory)Cube Semantic Loader requires 2 arguments:cube_api_url: The URL of your Cube's deployment REST API. Please refer to the Cube documentation for more information on configuring the base path.cube_api_token: The authentication token generated based on your Cube's API secret. Please refer to the Cube documentation for instructions on generating JSON Web Tokens (JWT).Input arguments (optional)load_dimension_values: Whether to load dimension values for every string dimension or not.dimension_values_limit: Maximum number of dimension values to load.dimension_values_max_retries: Maximum number of retries to load dimension values.dimension_values_retry_delay: Delay between retries to load dimension values.import jwtfrom langchain.document_loaders import CubeSemanticLoaderapi_url = "https://api-example.gcp-us-central1.cubecloudapp.dev/cubejs-api/v1/meta"cubejs_api_secret = "api-secret-here"security_context = {}# Read more about security context here: https://cube.dev/docs/securityapi_token = jwt.encode(security_context, cubejs_api_secret, algorithm="HS256")loader = CubeSemanticLoader(api_url, api_token)documents = loader.load()Returns a list of documents with the following attributes:page_contentmetadatatable_namecolumn_namecolumn_data_typecolumn_titlecolumn_descriptioncolumn_valuescube_data_obj_type# Given string containing page contentpage_content = 'Users View City, None'# Given dictionary containing metadatametadata = { 'table_name': 'users_view', 'column_name': 'users_view.city', 'column_data_type': 'string', 'column_title': 'Users View City', 'column_description': 'None', 'column_member_type': 'dimension', 'column_values': [ 'Austin', 'Chicago', 'Los Angeles', 'Mountain View', 'New York', 'Palo Alto', 'San Francisco', 'Seattle' ], 'cube_data_obj_type': 'view'}PreviousCSVNextDatadog LogsAbout CubeExample
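Since these documents are intended as embedding inputs, a natural next step is to index them in a vector store. A minimal sketch using FAISS and OpenAI embeddings; both are illustrative choices, not something CubeSemanticLoader requires:

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Embeds each document's page_content; the column metadata rides along.
vectorstore = FAISS.from_documents(documents, OpenAIEmbeddings())
results = vectorstore.similarity_search("city of our users")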
476
https://python.langchain.com/docs/integrations/document_loaders/datadog_logs
ComponentsDocument loadersDatadog LogsDatadog LogsDatadog is a monitoring and analytics platform for cloud-scale applications.This loader fetches the logs from your applications in Datadog using the datadog_api_client Python package. You must initialize the loader with your Datadog API key and APP key, and you need to pass in the query to extract the desired logs.from langchain.document_loaders import DatadogLogsLoader#!pip install datadog-api-clientquery = "service:agent status:error"loader = DatadogLogsLoader( query=query, api_key=DD_API_KEY, app_key=DD_APP_KEY, from_time=1688732708951, # Optional, timestamp in milliseconds to_time=1688736308951, # Optional, timestamp in milliseconds limit=100, # Optional, default is 100)documents = loader.load()documents [Document(page_content='message: grep: /etc/datadog-agent/system-probe.yaml: No such file or directory', metadata={'id': 'AgAAAYkwpLImvkjRpQAAAAAAAAAYAAAAAEFZa3dwTUFsQUFEWmZfLU5QdElnM3dBWQAAACQAAAAAMDE4OTMwYTQtYzk3OS00MmJjLTlhNDAtOTY4N2EwY2I5ZDdk', 'status': 'error', 'service': 'agent', 'tags': ['accessible-from-goog-gke-node', 'allow-external-ingress-high-ports', 'allow-external-ingress-http', 'allow-external-ingress-https', 'container_id:c7d8ecd27b5b3cfdf3b0df04b8965af6f233f56b7c3c2ffabfab5e3b6ccbd6a5', 'container_name:lab_datadog_1', 'datadog.pipelines:false', 'datadog.submission_auth:private_api_key', 'docker_image:datadog/agent:7.41.1', 'env:dd101-dev', 'hostname:lab-host', 'image_name:datadog/agent', 'image_tag:7.41.1', 'instance-id:7497601202021312403', 'instance-type:custom-1-4096', 'instruqt_aws_accounts:', 'instruqt_azure_subscriptions:', 'instruqt_gcp_projects:', 'internal-hostname:lab-host.d4rjybavkary.svc.cluster.local', 'numeric_project_id:3390740675', 'p-d4rjybavkary', 'project:instruqt-prod', 'service:agent', 'short_image:agent', 'source:agent', 'zone:europe-west1-b'], 'timestamp': datetime.datetime(2023, 7, 7, 13, 57, 27, 206000, tzinfo=tzutc())}), Document(page_content='message: grep: /etc/datadog-agent/system-probe.yaml: No such file or directory', metadata={'id': 'AgAAAYkwpLImvkjRpgAAAAAAAAAYAAAAAEFZa3dwTUFsQUFEWmZfLU5QdElnM3dBWgAAACQAAAAAMDE4OTMwYTQtYzk3OS00MmJjLTlhNDAtOTY4N2EwY2I5ZDdk', 'status': 'error', 'service': 'agent', 'tags': ['accessible-from-goog-gke-node', 'allow-external-ingress-high-ports', 'allow-external-ingress-http', 'allow-external-ingress-https', 'container_id:c7d8ecd27b5b3cfdf3b0df04b8965af6f233f56b7c3c2ffabfab5e3b6ccbd6a5', 'container_name:lab_datadog_1', 'datadog.pipelines:false', 'datadog.submission_auth:private_api_key', 'docker_image:datadog/agent:7.41.1', 'env:dd101-dev', 'hostname:lab-host', 'image_name:datadog/agent', 'image_tag:7.41.1', 'instance-id:7497601202021312403', 'instance-type:custom-1-4096', 'instruqt_aws_accounts:', 'instruqt_azure_subscriptions:', 'instruqt_gcp_projects:', 'internal-hostname:lab-host.d4rjybavkary.svc.cluster.local', 'numeric_project_id:3390740675', 'p-d4rjybavkary', 'project:instruqt-prod', 'service:agent', 'short_image:agent', 'source:agent', 'zone:europe-west1-b'], 'timestamp': datetime.datetime(2023, 7, 7, 13, 57, 27, 206000, tzinfo=tzutc())})]PreviousCube Semantic LayerNextDiffbot
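The from_time and to_time values are Unix timestamps in milliseconds. A small helper for producing them from datetime objects; this is plain Python, nothing Datadog-specific:

from datetime import datetime, timezone

def to_millis(dt: datetime) -> int:
    # Datadog's query window expects epoch milliseconds.
    return int(dt.timestamp() * 1000)

from_time = to_millis(datetime(2023, 7, 7, 12, 0, tzinfo=timezone.utc))
to_time = to_millis(datetime(2023, 7, 7, 13, 0, tzinfo=timezone.utc))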
477
https://python.langchain.com/docs/integrations/document_loaders/diffbot
ComponentsDocument loadersDiffbotDiffbotUnlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page. It starts with computer vision, which classifies a page into one of 20 possible types. Content is then interpreted by a machine learning model trained to identify the key attributes on a page based on its type. The result is a website transformed into clean structured data (like JSON or CSV), ready for your application.This covers how to extract HTML documents from a list of URLs using the Diffbot extract API into a document format that we can use downstream.urls = [ "https://python.langchain.com/en/latest/index.html",]The Diffbot Extract API requires an API token. Once you have it, you can extract the data.Read the instructions on how to get the Diffbot API Token.import osfrom langchain.document_loaders import DiffbotLoaderloader = DiffbotLoader(urls=urls, api_token=os.environ.get("DIFFBOT_API_TOKEN"))With the .load() method, you can see the loaded documentsloader.load() [Document(page_content='LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:\nBe data-aware: connect a language model to other sources of data\nBe agentic: allow a language model to interact with its environment\nThe LangChain framework is designed with the above principles in mind.\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.\nGetting Started\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\nGetting Started Documentation\nModules\nThere are several main modules that LangChain provides support for. For each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides. These modules are, in increasing order of complexity:\nModels: The various model types and model integrations LangChain supports.\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\nUse Cases\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\nPersonal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\nQuestion Answering: The second big LangChain use case. 
Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\nInteracting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.\nExtraction: Extract structured information from text.\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\nReference Docs\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\nReference Documentation\nLangChain Ecosystem\nGuides for how other companies/products can be used with LangChain\nLangChain Ecosystem\nAdditional Resources\nAdditional collection of resources we think may be useful as you develop your application!\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\nDiscord: Join us on our Discord to discuss all things LangChain!\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.', metadata={'source': 'https://python.langchain.com/en/latest/index.html'})]PreviousDatadog LogsNextDiscord
478
https://python.langchain.com/docs/integrations/document_loaders/discord
ComponentsDocument loadersDiscordDiscordDiscord is a VoIP and instant messaging social platform. Users have the ability to communicate with voice calls, video calls, text messaging, media and files in private chats or as part of communities called "servers". A server is a collection of persistent chat rooms and voice channels which can be accessed via invite links.Follow these steps to download your Discord data:Go to your User SettingsThen go to Privacy and SafetyHead over to Request all of my Data and click on the Request Data buttonIt might take 30 days for you to receive your data. You'll receive an email at the address which is registered with Discord. That email will have a download button that you can use to download your personal Discord data.import pandas as pdimport ospath = input('Please enter the path to the contents of the Discord "messages" folder: ')li = []for f in os.listdir(path): expected_csv_path = os.path.join(path, f, "messages.csv") csv_exists = os.path.isfile(expected_csv_path) if csv_exists: df = pd.read_csv(expected_csv_path, index_col=None, header=0) li.append(df)df = pd.concat(li, axis=0, ignore_index=True, sort=False)from langchain.document_loaders.discord import DiscordChatLoaderloader = DiscordChatLoader(df, user_id_col="ID")print(loader.load())PreviousDiffbotNextDocugami
479
https://python.langchain.com/docs/integrations/document_loaders/docugami
ComponentsDocument loadersDocugamiOn this pageDocugamiThis notebook covers how to load documents from Docugami. It also describes the advantages of using this system over alternative data loaders.Prerequisites​Install the necessary Python packages.Grab an access token for your workspace, and make sure it is set as the DOCUGAMI_API_KEY environment variable.Grab some docset and document IDs for your processed documents, as described here: https://help.docugami.com/home/docugami-api# You need the lxml package to use the DocugamiLoader (run pip install directly without "poetry run" if you are not using poetry)poetry run pip install lxml --quietQuick start​Create a Docugami workspace (free trials available)Add your documents (PDF, DOCX or DOC) and allow Docugami to ingest and cluster them into sets of similar documents, e.g. NDAs, Lease Agreements, and Service Agreements. There is no fixed set of document types supported by the system; the clusters created depend on your particular documents, and you can change the docset assignments later.Create an access token via the Developer Playground for your workspace. Detailed instructionsExplore the Docugami API to get a list of your processed docset IDs, or just the document IDs for a particular docset. Use the DocugamiLoader as detailed below to get rich semantic chunks for your documents.Optionally, build and publish one or more reports or abstracts. This helps Docugami improve the semantic XML with better tags based on your preferences, which are then added to the DocugamiLoader output as metadata. Use techniques like the self-querying retriever to do high-accuracy Document QA.Advantages vs Other Chunking Techniques​Appropriate chunking of your documents is critical for effective retrieval. Many chunking techniques exist, including simple ones that rely on whitespace and recursive chunk splitting based on character length. Docugami offers a different approach:Intelligent Chunking: Docugami breaks down every document into a hierarchical semantic XML tree of chunks of varying sizes, from single words or numerical values to entire sections. These chunks follow the semantic contours of the document, providing a more meaningful representation than arbitrary length or simple whitespace-based chunking.Structured Representation: In addition, the XML tree indicates the structural contours of every document, using attributes denoting headings, paragraphs, lists, tables, and other common elements, and does that consistently across all supported document formats, such as scanned PDFs or DOCX files. It appropriately handles long-form document characteristics like page headers/footers or multi-column flows for clean text extraction.Semantic Annotations: Chunks are annotated with semantic tags that are coherent across the document set, facilitating consistent hierarchical queries across multiple documents, even if they are written and formatted differently. For example, in a set of lease agreements, you can easily identify key provisions like the Landlord, Tenant, or Renewal Date, as well as more complex information such as the wording of any sub-lease provision or whether a specific jurisdiction has an exception section within a Termination Clause.Additional Metadata: Chunks are also annotated with additional metadata, if a user has been using Docugami. This additional metadata can be used for high-accuracy Document QA without context window restrictions. 
See detailed code walk-through below.import osfrom langchain.document_loaders import DocugamiLoaderLoad Documents​If the DOCUGAMI_API_KEY environment variable is set, there is no need to pass it in to the loader explicitly otherwise you can pass it in as the access_token parameter.DOCUGAMI_API_KEY = os.environ.get("DOCUGAMI_API_KEY")# To load all docs in the given docset ID, just don't provide document_idsloader = DocugamiLoader(docset_id="ecxqpipcoe2p", document_ids=["43rj0ds7s0ur"])docs = loader.load()docs [Document(page_content='MUTUAL NON-DISCLOSURE AGREEMENT This Mutual Non-Disclosure Agreement (this “ Agreement ”) is entered into and made effective as of April 4 , 2018 between Docugami Inc. , a Delaware corporation , whose address is 150 Lake Street South , Suite 221 , Kirkland , Washington 98033 , and Caleb Divine , an individual, whose address is 1201 Rt 300 , Newburgh NY 12550 .', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:ThisMutualNon-disclosureAgreement', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'ThisMutualNon-disclosureAgreement'}), Document(page_content='The above named parties desire to engage in discussions regarding a potential agreement or other transaction between the parties (the “Purpose”). In connection with such discussions, it may be necessary for the parties to disclose to each other certain confidential information or materials to enable them to evaluate whether to enter into such agreement or transaction.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Discussions', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'Discussions'}), Document(page_content='In consideration of the foregoing, the parties agree as follows:', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Consideration', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'Consideration'}), Document(page_content='1. Confidential Information . For purposes of this Agreement , “ Confidential Information ” means any information or materials disclosed by one party to the other party that: (i) if disclosed in writing or in the form of tangible materials, is marked “confidential” or “proprietary” at the time of such disclosure; (ii) if disclosed orally or by visual presentation, is identified as “confidential” or “proprietary” at the time of such disclosure, and is summarized in a writing sent by the disclosing party to the receiving party within thirty ( 30 ) days after any such disclosure; or (iii) due to its nature or the circumstances of its disclosure, a person exercising reasonable business judgment would understand to be confidential or proprietary.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Purposes/docset:ConfidentialInformation-section/docset:ConfidentialInformation[2]', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'ConfidentialInformation'}), Document(page_content="2. Obligations and Restrictions . 
Each party agrees: (i) to maintain the other party's Confidential Information in strict confidence; (ii) not to disclose such Confidential Information to any third party; and (iii) not to use such Confidential Information for any purpose except for the Purpose. Each party may disclose the other party’s Confidential Information to its employees and consultants who have a bona fide need to know such Confidential Information for the Purpose, but solely to the extent necessary to pursue the Purpose and for no other purpose; provided, that each such employee and consultant first executes a written agreement (or is otherwise already bound by a written agreement) that contains use and nondisclosure restrictions at least as protective of the other party’s Confidential Information as those set forth in this Agreement .", metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Obligations/docset:ObligationsAndRestrictions-section/docset:ObligationsAndRestrictions', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'ObligationsAndRestrictions'}), Document(page_content='3. Exceptions. The obligations and restrictions in Section 2 will not apply to any information or materials that:', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Exceptions/docset:Exceptions-section/docset:Exceptions[2]', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Exceptions'}), Document(page_content='(i) were, at the date of disclosure, or have subsequently become, generally known or available to the public through no act or failure to act by the receiving party;', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheDate/docset:TheDate/docset:TheDate', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheDate'}), Document(page_content='(ii) were rightfully known by the receiving party prior to receiving such information or materials from the disclosing party;', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheDate/docset:SuchInformation/docset:TheReceivingParty', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheReceivingParty'}), Document(page_content='(iii) are rightfully acquired by the receiving party from a third party who has the right to disclose such information or materials without breach of any confidentiality obligation to the disclosing party;', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheDate/docset:TheReceivingParty/docset:TheReceivingParty', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheReceivingParty'}), Document(page_content='4. Compelled Disclosure . 
Nothing in this Agreement will be deemed to restrict a party from disclosing the other party’s Confidential Information to the extent required by any order, subpoena, law, statute or regulation; provided, that the party required to make such a disclosure uses reasonable efforts to give the other party reasonable advance notice of such required disclosure in order to enable the other party to prevent or limit such disclosure.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Disclosure/docset:CompelledDisclosure-section/docset:CompelledDisclosure', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'CompelledDisclosure'}), Document(page_content='5. Return of Confidential Information . Upon the completion or abandonment of the Purpose, and in any event upon the disclosing party’s request, the receiving party will promptly return to the disclosing party all tangible items and embodiments containing or consisting of the disclosing party’s Confidential Information and all copies thereof (including electronic copies), and any notes, analyses, compilations, studies, interpretations, memoranda or other documents (regardless of the form thereof) prepared by or on behalf of the receiving party that contain or are based upon the disclosing party’s Confidential Information .', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheCompletion/docset:ReturnofConfidentialInformation-section/docset:ReturnofConfidentialInformation', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'ReturnofConfidentialInformation'}), Document(page_content='6. No Obligations . Each party retains the right to determine whether to disclose any Confidential Information to the other party.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:NoObligations/docset:NoObligations-section/docset:NoObligations[2]', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'NoObligations'}), Document(page_content='7. No Warranty. ALL CONFIDENTIAL INFORMATION IS PROVIDED BY THE DISCLOSING PARTY “AS IS ”.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:NoWarranty/docset:NoWarranty-section/docset:NoWarranty[2]', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'NoWarranty'}), Document(page_content='8. Term. This Agreement will remain in effect for a period of seven ( 7 ) years from the date of last disclosure of Confidential Information by either party, at which time it will terminate.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:ThisAgreement/docset:Term-section/docset:Term', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Term'}), Document(page_content='9. Equitable Relief . 
Each party acknowledges that the unauthorized use or disclosure of the disclosing party’s Confidential Information may cause the disclosing party to incur irreparable harm and significant damages, the degree of which may be difficult to ascertain. Accordingly, each party agrees that the disclosing party will have the right to seek immediate equitable relief to enjoin any unauthorized use or disclosure of its Confidential Information , in addition to any other rights and remedies that it may have at law or otherwise.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:EquitableRelief/docset:EquitableRelief-section/docset:EquitableRelief[2]', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'EquitableRelief'}), Document(page_content='10. Non-compete. To the maximum extent permitted by applicable law, during the Term of this Agreement and for a period of one ( 1 ) year thereafter, Caleb Divine may not market software products or do business that directly or indirectly competes with Docugami software products .', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheMaximumExtent/docset:Non-compete-section/docset:Non-compete', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Non-compete'}), Document(page_content='11. Miscellaneous. This Agreement will be governed and construed in accordance with the laws of the State of Washington , excluding its body of law controlling conflict of laws. This Agreement is the complete and exclusive understanding and agreement between the parties regarding the subject matter of this Agreement and supersedes all prior agreements, understandings and communications, oral or written, between the parties regarding the subject matter of this Agreement . If any provision of this Agreement is held invalid or unenforceable by a court of competent jurisdiction, that provision of this Agreement will be enforced to the maximum extent permissible and the other provisions of this Agreement will remain in full force and effect. Neither party may assign this Agreement , in whole or in part, by operation of law or otherwise, without the other party’s prior written consent, and any attempted assignment without such consent will be void. 
This Agreement may be executed in counterparts, each of which will be deemed an original, but all of which together will constitute one and the same instrument.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Accordance/docset:Miscellaneous-section/docset:Miscellaneous', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Miscellaneous'}), Document(page_content='[SIGNATURE PAGE FOLLOWS] IN WITNESS WHEREOF, the parties hereto have executed this Mutual Non-Disclosure Agreement by their duly authorized officers or representatives as of the date first set forth above.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:Witness/docset:TheParties/docset:TheParties', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheParties'}), Document(page_content='DOCUGAMI INC . : \n\n Caleb Divine : \n\n Signature: Signature: Name: \n\n Jean Paoli Name: Title: \n\n CEO Title:', metadata={'xpath': '/docset:MutualNon-disclosure/docset:Witness/docset:TheParties/docset:DocugamiInc/docset:DocugamiInc/xhtml:table', 'id': '43rj0ds7s0ur', 'source': 'NDA simple layout.docx', 'structure': '', 'tag': 'table'})]The metadata for each Document (really, a chunk of an actual PDF, DOC or DOCX) contains some useful additional information:id and source: ID and Name of the file (PDF, DOC or DOCX) the chunk is sourced from within Docugami.xpath: XPath inside the XML representation of the document, for the chunk. Useful for source citations directly to the actual chunk inside the document XML.structure: Structural attributes of the chunk, e.g. h1, h2, div, table, td, etc. Useful to filter out certain kinds of chunks if needed by the caller.tag: Semantic tag for the chunk, using various generative and extractive techniques. More details here: https://github.com/docugami/DFM-benchmarksBasic Use: Docugami Loader for Document QA​You can use the Docugami Loader like a standard loader for Document QA over multiple docs, albeit with much better chunks that follow the natural contours of the document. There are many great tutorials on how to do this, e.g. this one. We can just use the same code, but use the DocugamiLoader for better chunking, instead of loading text or PDF files directly with basic splitting techniques.poetry run pip -q install openai tiktoken chromadbfrom langchain.schema import Documentfrom langchain.vectorstores import Chromafrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.llms import OpenAIfrom langchain.chains import RetrievalQA# For this example, we already have a processed docset for a set of lease documentsloader = DocugamiLoader(docset_id="wh2kned25uqm")documents = loader.load()The documents returned by the loader are already split, so we don't need to use a text splitter. 
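For illustration, here is a minimal sketch of the kind of metadata-based filtering this enables, assuming documents is the list returned by the loader above (the structure values such as "table" and "h1" are the attributes described earlier; the variable names are hypothetical):
# Keep everything except table chunks, and separately collect heading chunks,
# using the per-chunk metadata attached by DocugamiLoader.
non_table_docs = [d for d in documents if d.metadata.get("structure") != "table"]
heading_docs = [d for d in documents if d.metadata.get("structure") in ("h1", "h2")]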
Optionally, we can use the metadata on each document, for example the structure or tag attributes, to do any post-processing we want.We will just use the output of the DocugamiLoader as-is to set up a retrieval QA chain the usual way.embedding = OpenAIEmbeddings()vectordb = Chroma.from_documents(documents=documents, embedding=embedding)retriever = vectordb.as_retriever()qa_chain = RetrievalQA.from_chain_type( llm=OpenAI(), chain_type="stuff", retriever=retriever, return_source_documents=True)# Try out the retriever with an example queryqa_chain("What can tenants do with signage on their properties?") {'query': 'What can tenants do with signage on their properties?', 'result': " Tenants can place or attach signs (digital or otherwise) to their premises with written permission from the landlord. The signs must conform to all applicable laws, ordinances, etc. governing the same. Tenants can also have their name listed in the building's directory at the landlord's cost.", 'source_documents': [Document(page_content='ARTICLE VI SIGNAGE 6.01 Signage . Tenant may place or attach to the Premises signs (digital or otherwise) or other such identification as needed after receiving written permission from the Landlord , which permission shall not be unreasonably withheld. Any damage caused to the Premises by the Tenant ’s erecting or removing such signs shall be repaired promptly by the Tenant at the Tenant ’s expense . Any signs or other form of identification allowed must conform to all applicable laws, ordinances, etc. governing the same. Tenant also agrees to have any window or glass identification completely removed and cleaned at its expense promptly upon vacating the Premises.', metadata={'Landlord': 'BUBBA CENTER PARTNERSHIP', 'Lease Date': 'April 24 \n\n ,', 'Lease Parties': 'This OFFICE LEASE AGREEMENT (this "Lease") is made and entered into by and between BUBBA CENTER PARTNERSHIP (" Landlord "), and Truetone Lane LLC , a Delaware limited liability company (" Tenant ").', 'Tenant': 'Truetone Lane LLC', 'id': 'v1bvgaozfkak', 'source': 'TruTone Lane 2.docx', 'structure': 'div', 'tag': '_601Signage', 'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/docset:Article/docset:ARTICLEVISIGNAGE-section/docset:_601Signage-section/docset:_601Signage'}), Document(page_content='Signage. Tenant may place or attach to the Premises signs (digital or otherwise) or other such identification as needed after receiving written permission from the Landlord , which permission shall not be unreasonably withheld. Any damage caused to the Premises by the Tenant ’s erecting or removing such signs shall be repaired promptly by the Tenant at the Tenant ’s expense . Any signs or other form of identification allowed must conform to all applicable laws, ordinances, etc. governing the same. Tenant also agrees to have any window or glass identification completely removed and cleaned at its expense promptly upon vacating the Premises. 
\n\n ARTICLE VII UTILITIES 7.01', metadata={'Landlord': 'GLORY ROAD LLC', 'Lease Date': 'April 30 , 2020', 'Lease Parties': 'This OFFICE LEASE AGREEMENT (this "Lease") is made and entered into by and between GLORY ROAD LLC (" Landlord "), and Truetone Lane LLC , a Delaware limited liability company (" Tenant ").', 'Tenant': 'Truetone Lane LLC', 'id': 'g2fvhekmltza', 'source': 'TruTone Lane 6.pdf', 'structure': 'lim', 'tag': 'chunk', 'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/docset:Article/docset:ArticleIiiUse/docset:ARTICLEIIIUSEANDCAREOFPREMISES-section/docset:ARTICLEIIIUSEANDCAREOFPREMISES/docset:AnyTime/docset:Addition/dg:chunk'}), Document(page_content='Landlord , its agents, servants, employees, licensees, invitees, and contractors during the last year of the term of this Lease at any and all times during regular business hours, after 24 hour notice to tenant, to pass and repass on and through the Premises, or such portion thereof as may be necessary, in order that they or any of them may gain access to the Premises for the purpose of showing the Premises to potential new tenants or real estate brokers. In addition, Landlord shall be entitled to place a "FOR RENT " or "FOR LEASE" sign (not exceeding 8.5 ” x 11 ”) in the front window of the Premises during the last six months of the term of this Lease .', metadata={'Landlord': 'BIRCH STREET , LLC', 'Lease Date': 'October 15 , 2021', 'Lease Parties': 'The provisions of this rider are hereby incorporated into and made a part of the Lease dated as of October 15 , 2021 between BIRCH STREET , LLC , having an address at c/o Birch Palace , 6 Grace Avenue Suite 200 , Great Neck , New York 11021 (" Landlord "), and Trutone Lane LLC , having an address at 4 Pearl Street , New York , New York 10012 (" Tenant ") of Premises known as the ground floor space and lower level space, as per floor plan annexed hereto and made a part hereof as Exhibit A (“Premises”) at 4 Pearl Street , New York , New York 10012 in the City of New York , Borough of Manhattan , to which this rider is annexed. If there is any conflict between the provisions of this rider and the remainder of this Lease , the provisions of this rider shall govern.', 'Tenant': 'Trutone Lane LLC', 'id': 'omvs4mysdk6b', 'source': 'TruTone Lane 1.docx', 'structure': 'p', 'tag': 'Landlord', 'xpath': '/docset:Rider/docset:RIDERTOLEASE-section/docset:RIDERTOLEASE/docset:FixedRent/docset:TermYearPeriod/docset:Lease/docset:_42FLandlordSAccess-section/docset:_42FLandlordSAccess/docset:LandlordsRights/docset:Landlord'}), Document(page_content="24. SIGNS . No signage shall be placed by Tenant on any portion of the Project . However, Tenant shall be permitted to place a sign bearing its name in a location approved by Landlord near the entrance to the Premises (at Tenant's cost ) and will be furnished a single listing of its name in the Building's directory (at Landlord 's cost ), all in accordance with the criteria adopted from time to time by Landlord for the Project . Any changes or additional listings in the directory shall be furnished (subject to availability of space) for the then Building Standard charge .", metadata={'Landlord': 'Perry & Blair LLC', 'Lease Date': 'March 29th , 2019', 'Lease Parties': 'THIS OFFICE LEASE (the "Lease") is made and entered into as of March 29th , 2019 , by and between Landlord and Tenant . 
"Date of this Lease" shall mean the date on which the last one of the Landlord and Tenant has signed this Lease .', 'Tenant': 'Shorebucks LLC', 'id': 'dsyfhh4vpeyf', 'source': 'Shorebucks LLC_CO.pdf', 'structure': 'div', 'tag': 'SIGNS', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:ThisLease-section/docset:ThisLease/docset:Guaranty-section/docset:Guaranty[2]/docset:TheTransfer/docset:TheTerms/docset:Indemnification/docset:INDEMNIFICATION-section/docset:INDEMNIFICATION/docset:Waiver/docset:Waiver/docset:Signs/docset:SIGNS-section/docset:SIGNS'})]}Using Docugami to Add Metadata to Chunks for High Accuracy Document QA​One issue with large documents is that the correct answer to your question may depend on chunks that are far apart in the document. Typical chunking techniques, even with overlap, will struggle with providing the LLM sufficent context to answer such questions. With upcoming very large context LLMs, it may be possible to stuff a lot of tokens, perhaps even entire documents, inside the context but this will still hit limits at some point with very long documents, or a lot of documents.For example, if we ask a more complex question that requires the LLM to draw on chunks from different parts of the document, even OpenAI's powerful LLM is unable to answer correctly.chain_response = qa_chain("What is rentable area for the property owned by DHA Group?")chain_response["result"] # correct answer should be 13,500 sq ft ' 9,753 square feet.'At first glance the answer may seem reasonable, but if you review the source chunks carefully for this answer, you will see that the chunking of the document did not end up putting the Landlord name and the rentable area in the same context, since they are far apart in the document. The retriever therefore ends up finding unrelated chunks from other documents not even related to the DHA Group landlord. That landlord happens to be mentioned on the first page of the file Shorebucks LLC_NJ.pdf file, and while one of the source chunks used by the chain is indeed from that doc that contains the correct answer (13,500), other source chunks from different docs are included, and the answer is therefore incorrect.chain_response["source_documents"] [Document(page_content='1.1 Landlord . DHA Group , a Delaware limited liability company authorized to transact business in New Jersey .', metadata={'Landlord': 'DHA Group', 'Lease Date': 'March 29th , 2019', 'Lease Parties': 'THIS OFFICE LEASE (the "Lease") is made and entered into as of March 29th , 2019 , by and between Landlord and Tenant . 
"Date of this Lease" shall mean the date on which the last one of the Landlord and Tenant has signed this Lease .', 'Tenant': 'Shorebucks LLC', 'id': 'md8rieecquyv', 'source': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'DhaGroup', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/docset:DhaGroup/docset:DhaGroup/docset:Landlord-section/docset:DhaGroup'}), Document(page_content='WITNESSES: LANDLORD: DHA Group , a Delaware limited liability company', metadata={'Landlord': 'DHA Group', 'Lease Date': 'March 29th , 2019', 'Lease Parties': 'THIS OFFICE LEASE (the "Lease") is made and entered into as of March 29th , 2019 , by and between Landlord and Tenant . "Date of this Lease" shall mean the date on which the last one of the Landlord and Tenant has signed this Lease .', 'Tenant': 'Shorebucks LLC', 'id': 'md8rieecquyv', 'source': 'Shorebucks LLC_NJ.pdf', 'structure': 'p', 'tag': 'DhaGroup', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Guaranty-section/docset:Guaranty[2]/docset:SIGNATURESONNEXTPAGE-section/docset:INWITNESSWHEREOF-section/docset:INWITNESSWHEREOF/docset:Behalf/docset:Witnesses/xhtml:table/xhtml:tbody/xhtml:tr[3]/xhtml:td[2]/docset:DhaGroup'}), Document(page_content="1.16 Landlord 's Notice Address . DHA Group , Suite 1010 , 111 Bauer Dr , Oakland , New Jersey , 07436 , with a copy to the Building Management Office at the Project , Attention: On - Site Property Manager .", metadata={'Landlord': 'DHA Group', 'Lease Date': 'March 29th , 2019', 'Lease Parties': 'THIS OFFICE LEASE (the "Lease") is made and entered into as of March 29th , 2019 , by and between Landlord and Tenant . "Date of this Lease" shall mean the date on which the last one of the Landlord and Tenant has signed this Lease .', 'Tenant': 'Shorebucks LLC', 'id': 'md8rieecquyv', 'source': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'LandlordsNoticeAddress', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Period/docset:ApplicableSalesTax/docset:PercentageRent/docset:PercentageRent/docset:NoticeAddress[2]/docset:LandlordsNoticeAddress-section/docset:LandlordsNoticeAddress[2]'}), Document(page_content='1.6 Rentable Area of the Premises. 9,753 square feet . This square footage figure includes an add-on factor for Common Areas in the Building and has been agreed upon by the parties as final and correct and is not subject to challenge or dispute by either party.', metadata={'Landlord': 'Perry & Blair LLC', 'Lease Date': 'March 29th , 2019', 'Lease Parties': 'THIS OFFICE LEASE (the "Lease") is made and entered into as of March 29th , 2019 , by and between Landlord and Tenant . 
"Date of this Lease" shall mean the date on which the last one of the Landlord and Tenant has signed this Lease .', 'Tenant': 'Shorebucks LLC', 'id': 'dsyfhh4vpeyf', 'source': 'Shorebucks LLC_CO.pdf', 'structure': 'div', 'tag': 'RentableAreaofthePremises', 'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEIN
480
https://python.langchain.com/docs/integrations/document_loaders/dropbox
ComponentsDocument loadersDropboxOn this pageDropboxDropbox is a file hosting service that brings everything together in one place: traditional files, cloud content, and web shortcuts.This notebook covers how to load documents from Dropbox. In addition to common files such as text and PDF files, it also supports Dropbox Paper files.Prerequisites​Create a Dropbox app.Give the app these scope permissions: files.metadata.read and files.content.read.Generate an access token: https://www.dropbox.com/developers/apps/create.pip install dropbox (requires pip install unstructured for the PDF filetype).Instructions​DropboxLoader requires you to create a Dropbox App and generate an access token. This can be done from https://www.dropbox.com/developers/apps/create. You also need to have the Dropbox Python SDK installed (pip install dropbox).DropboxLoader can load data from a list of Dropbox file paths or a single Dropbox folder path. Both paths should be relative to the root directory of the Dropbox account linked to the access token.pip install dropbox Note: you may need to restart the kernel to use updated packages.from langchain.document_loaders import DropboxLoader# Generate access token: https://www.dropbox.com/developers/apps/create.dropbox_access_token = "<DROPBOX_ACCESS_TOKEN>"# Dropbox root folderdropbox_folder_path = ""loader = DropboxLoader( dropbox_access_token=dropbox_access_token, dropbox_folder_path=dropbox_folder_path, recursive=False)documents = loader.load() File /JHSfLKn0.jpeg could not be decoded as text. Skipping. File /A REPORT ON WILES’ CAMBRIDGE LECTURES.pdf could not be decoded as text. Skipping.for document in documents: print(document) page_content='# 🎉 Getting Started with Dropbox Paper\nDropbox Paper is great for capturing ideas and gathering quick feedback from your team. You can use words, images, code, or media from other apps, or go ahead and connect your calendar and add to-dos for projects.\n\n*Explore and edit this doc to play with some of these features. 
This doc is all yours. No one will see your edits unless you share this doc.*\n\n\n# The basics\n\n**Selecting text** activates the formatting toolbar, where you can apply basic formatting, create lists, and add comments.\n\n[ ] Create to-do lists\n- Bulleted lists\n1. Numbered lists\n\n**Starting a new line** activates the insert toolbar, where you can add media from other apps, links to Dropbox files, photos, and more.\n\n![](https://paper-attachments.dropbox.com/s_72143DBFDAF4C9DE702BB246920BC47FE7E1FA76AC23CC699374430D94E96DD2_1523574441249_paper-insert.png)\n\n\n\n**Add emojis** to your doc or comment by typing `**:**` ****and choosing a character. \n\n# 👍 👎 👏 ✅ ❌ ❤️ ⭐ 💡 📌\n\n\n# Images\n\n**Selecting images** activates the image toolbar, where you can align images left, center, right or expand them to full width.\n\n![](https://paper-attachments.dropbox.com/s_72143DBFDAF4C9DE702BB246920BC47FE7E1FA76AC23CC699374430D94E96DD2_1523473869783_Hot_Sauce.jpg)\n\n\nPaste images or gifs right next to each other and they\'ll organize automatically. Click on an image twice to start full-screen gallery view.\n\n\n![](https://paper-attachments.dropbox.com/s_72143DBFDAF4C9DE702BB246920BC47FE7E1FA76AC23CC699374430D94E96DD2_1523564536543_Clock_Melt.png)\n![](https://paper-attachments.dropbox.com/s_72143DBFDAF4C9DE702BB246920BC47FE7E1FA76AC23CC699374430D94E96DD2_1523564528339_Boom_Box_Melt.png)\n![](https://paper-attachments.dropbox.com/s_72143DBFDAF4C9DE702BB246920BC47FE7E1FA76AC23CC699374430D94E96DD2_1523564549819_Soccerball_Melt.png)\n\n![You can add captions too](https://paper-attachments.dropbox.com/s_72143DBFDAF4C9DE702BB246920BC47FE7E1FA76AC23CC699374430D94E96DD2_1523564518899_Cacti_Melt.png)\n![What a strange, melting toaster!](https://paper-attachments.dropbox.com/s_72143DBFDAF4C9DE702BB246920BC47FE7E1FA76AC23CC699374430D94E96DD2_1523564508553_Toaster_Melt.png)\n\n\n \n\n\n# Form meets function\n\nYou and your team can create the way you want, with what you want. Dropbox Paper adapts to the way your team captures ideas.\n\n**Add media from apps** like YouTube and Vimeo, or add audio from Spotify and SoundCloud. Files from Google Drive and Dropbox update automatically. Start a new line and choose add media, or drop in a link to try it out.\n\n\n![](https://paper-attachments.dropbox.com/s_72143DBFDAF4C9DE702BB246920BC47FE7E1FA76AC23CC699374430D94E96DD2_1523575138939_paper-embed.png)\n\n\n\n## YouTube\nhttps://www.youtube.com/watch?v=fmsq1uKOa08&\n\n\n[https://youtu.be/fmsq1uKOa08](https://youtu.be/fmsq1uKOa08)\n\n\n\n## SoundCloud\nhttps://w.soundcloud.com/player/?url=https%3A%2F%2Fsoundcloud.com%2Ftycho%2Fspoon-inside-out-tycho-version&autoplay=false\n\n\n[https://soundcloud.com/tycho/spoon-inside-out-tycho-version](https://soundcloud.com/tycho/spoon-inside-out-tycho-version) \n\n\n\n## Dropbox files\nhttps://www.dropbox.com/s/bgi58tkovntch5e/Wireframe%20render.pdf?dl=0\n\n\n\n\n## Code\n\n**Write code** in Dropbox Paper with automatic language detection and syntax highlighting. 
Start a new line and type three backticks (```).\n\n\n public class HelloWorld { \n public static void main(String[] args) { \n System.out.println("Hello, World");\n }\n }\n\n\n\n## Tables\n\n**Create a table** with the menu that shows up on the right when you start a new line.\n\n| To insert a row or column, hover over a dividing line and click the + | ⭐ |\n| ------------------------------------------------------------------------------------------------------- | ----- |\n| To delete, select rows/columns and click the trash can | ⭐ ⭐ |\n| To delete the entire table, click inside a cell, then click the dot in the top left corner of the table | ⭐ ⭐ ⭐ |\n\n\n\n\n\n# Collaborate with people\n\n**Invite people to your doc** so they can view, comment, and edit. Invite anyone you’d like—team members, contractors, stakeholders—to give them access to your doc.\n\n![](https://paper-attachments.dropbox.com/s_72143DBFDAF4C9DE702BB246920BC47FE7E1FA76AC23CC699374430D94E96DD2_1523574876795_paper-invite.png)\n\n\n**Make your docs discoverable to your team** by adding them to shared folders. Invite-only folders create more privacy.\n\n\n## Comments\n\n**Add comments** on a single character, an entire document, or any asset by highlighting it. **Add stickers** by clicking the 😄 in the message box.\n\n\n## To-dos\n\n**Bring someone’s attention to a comment or to-do** by typing **@** and their name or email address. Reference a doc or folder by typing **+** and its name.\n\n[ ] Mentioning someone on a to-do assigns it to them and sends an email [@Patricia J](http://#)\n[ ] Add a due date by clicking the calendar icon [@Jonathan C](http://#) [@Patricia J](http://#)\n[ ] You can also mention docs [+🎉 Getting Started with Dropbox Paper](http://#)\n\n\n\n# Go mobile\n\nEdit, create, and share Paper docs on Android or iOS phones and tablets. Download the apps in the [App Store](https://itunes.apple.com/us/app/paper-by-dropbox/id1126623662) and [Google Play Store](https://play.google.com/store/apps/details?id=com.dropbox.paper).\n\n\n\n# Help\n\n**Visit the** [**help center**](https://www.dropbox.com/help/topics/paper) for more about Dropbox Paper.\n\n**For more tips,** click the **?** in the bottom right of the screen and choose **Paper guide**.\n\n**Give us feedback** by selecting “Feedback” from the **?** in the bottom right of the screen. We’d love to hear what you think. \n\n' metadata={'source': 'dropbox:///_ Getting Started with Dropbox Paper.paper', 'title': '_ Getting Started with Dropbox Paper.paper'} page_content='# 🥂 Toast to Droplets\n❓ **Rationale:** Reflection, especially writing, is the key to deep learning! Let’s take a few minutes to reflect on your first day at Dropbox individually, and then one lucky person will have the chance to share their toast.\n\n✍️ **How to fill out this template:**\n\n- Option 1: You can sign in and then click “Create doc” to make a copy of this template. Fill in the blanks!\n- Option 2: If you don’t know your personal Dropbox login quickly, you can copy and paste this text into another word processing tool and start typing! \n\n\n\n## To my Droplet class:\n\nI feel so happy and excited to be making a toast to our newest Droplet class at Dropbox Basecamp.\n\nAt the beginning of our first day, I felt a bit underwhelmed with all information, and now, at the end of our first day at Dropbox, I feel I know enough for me to ramp up, but still a lot to learn**.**\n\nI can’t wait to explore every drl, but especially drl/(App Center)/benefits/allowance. 
I heard it’s so informative!\n\nDesigning an enlightened way of working is important, and to me, it means **a lot since I love what I do and I can help people around the globe**.\n\nI am excited to work with my team and flex my **technical and social** skills in my role as a **Software Engineer**.\n\nAs a Droplet, I pledge to:\n\n\n1. Be worthy of trust by **working always with values and integrity**.\n\n\n1. Keep my customers first by **caring about their happiness and the value that we provide as a company**.\n\n\n1. Own it, keep it simple, and especially make work human by **providing value to people****.**\n\nCongrats, Droplets!\n\n' metadata={'source': 'dropbox:///_ Toast to Droplets.paper', 'title': '_ Toast to Droplets.paper'} page_content='APPEARED IN BULLETIN OF THE AMERICAN MATHEMATICAL SOCIETY Volume 31, Number 1, July 1994, Pages 15-38\n\nA REPORT ON WILES’ CAMBRIDGE LECTURES\n\n4 9 9 1\n\nK. RUBIN AND A. SILVERBERG\n\nl u J\n\nAbstract. In lectures at the Newton Institute in June of 1993, Andrew Wiles announced a proof of a large part of the Taniyama-Shimura Conjecture and, as a consequence, Fermat’s Last Theorem. This report for nonexperts dis- cusses the mathematics involved in Wiles’ lectures, including the necessary background and the mathematical history.\n\n1\n\n] T N . h t a m\n\nIntroduction\n\nOn June 23, 1993, Andrew Wiles wrote on a blackboard, before an audience at the Newton Institute in Cambridge, England, that if p is a prime number, u, v, and w are rational numbers, and up + vp + wp = 0, then uvw = 0. In other words, he announced that he could prove Fermat’s Last Theorem. His announce- ment came at the end of his series of three talks entitled “Modular forms, elliptic curves, and Galois representations” at the week-long workshop on “p-adic Galois representations, Iwasawa theory, and the Tamagawa numbers of motives”.\n\n[\n\n1 v 0 2 2 7 0 4 9 / h t a m : v i X r a\n\nIn the margin of his copy of the works of Diophantus, next to a problem on\n\nPythagorean triples, Pierre de Fermat (1601–1665) wrote:\n\nCubum autem in duos cubos, aut quadratoquadratum in duos quadrato- quadratos, et generaliter nullam in infinitum ultra quadratum potestatem in duos ejusdem nominis fas est dividere : cujus rei demonstrationem mirabilem sane detexi. Hanc marginis exiguitas non caperet.\n\n(It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second into two like powers. I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.)\n\nWe restate Fermat’s conjecture as follows.\n\nFermat’s Last Theorem. If n > 2, then an +bn = cn has no solutions in nonzero integers a, b, and c.\n\nA proof by Fermat has never been found, and the problem has remained open, inspiring many generations of mathematicians. Much of modern number theory has been built on attempts to prove Fermat’s Last Theorem. For details on the\n\nReceived by the editors November 29, 1993. 1991 Mathematics Subject Classification. Primary 11G05; Secondary 11D41, 11G18. The authors thank the National Science Foundation for financial support.\n\nc(cid:13)1994 American Mathematical Society 0273-0979/94 $1.00 + $.25 per page\n\n1\n\n2\n\nK. RUBIN AND A. 
SILVERBERG\n\nhistory of Fermat’s Last Theorem (last because it is the last of Fermat’s questions to be answered) see [5], [6], and [26].\n\nWhat Andrew Wiles announced in Cambridge was that he could prove “many” elliptic curves are modular, sufficiently many to imply Fermat’s Last Theorem. In this paper we will explain Wiles’ work on elliptic curves and its connection with 1 we introduce elliptic curves and modularity, and Fermat’s Last Theorem. give the connection between Fermat’s Last Theorem and the Taniyama-Shimura Conjecture on the modularity of elliptic curves. In 2 we describe how Wiles re- duces the proof of the Taniyama-Shimura Conjecture to what we call the Modular Lifting Conjecture (which can be viewed as a weak form of the Taniyama-Shimura Conjecture), by using a theorem of Langlands and Tunnell. In 4 we show § how the Semistable Modular Lifting Conjecture is related to a conjecture of Mazur on deformations of Galois representations (Conjecture 4.2), and in 5 we describe Wiles’ method of attack on this conjecture. In order to make this survey as acces- sible as possible to nonspecialists, the more technical details are postponed as long as possible, some of them to the appendices.\n\nIn\n\n§\n\n§\n\n3 and §\n\n§\n\nMuch of this report is based on Wiles’ lectures in Cambridge. The authors apol- ogize for any errors we may have introduced. We also apologize to those whose mathematical contributions we, due to our incomplete understanding, do not prop- erly acknowledge.\n\nThe ideas Wiles introduced in his Cambridge lectures will have an important influence on research in number theory. Because of the great interest in this subject and the lack of a publicly available manuscript, we hope this report will be useful to the mathematics community. In early December 1993, shortly before this paper went to press, Wiles announced that “the final calculation of a precise upper bound for the Selmer group in the semistable case” (see 5.4 below) “is not yet § complete as it stands,” but that he believes he will be able to finish it in the near future using the ideas explained in his Cambridge lectures. While Wiles’ proof of Theorem 5.3 below and Fermat’s Last Theorem depends on the calculation he referred to in his December announcement, Theorem 5.4 and Corollary 5.5 do not. Wiles’ work provides for the first time infinitely many modular elliptic curves over the rational numbers which are not isomorphic over the complex numbers (see 5.5 for an explicit infinite family).\n\n5.3 and\n\n§\n\n§\n\nNotation. The integers, rational numbers, complex numbers, and p-adic integers will be denoted Z, Q, C, and Zp, respectively. If F is a field, then ¯F denotes an algebraic closure of F .\n\n1. Connection between Fermat’s Last Theorem and elliptic curves\n\n1.1. Fermat’s Last Theorem follows from modularity of elliptic curves. Suppose Fermat’s Last Theorem were false. Then there would exist nonzero integers a, b, c, and n > 2 such that an + bn = cn. It is easy to see that no generality is lost by assuming that n is a prime greater than three (or greater than four million, by [2]; see [14] for n = 3 and 4) and that a and b are relatively prime. Write down the cubic curve:\n\ny2 = x(x + an)(x\n\nbn).\n\n(1)\n\n−\n\nA REPORT ON WILES’ CAMBRIDGE LECTURES\n\n3\n\n1.4 we will explain what it means for an elliptic curve to be modular. Kenneth Ribet [27] proved that if n is a prime greater than three, a, b, and c are nonzero integers, and an + bn = cn, then the elliptic curve (1) is not modular. 
But the results announced by Wiles imply the following.\n\nIn\n\n1.3 we will see that such curves are elliptic curves, and in\n\n§\n\n§\n\nTheorem 1.1 (Wiles). If A and B are distinct, nonzero, relatively prime integers, and AB(A\n\nB) is divisible by 16, then the elliptic curve\n\n−\n\ny2 = x(x + A)(x + B)\n\nis modular.\n\nbn with a, b, c, and n coming from our hypothetical solution to a Fermat equation as above, we see that the conditions of Theorem 1.1 are satisfied since n 5 and one of a, b, and c is even. Thus Theorem 1.1 and Ribet’s result together imply Fermat’s Last Theorem!\n\nTaking A = an and B =\n\n−\n\n≥\n\n1.2. History. The story of the connection between Fermat’s Last Theorem and elliptic curves begins in 1955, when Yutaka Taniyama (1927–1958) posed problems which may be viewed as a weaker version of the following conjecture (see [38]).\n\nTaniyama-Shimura Conjecture. Every elliptic curve over Q is modular.\n\nThe conjecture in the present form was made by Goro Shimura around 1962–64 and has become better understood due to work of Shimura [33–37] and of Andr´e Weil [42] (see also [7]). The Taniyama-Shimura Conjecture is one of the major conjectures in number theory.\n\nBeginning in the late 1960s [15–18], Yves Hellegouarch connected Fermat equa- tions an + bn = cn with elliptic curves of the form (1) and used results about Fer- mat’s Last Theorem to prove results about elliptic curves. The landscape changed abruptly in 1985 when Gerhard Frey stated in a lecture at Oberwolfach that elliptic curves arising from counterexamples to Fermat’s Last Theorem could not be mod- ular [11]. Shortly thereafter Ribet [27] proved this, following ideas of Jean-Pierre Serre [32] (see [24] for a survey). In other words, “Taniyama-Shimura Conjecture\n\nFermat’s Last Theorem”. Thus, the stage was set. A proof of the Taniyama-Shimura Conjecture (or enough of it to know that elliptic curves coming from Fermat equations are modular) would be a proof of Fermat’s Last Theorem.\n\n⇒\n\n1.3. Elliptic curves.\n\nDefinition. An elliptic curve over Q is a nonsingular curve defined by an equation of the form\n\ny2 + a1xy + a3y = x3 + a2x2 + a4x + a6\n\n(2)\n\nwhere the coefficients ai are integers. The solution ( on the elliptic curve.\n\n, ∞\n\n) will be viewed as a point\n\n∞\n\n4\n\nK. RUBIN AND A. SILVERBERG\n\nRemarks. (i) A singular point on a curve f (x, y) = 0 is a point where both partial derivatives vanish. A curve is nonsingular if it has no singular points.\n\n(ii) Two elliptic curves over Q are isomorphic if one can be obtained from the other by changing coordinates x = A2x′ + B, y = A3y′ + Cx′ + D, with A, B, C, D\n\nQ and dividing through by A6.\n\n∈ (iii) Every elliptic curve over Q is isomorphic to one of the form\n\ny2 = x3 + a2x2 + a4x + a6\n\nwith integers ai. A curve of this form is nonsingular if and only if the cubic on the right side has no repeated roots.\n\nExample. The equation y2 = x(x + 32)(x\n\n42) defines an elliptic curve over Q.\n\n−\n\n1.4. Modularity. Let H denote the complex upper half plane C : Im(z) > 0 } where Im(z) is the imaginary part of z. If N is a positive integer, define a group of matrices\n\nz\n\n{\n\n∈\n\na b c d\n\nSL2(Z) : c is divisible by N\n\n.\n\nΓ0(N ) =\n\n∈\n\n(z) = az+b The group Γ0(N ) acts on H by linear fractional transformations cz+d . (cid:9) (cid:1) The quotient space H/Γ0(N ) is a (noncompact) Riemann surface. It can be com- pleted to a compact Riemann surface, denoted X0(N ), by adjoining a finite set of points called cusps. 
The cusps are the finitely many equivalence classes of Q ∞} under the action of Γ0(N ) (see Chapter 1 of [35]). The complex points of an elliptic curve can also be viewed as a compact Riemann surface.\n\na b c d\n\n(cid:8)(cid:0)\n\n(cid:1)\n\n(cid:0)\n\ni\n\n∪{\n\nDefinition. An elliptic curve E is modular if, for some integer N , there is a holo- morphic map from X0(N ) onto E.\n\nExample. It can be shown that there is a (holomorphic) isomorphism from X0(15) onto the elliptic curve y2 = x(x + 32)(x\n\n42).\n\n−\n\nRemark . There are many equivalent definitions of modularity (see II.4.D of [24] and appendix of [22]). In some cases the equivalence is a deep result. For Wiles’ 1.7 proof of Fermat’s Last Theorem it suffices to use only the definition given in below.\n\n§\n\n§\n\n1.5. Semistability.\n\nDefinition. An elliptic curve over Q is semistable at the prime q if it is isomorphic to an elliptic curve over Q which modulo q either is nonsingular or has a singu- lar point with two distinct tangent directions. An elliptic curve over Q is called semistable if it is semistable at every prime.\n\nExample. The elliptic curve y2 = x(x + 32)(x isomorphic to y2 + xy + y = x3 + x2 x(x + 42)(x\n\n42) is semistable because it is − 10, but the elliptic curve y2 =\n\n10x\n\n−\n\n−\n\n32) is not semistable (it is not semistable at 2).\n\n−\n\n2 we explain how Wiles shows that his main result on Galois representations (Theorem 5.3) implies the following part of the Taniyama-Shimura Conjecture.\n\nBeginning in\n\n§\n\nSemistable Taniyama-Shimura Conjecture. Every semistable elliptic curve over Q is modular.\n\nA REPORT ON WILES’ CAMBRIDGE LECTURES\n\n5\n\nProposition 1.2. The Semistable Taniyama-Shimura Conjecture implies Theorem 1.1.\n\nProof. If A and B are distinct, nonzero, relatively prime integers, write EA,B for the elliptic curve defined by y2 = x(x + A)(x + B). Since EA,B and E−A,−B are isomorphic over the complex numbers (i.e., as Riemann surfaces), EA,B is modular if and only if E−A,−B is modular. If further AB(A B) is divisible by 16, then either EA,B or E−A,−B is semistable (this is easy to check directly; see for example I.1 of [24]). The Semistable Taniyama-Shimura Conjecture now implies that both § EA,B and E−A,−B are modular, and thus implies Theorem 1.1.\n\n−\n\nRemark . In 1.1 we saw that Theorem 1.1 and Ribet’s Theorem together imply Fermat’s Last Theorem. Therefore, the Semistable Taniyama-Shimura Conjecture implies Fermat’s Last Theorem.\n\n§\n\n1.6. Modular forms. In this paper we will work with a definition of modularity which uses modular forms.\n\nDefinition. If N is a positive integer, a modular form f of weight k for Γ0(N ) is C which satisfies a holomorphic function f : H\n\n→\n\nf (γ(z)) = (cz + d)kf (z)\n\na b c d\n\nH,\n\n(3)\n\nΓ0(N ) and z\n\nfor every γ =\n\n∈\n\n∈\n\n(cid:1)\n\n(cid:0)\n\nand is holomorphic at the cusps (see Chapter 2 of [35]).\n\n1 1 0 1\n\nΓ0(N )), so ∞ n=0 ane2πinz, with complex numbers an and it has a Fourier expansion f (z) = (cid:1) . We say f is a cusp form if it with n vanishes at all the cusps; in particular for a cusp form the coefficient a0 (the value at i\n\nA modular form f satisfies f (z) = f (z + 1) (apply (3) to\n\n∈\n\n(cid:0)\n\n0 because f is holomorphic at the cusp i\n\n≥\n\n∞\n\nP\n\n) is zero. 
Call a cusp form normalized if a1 = 1.\n\n∞ For fixed N there are commuting linear operators (called Hecke operators) Tm, 1, on the (finite-dimensional) vector space of cusp forms of weight\n\nfor integers m two for Γ0(N ) (see Chapter 3 of [35]). If f (z) =\n\n≥\n\n∞ n=1 ane2πinz, then\n\nP danm/d2\n\n∞\n\ne2πinz\n\n(4)\n\nTmf (z) =\n\nn=1 X\n\n(d,N )=1 d|(n,m)\n\n(cid:0) X\n\n(cid:1)\n\nwhere (a, b) denotes the greatest common divisor of a and b and a b means that a divides b. The Hecke algebra T (N ) is the ring generated over Z by these operators.\n\n|\n\nDefinition. In this paper an eigenform will mean a normalized cusp form of weight two for some Γ0(N ) which is an eigenfunction for all the Hecke operators.\n\n∞ n=1 ane2πinz is an eigenform, then Tmf = amf for all m.\n\nBy (4), if f (z) =\n\nP\n\n6\n\nK. RUBIN AND A. SILVERBERG\n\n1.7. Modularity, revisited. Suppose E is an elliptic curve over Q. If p is a prime, write Fp for the finite field with p elements, and let E(Fp) denote the Fp- solutions of the equation for E (including the point at infinity). We now give a second definition of modularity for an elliptic curve.\n\nDefinition. An elliptic curve E over Q is modular if there exists an eigenform\n\n∞ n=1 ane2πinz such that for all but finitely many primes q,\n\n#(E(Fq)).\n\n(5) P\n\naq = q + 1\n\n− 2. An overview\n\nThe flow chart shows how Fermat’s Last Theorem would follow if one knew the Semistable Modular Lifting Conjecture (Conjecture 2.1) for the primes 3 and 5. 1 we discussed the upper arrow, i.e., the implication “Semistable Taniyama- In § Fermat’s Last Theorem”. In this section we will discuss the Shimura Conjecture other implications in the flow chart. The implication given by the lowest arrow is straightforward (Proposition 2.3), while the middle one uses an ingenious idea of Wiles (Proposition 2.4).\n\n⇒\n\nFermat’s Last Theorem\n\n✻\n\nSemistable Taniyama-Shimura Conjecture\n\n✻\n\n(cid:0)\n\n❅ ❅\n\n(cid:0)\n\nSemistable Taniyama-Shimura for ¯ρE,3 irreducible\n\nSemistable Modular Lifting for p = 5\n\n✻\n\n(cid:0) (cid:0)\n\n❅\n\n❅\n\nSemistable Modular Lifting for p = 3\n\nLanglands-Tunnell Theorem\n\nSemistable Modular Lifting Conjecture\n\nFermat’s Last Theorem .\n\n⇒\n\nRemark . By the Modular Lifting Conjecture we will mean the Semistable Modular Lifting Conjecture with the hypothesis of semistability removed. The arguments of this section can also be used to show that the Modular Lifting Conjecture for p = 3 and 5, together with the Langlands-Tunnell Theorem, imply the full Taniyama- Shimura Conjecture.\n\nA REPORT ON WILES’ CAMBRIDGE LECTURES\n\n7\n\n2.1. Semistable Modular Lifting. Let ¯Q denote the algebraic closure of Q in C, and let GQ be the Galois group Gal( ¯Q/Q). If p is a prime, write\n\nF× p\n\n¯εp : GQ\n\n→\n\nfor the character giving the action of GQ on the p-th roots of unity. For the facts about elliptic curves stated below, see [39]. If E is an elliptic curve over Q and F is a subfield of the complex numbers, there is a natural commutative group law on the set of F -solutions of E, with the point at infinity as the identity element. Denote this group E(F ). If p is a prime, write E[p] for the subgroup of points in E( ¯Q) of order dividing p. Then E[p] ∼= F2 p. 
The action of GQ on E[p] gives a continuous representation\n\nGL2(Fp)\n\n¯ρE,p : GQ\n\n→\n\n(defined up to isomorphism) such that\n\n(6)\n\ndet(¯ρE,p) = ¯εp\n\nand for all but finitely many primes q,\n\n#(E(Fq))\n\n(7)\n\ntrace(¯ρE,p(Frobq))\n\nq + 1\n\n(mod p).\n\n≡ (See Appendix A for the definition of the Frobenius elements Frobq ∈ to each prime number q.)\n\n−\n\nGQ attached\n\n∞ n=1 ane2πinz is an eigenform, let\n\nOf denote the ring of integers of the number field Q(a2, a3, . . . ). (Recall that our eigenforms are normalized so that a1 = 1.)\n\nIf f (z) =\n\nP\n\nThe following conjecture is in the spirit of a conjecture of Mazur (see Conjectures\n\n3.2 and 4.2).\n\nConjecture 2.1 (Semistable Modular Lifting Conjecture). Suppose p is an odd prime and E is a semistable elliptic curve over Q satisfying\n\n(a) ¯ρE,p is irreducible, (b) there are an eigenform f (z) =\n\n∞ n=1 ane2πinz and a prime ideal λ of\n\nOf\n\nsuch that p\n\nλ and for all but finitely many primes q,\n\n∈\n\nP\n\n#(E(Fq))\n\naq ≡\n\nq + 1\n\n(mod λ).\n\n−\n\nThen E is modular.\n\nThe Semistable Modular Lifting Conjecture is a priori weaker than the Semi- stable Taniyama-Shimura Conjecture because of the extra hypotheses (a) and (b). The more serious condition is (b); there is no known way to produce such a form in general. But when p = 3, the existence of such a form follows from the theorem below of Tunnell [41] and Langlands [20]. Wiles then gets around condition (a) by a clever argument (described below) which, when ¯ρE,3 is not irreducible, allows him to use p = 5 instead.\n\n8\n\nK. RUBIN AND A. SILVERBERG\n\n2.2. Langlands-Tunnell Theorem. In order to state the Langlands-Tunnell Theorem, we need weight-one modular forms for a subgroup of Γ0(N ). Let\n\na b c d\n\nSL2(Z) : c\n\n0 (mod N ), a\n\nd\n\n1 (mod N )\n\n.\n\nΓ1(N ) =\n\n∈\n\n≡\n\n≡\n\n≡\n\n(cid:1)\n\n(cid:9)\n\n(cid:8)(cid:0)\n\nReplacing Γ0(N ) by Γ1(N ) in 1.6, one can define the notion of cusp forms on § Γ1(N ). See Chapter 3 of [35] for the definitions of the Hecke operators on the space of weight-one cusp forms for Γ1(N ).\n\nTheorem 2.2 (Langlands-Tunnell). Suppose ρ : GQ GL2(C) is a continuous irreducible representation whose image in PGL2(C) is a subgroup of S4 (the sym- metric group on four elements ), τ is complex conjugation, and det(ρ(τ )) = 1. ∞ n=1 bne2πinz for some Γ1(N ), which is an Then there is a weight-one cusp form eigenfunction for all the corresponding Hecke operators, such that for all but finitely many primes q,\n\n→\n\n−\n\nP\n\n(8)\n\nbq = trace(ρ(Frobq)).\n\nThe theorem as stated by Langlands [20] and by Tunnell [41] produces an auto- morphic representation rather than a cusp form. Using the fact that det(ρ(τ )) = 1, standard techniques (see for example [12]) show that this automorphic repre-\n\n− sentation corresponds to a weight-one cusp form as in Theorem 2.2.\n\n2.3. Semistable Modular Lifting\n\nSemistable Taniyama-Shimura.\n\n⇒\n\nProposition 2.3. Suppose the Semistable Modular Lifting Conjecture is true for p = 3, E is a semistable elliptic curve, and ¯ρE,3 is irreducible. Then E is modular.\n\nProof. It suffices to show that hypothesis (b) of the Semistable Modular Lifting Conjecture is satisfied with the given curve E, for p = 3. 
There is a faithful representation\n\nGL2(Z[√\n\nGL2(C)\n\nψ : GL2(F3) ֒\n\n2])\n\n−\n\n⊂\n\n→\n\nGL2(F3),\n\nsuch that for every g\n\n∈ trace(ψ(g))\n\n(mod(1 + √\n\n(9)\n\ntrace(g)\n\n2))\n\n≡\n\n−\n\nand\n\n(10)\n\ndet(ψ(g))\n\ndet(g)\n\n(mod 3).\n\n≡\n\nExplicitly, ψ can be defined on generators of GL2(F3) by\n\n√\n\n1 1 1 0\n\n1 1 1 0\n\n1 1\n\n1 1\n\n2 1 1 0\n\n.\n\nψ\n\n=\n\nand ψ\n\n=\n\n− −\n\n− −\n\n−\n\n−\n\n(cid:19)\n\n(cid:18)(cid:18)\n\n(cid:19)(cid:19)\n\n(cid:18)\n\n(cid:18)(cid:18) ¯ρE,3. If τ is complex conjugation, then it follows from (6) and (10) that 1. The image of ψ in PGL2(C) is a subgroup of PGL2(F3) ∼= S4.\n\n(cid:19)\n\n(cid:19)(cid:19)\n\n(cid:18)\n\nLet ρ = ψ ◦ det(ρ(τ )) = Using that ¯ρE,3 is irreducible, one can show that ρ is irreducible.\n\n−\n\n∞ n=1 bne2πinz be a weight-one cusp form for some Γ1(N ) obtained by applying the Langlands-Tunnell\n\nLet p be a prime of ¯Q containing 1 + √\n\n2. Let g(z) =\n\n−\n\nP\n\nA REPORT ON WILES’ CAMBRIDGE LECTURES\n\n9\n\nTheorem (Theorem 2.2) to ρ. It follows from (6) and (10) that N is divisible by 3. The function\n\n0 if d 1 if d 1 if d\n\n0 (mod 3), 1 (mod 3), 2 (mod 3)\n\n∞\n\n≡ ≡ ≡\n\nχ(d)e2πinz where χ(d) =\n\nE(z) = 1 + 6\n\n\uf8f1 \uf8f2\n\nn=1 X\n\nXd|n\n\n−\n\n∞ n=1 cne2πinz is a weight-one modular form for Γ1(3). The product g(z)E(z) = It is now is a weight-two cusp form for Γ0(N ) with cn ≡ bn possible to find an eigenform f (z) = (mod p) for every n (see 6.10 and 6.11 of [4]). By (7), (8), and (9), f satisfies (b) of the Semistable Modular Lifting Conjecture with p = 3 and with λ = p\n\n\uf8f3\n\nbn (mod p) for all n. P n=1 ane2πinz on Γ0(N ) such that an ≡ ∩ Of .\n\n∞\n\nP\n\nProposition 2
481
https://python.langchain.com/docs/integrations/document_loaders/duckdb
ComponentsDocument loadersDuckDBOn this pageDuckDBDuckDB is an in-process SQL OLAP database management system.Load a DuckDB query with one document per row.#!pip install duckdbfrom langchain.document_loaders import DuckDBLoaderFirst, write a small example.csv file for the examples below:
Team,Payroll
Nationals,81.34
Reds,82.20
 Writing example.csvloader = DuckDBLoader("SELECT * FROM read_csv_auto('example.csv')")data = loader.load()print(data) [Document(page_content='Team: Nationals\nPayroll: 81.34', metadata={}), Document(page_content='Team: Reds\nPayroll: 82.2', metadata={})]Specifying Which Columns are Content vs Metadata​loader = DuckDBLoader( "SELECT * FROM read_csv_auto('example.csv')", page_content_columns=["Team"], metadata_columns=["Payroll"],)data = loader.load()print(data) [Document(page_content='Team: Nationals', metadata={'Payroll': 81.34}), Document(page_content='Team: Reds', metadata={'Payroll': 82.2})]Adding Source to Metadata​loader = DuckDBLoader( "SELECT Team, Payroll, Team As source FROM read_csv_auto('example.csv')", metadata_columns=["source"],)data = loader.load()print(data) [Document(page_content='Team: Nationals\nPayroll: 81.34\nsource: Nationals', metadata={'source': 'Nationals'}), Document(page_content='Team: Reds\nPayroll: 82.2\nsource: Reds', metadata={'source': 'Reds'})]PreviousDropboxNextEmailSpecifying Which Columns are Content vs MetadataAdding Source to Metadata
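Since DuckDBLoader accepts arbitrary SQL, aggregations work the same way as the plain selects shown above. A minimal sketch against the example.csv created earlier (the aliases teams and avg_payroll are hypothetical):
from langchain.document_loaders import DuckDBLoader

# One document per result row; each selected column becomes a "key: value" line.
loader = DuckDBLoader(
    "SELECT count(*) AS teams, avg(Payroll) AS avg_payroll "
    "FROM read_csv_auto('example.csv')"
)
data = loader.load()
print(data)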
482
https://python.langchain.com/docs/integrations/document_loaders/email
ComponentsDocument loadersEmailOn this pageEmailThis notebook shows how to load email (.eml) or Microsoft Outlook (.msg) files.Using Unstructured​#!pip install unstructuredfrom langchain.document_loaders import UnstructuredEmailLoaderloader = UnstructuredEmailLoader("example_data/fake-email.eml")data = loader.load()data [Document(page_content='This is a test email to use for unit tests.\n\nImportant points:\n\nRoses are red\n\nViolets are blue', metadata={'source': 'example_data/fake-email.eml'})]Retain Elements​Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".loader = UnstructuredEmailLoader("example_data/fake-email.eml", mode="elements")data = loader.load()data[0] Document(page_content='This is a test email to use for unit tests.', metadata={'source': 'example_data/fake-email.eml', 'filename': 'fake-email.eml', 'file_directory': 'example_data', 'date': '2022-12-16T17:04:16-05:00', 'filetype': 'message/rfc822', 'sent_from': ['Matthew Robinson <mrobinson@unstructured.io>'], 'sent_to': ['Matthew Robinson <mrobinson@unstructured.io>'], 'subject': 'Test Email', 'category': 'NarrativeText'})Processing Attachments​You can process attachments with UnstructuredEmailLoader by setting process_attachments=True in the constructor. By default, attachments will be partitioned using the partition function from unstructured. You can use a different partitioning function by passing the function to the attachment_partitioner kwarg.loader = UnstructuredEmailLoader( "example_data/fake-email.eml", mode="elements", process_attachments=True,)data = loader.load()data[0] Document(page_content='This is a test email to use for unit tests.', metadata={'source': 'example_data/fake-email.eml', 'filename': 'fake-email.eml', 'file_directory': 'example_data', 'date': '2022-12-16T17:04:16-05:00', 'filetype': 'message/rfc822', 'sent_from': ['Matthew Robinson <mrobinson@unstructured.io>'], 'sent_to': ['Matthew Robinson <mrobinson@unstructured.io>'], 'subject': 'Test Email', 'category': 'NarrativeText'})Using OutlookMessageLoader​#!pip install extract_msgfrom langchain.document_loaders import OutlookMessageLoaderloader = OutlookMessageLoader("example_data/fake-email.msg")data = loader.load()data[0] Document(page_content='This is a test email to experiment with the MS Outlook MSG Extractor\r\n\r\n\r\n-- \r\n\r\n\r\nKind regards\r\n\r\n\r\n\r\n\r\nBrian Zhou\r\n\r\n', metadata={'subject': 'Test for TIF files', 'sender': 'Brian Zhou <brizhou@gmail.com>', 'date': 'Mon, 18 Nov 2013 16:26:24 +0800'})PreviousDuckDBNextEmbaasUsing UnstructuredRetain ElementsProcessing AttachmentsUsing OutlookMessageLoader
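To load a whole folder of .eml files instead of a single message, UnstructuredEmailLoader can be combined with the generic DirectoryLoader. A minimal sketch, assuming the emails live under example_data/:
from langchain.document_loaders import DirectoryLoader, UnstructuredEmailLoader

# Instantiate UnstructuredEmailLoader once per matching .eml file in the folder.
loader = DirectoryLoader(
    "example_data/", glob="**/*.eml", loader_cls=UnstructuredEmailLoader
)
docs = loader.load()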
483
https://python.langchain.com/docs/integrations/document_loaders/embaas
embaas is a fully managed NLP API service that offers features like embedding generation, document text extraction, document-to-embeddings conversion, and more. You can choose from a variety of pre-trained models.

Prerequisites

Create a free embaas account at https://embaas.io/register and generate an API key.

Document Text Extraction API

The document text extraction API allows you to extract the text from a given document. The API supports a variety of document formats, including PDF, mp3, mp4 and more. For a full list of supported formats, check out the API docs (link below).

import os

# Set API key
embaas_api_key = "YOUR_API_KEY"
# or set environment variable
os.environ["EMBAAS_API_KEY"] = "YOUR_API_KEY"

Using a blob (bytes)

from langchain.document_loaders.embaas import EmbaasBlobLoader
from langchain.document_loaders.blob_loaders import Blob

blob_loader = EmbaasBlobLoader()
blob = Blob.from_path("example.pdf")
documents = blob_loader.load(blob)

# You can also directly create embeddings with your preferred embeddings model
blob_loader = EmbaasBlobLoader(params={"model": "e5-large-v2", "should_embed": True})
blob = Blob.from_path("example.pdf")
documents = blob_loader.load(blob)
print(documents[0].metadata["embedding"])

Using a file

from langchain.document_loaders.embaas import EmbaasLoader

file_loader = EmbaasLoader(file_path="example.pdf")
documents = file_loader.load()

# Disable automatic text splitting
file_loader = EmbaasLoader(file_path="example.mp3", params={"should_chunk": False})
documents = file_loader.load()

For more detailed information about the embaas document text extraction API, please refer to the official embaas API documentation.
484
https://python.langchain.com/docs/integrations/document_loaders/epub
EPUB is an e-book file format that uses the ".epub" file extension. The term is short for electronic publication and is sometimes styled ePub. EPUB is supported by many e-readers, and compatible software is available for most smartphones, tablets, and computers.

This covers how to load .epub documents into the Document format that we can use downstream. You'll need to install the pandoc package for this loader to work.

#!pip install pandoc

from langchain.document_loaders import UnstructuredEPubLoader

loader = UnstructuredEPubLoader("winter-sports.epub")
data = loader.load()

Retain Elements

Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".

loader = UnstructuredEPubLoader("winter-sports.epub", mode="elements")
data = loader.load()
data[0]

Document(page_content='The Project Gutenberg eBook of Winter Sports in\nSwitzerland, by E. F. Benson', lookup_str='', metadata={'source': 'winter-sports.epub', 'page_number': 1, 'category': 'Title'}, lookup_index=0)
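A minimal sketch of one common downstream step, splitting the loaded book into chunks for embedding or QA; the chunk sizes here are arbitrary choices, not recommendations:

from langchain.document_loaders import UnstructuredEPubLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

loader = UnstructuredEPubLoader("winter-sports.epub")
data = loader.load()

# Split the loaded document(s) into overlapping chunks.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(data)
print(len(chunks))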
485
https://python.langchain.com/docs/integrations/document_loaders/etherscan
Etherscan is the leading blockchain explorer, search, API and analytics platform for Ethereum, a decentralized smart contracts platform.

Overview

The Etherscan loader uses the Etherscan API to load transaction histories under a specific account on the Ethereum mainnet. You will need an Etherscan API key to proceed. The free API key has a quota of 5 calls per second.

The loader supports the following six functionalities:

Retrieve normal transactions under a specific account
Retrieve internal transactions under a specific account
Retrieve ERC-20 transactions under a specific account
Retrieve ERC-721 transactions under a specific account
Retrieve ERC-1155 transactions under a specific account
Retrieve the Ether balance in wei of a specific account

If the account has no corresponding transactions, the loader returns a list with one document whose content is ''.

You can pass different filters to the loader to access the functionalities mentioned above:

"normal_transaction"
"internal_transaction"
"erc20_transaction"
"eth_balance"
"erc721_transaction"
"erc1155_transaction"

The filter defaults to normal_transaction.

If you have any questions, you can consult the Etherscan API docs or contact me via i@inevitable.tech.

All functions related to transaction histories are restricted to a maximum of 1000 entries because of an Etherscan limit. You can use the following parameters to find the transaction histories you need:

offset: defaults to 20. Shows 20 transactions at a time.
page: defaults to 1. This controls pagination.
start_block: defaults to 0. Transaction histories start from block 0.
end_block: defaults to 99999999. Transaction histories end at block 99999999.
sort: "desc" or "asc".
Defaults to "desc", returning the latest transactions first.

Setup

%pip install langchain -q

from langchain.document_loaders import EtherscanLoader
import os

os.environ["ETHERSCAN_API_KEY"] = "YOUR_API_KEY"  # your Etherscan API key

Create an ERC-20 transaction loader

account_address = "0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b"
loader = EtherscanLoader(account_address, filter="erc20_transaction")
result = loader.load()
eval(result[0].page_content)

{'blockNumber': '13242975', 'timeStamp': '1631878751', 'hash': '0x366dda325b1a6570928873665b6b418874a7dedf7fee9426158fa3536b621788', 'nonce': '28', 'blockHash': '0x5469dba1b1e1372962cf2be27ab2640701f88c00640c4d26b8cc2ae9ac256fb6', 'from': '0x2ceee24f8d03fc25648c68c8e6569aa0512f6ac3', 'contractAddress': '0x2ceee24f8d03fc25648c68c8e6569aa0512f6ac3', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '298131000000000', 'tokenName': 'ABCHANGE.io', 'tokenSymbol': 'XCH', 'tokenDecimal': '9', 'transactionIndex': '71', 'gas': '15000000', 'gasPrice': '48614996176', 'gasUsed': '5712724', 'cumulativeGasUsed': '11507920', 'input': 'deprecated', 'confirmations': '4492277'}

Create a normal transaction loader with customized parameters

loader = EtherscanLoader(
    account_address,
    page=2,
    offset=20,
    start_block=10000,
    end_block=8888888888,
    sort="asc",
)
result = loader.load()
result

20

[Document(page_content="{'blockNumber': '1723771', 'timeStamp': '1466213371', 'hash': '0xe00abf5fa83a4b23ee1cc7f07f9dda04ab5fa5efe358b315df8b76699a83efc4', 'nonce': '3155', 'blockHash': '0xc2c2207bcaf341eed07f984c9a90b3f8e8bdbdbd2ac6562f8c2f5bfa4b51299d', 'transactionIndex': '5', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '13149213761000000000', 'gas': '90000', 'gasPrice': '22655598156', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '126000', 'gasUsed': '21000', 'confirmations': '16011481', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0xe00abf5fa83a4b23ee1cc7f07f9dda04ab5fa5efe358b315df8b76699a83efc4', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1727090', 'timeStamp': '1466262018', 'hash': '0xd5a779346d499aa722f72ffe7cd3c8594a9ddd91eb7e439e8ba92ceb7bc86928', 'nonce': '3267', 'blockHash': '0xc0cff378c3446b9b22d217c2c5f54b1c85b89a632c69c55b76cdffe88d2b9f4d', 'transactionIndex': '20', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11521979886000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '3806725', 'gasUsed': '21000', 'confirmations': '16008162', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0xd5a779346d499aa722f72ffe7cd3c8594a9ddd91eb7e439e8ba92ceb7bc86928', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1730337', 'timeStamp': '1466308222', 'hash': '0xceaffdb3766d2741057d402738eb41e1d1941939d9d438c102fb981fd47a87a4', 'nonce': '3344', 'blockHash': '0x3a52d28b8587d55c621144a161a0ad5c37dd9f7d63b629ab31da04fa410b2cfa', 'transactionIndex': '1', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9783400526000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '',
'cumulativeGasUsed': '60788', 'gasUsed': '21000', 'confirmations': '16004915', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0xceaffdb3766d2741057d402738eb41e1d1941939d9d438c102fb981fd47a87a4', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1733479', 'timeStamp': '1466352351', 'hash': '0x720d79bf78775f82b40280aae5abfc347643c5f6708d4bf4ec24d65cd01c7121', 'nonce': '3367', 'blockHash': '0x9928661e7ae125b3ae0bcf5e076555a3ee44c52ae31bd6864c9c93a6ebb3f43e', 'transactionIndex': '0', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '1570706444000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '16001773', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0x720d79bf78775f82b40280aae5abfc347643c5f6708d4bf4ec24d65cd01c7121', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1734172', 'timeStamp': '1466362463', 'hash': '0x7a062d25b83bafc9fe6b22bc6f5718bca333908b148676e1ac66c0adeccef647', 'nonce': '1016', 'blockHash': '0x8a8afe2b446713db88218553cfb5dd202422928e5e0bc00475ed2f37d95649de', 'transactionIndex': '4', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '6322276709000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '105333', 'gasUsed': '21000', 'confirmations': '16001080', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x7a062d25b83bafc9fe6b22bc6f5718bca333908b148676e1ac66c0adeccef647', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1737276', 'timeStamp': '1466406037', 'hash': '0xa4e89bfaf075abbf48f96700979e6c7e11a776b9040113ba64ef9c29ac62b19b', 'nonce': '1024', 'blockHash': '0xe117cad73752bb485c3bef24556e45b7766b283229180fcabc9711f3524b9f79', 'transactionIndex': '35', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9976891868000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '3187163', 'gasUsed': '21000', 'confirmations': '15997976', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xa4e89bfaf075abbf48f96700979e6c7e11a776b9040113ba64ef9c29ac62b19b', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1740314', 'timeStamp': '1466450262', 'hash': '0x6e1a22dcc6e2c77a9451426fb49e765c3c459dae88350e3ca504f4831ec20e8a', 'nonce': '1051', 'blockHash': '0x588d17842819a81afae3ac6644d8005c12ce55ddb66c8d4c202caa91d4e8fdbe', 'transactionIndex': '6', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '8060633765000000000', 'gas': '90000', 'gasPrice': '22926905859', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '153077', 'gasUsed': '21000', 'confirmations': '15994938', 'methodId': '0x', 'functionName': ''}", 
metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x6e1a22dcc6e2c77a9451426fb49e765c3c459dae88350e3ca504f4831ec20e8a', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1743384', 'timeStamp': '1466494099', 'hash': '0xdbfcc15f02269fc3ae27f69e344a1ac4e08948b12b76ebdd78a64d8cafd511ef', 'nonce': '1068', 'blockHash': '0x997245108c84250057fda27306b53f9438ad40978a95ca51d8fd7477e73fbaa7', 'transactionIndex': '2', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9541921352000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '119650', 'gasUsed': '21000', 'confirmations': '15991868', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xdbfcc15f02269fc3ae27f69e344a1ac4e08948b12b76ebdd78a64d8cafd511ef', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1746405', 'timeStamp': '1466538123', 'hash': '0xbd4f9602f7fff4b8cc2ab6286efdb85f97fa114a43f6df4e6abc88e85b89e97b', 'nonce': '1092', 'blockHash': '0x3af3966cdaf22e8b112792ee2e0edd21ceb5a0e7bf9d8c168a40cf22deb3690c', 'transactionIndex': '0', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '8433783799000000000', 'gas': '90000', 'gasPrice': '25689279306', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '15988847', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xbd4f9602f7fff4b8cc2ab6286efdb85f97fa114a43f6df4e6abc88e85b89e97b', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1749459', 'timeStamp': '1466582044', 'hash': '0x28c327f462cc5013d81c8682c032f014083c6891938a7bdeee85a1c02c3e9ed4', 'nonce': '1096', 'blockHash': '0x5fc5d2a903977b35ce1239975ae23f9157d45d7bd8a8f6205e8ce270000797f9', 'transactionIndex': '1', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '10269065805000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '42000', 'gasUsed': '21000', 'confirmations': '15985793', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x28c327f462cc5013d81c8682c032f014083c6891938a7bdeee85a1c02c3e9ed4', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1752614', 'timeStamp': '1466626168', 'hash': '0xc3849e550ca5276d7b3c51fa95ad3ae62c1c164799d33f4388fe60c4e1d4f7d8', 'nonce': '1118', 'blockHash': '0x88ef054b98e47504332609394e15c0a4467f84042396717af6483f0bcd916127', 'transactionIndex': '11', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11325836780000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '252000', 'gasUsed': '21000', 'confirmations': '15982638', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': 
'0xc3849e550ca5276d7b3c51fa95ad3ae62c1c164799d33f4388fe60c4e1d4f7d8', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1755659', 'timeStamp': '1466669931', 'hash': '0xb9f891b7c3d00fcd64483189890591d2b7b910eda6172e3bf3973c5fd3d5a5ae', 'nonce': '1133', 'blockHash': '0x2983972217a91343860415d1744c2a55246a297c4810908bbd3184785bc9b0c2', 'transactionIndex': '14', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '13226475343000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '2674679', 'gasUsed': '21000', 'confirmations': '15979593', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xb9f891b7c3d00fcd64483189890591d2b7b910eda6172e3bf3973c5fd3d5a5ae', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1758709', 'timeStamp': '1466713652', 'hash': '0xd6cce5b184dc7fce85f305ee832df647a9c4640b68e9b79b6f74dc38336d5622', 'nonce': '1147', 'blockHash': '0x1660de1e73067251be0109d267a21ffc7d5bde21719a3664c7045c32e771ecf9', 'transactionIndex': '1', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9758447294000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '42000', 'gasUsed': '21000', 'confirmations': '15976543', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xd6cce5b184dc7fce85f305ee832df647a9c4640b68e9b79b6f74dc38336d5622', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1761783', 'timeStamp': '1466757809', 'hash': '0xd01545872629956867cbd65fdf5e97d0dde1a112c12e76a1bfc92048d37f650f', 'nonce': '1169', 'blockHash': '0x7576961afa4218a3264addd37a41f55c444dd534e9410dbd6f93f7fe20e0363e', 'transactionIndex': '2', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '10197126683000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '63000', 'gasUsed': '21000', 'confirmations': '15973469', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xd01545872629956867cbd65fdf5e97d0dde1a112c12e76a1bfc92048d37f650f', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1764895', 'timeStamp': '1466801683', 'hash': '0x620b91b12af7aac75553b47f15742e2825ea38919cfc8082c0666f404a0db28b', 'nonce': '1186', 'blockHash': '0x2e687643becd3c36e0c396a02af0842775e17ccefa0904de5aeca0a9a1aa795e', 'transactionIndex': '7', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '8690241462000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '168000', 'gasUsed': '21000', 'confirmations': '15970357', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x620b91b12af7aac75553b47f15742e2825ea38919cfc8082c0666f404a0db28b', 'to': 
'0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1767936', 'timeStamp': '1466845682', 'hash': '0x758efa27576cd17ebe7b842db4892eac6609e3962a4f9f57b7c84b7b1909512f', 'nonce': '1211', 'blockHash': '0xb01d8fd47b3554a99352ac3e5baf5524f314cfbc4262afcfbea1467b2d682898', 'transactionIndex': '0', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11914401843000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '15967316', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x758efa27576cd17ebe7b842db4892eac6609e3962a4f9f57b7c84b7b1909512f', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1770911', 'timeStamp': '1466888890', 'hash': '0x9d84470b54ab44b9074b108a0e506cd8badf30457d221e595bb68d63e926b865', 'nonce': '1212', 'blockHash': '0x79a9de39276132dab8bf00dc3e060f0e8a14f5e16a0ee4e9cc491da31b25fe58', 'transactionIndex': '0', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '10918214730000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '15964341', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x9d84470b54ab44b9074b108a0e506cd8badf30457d221e595bb68d63e926b865', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1774044', 'timeStamp': '1466932983', 'hash': '0x958d85270b58b80f1ad228f716bbac8dd9da7c5f239e9f30d8edeb5bb9301d20', 'nonce': '1240', 'blockHash': '0x69cee390378c3b886f9543fb3a1cb2fc97621ec155f7884564d4c866348ce539', 'transactionIndex': '2', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9979637283000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '63000', 'gasUsed': '21000', 'confirmations': '15961208', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x958d85270b58b80f1ad228f716bbac8dd9da7c5f239e9f30d8edeb5bb9301d20', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1777057', 'timeStamp': '1466976422', 'hash': '0xe76ca3603d2f4e7134bdd7a1c3fd553025fc0b793f3fd2a75cd206b8049e74ab', 'nonce': '1248', 'blockHash': '0xc7cacda0ac38c99f1b9bccbeee1562a41781d2cfaa357e8c7b4af6a49584b968', 'transactionIndex': '7', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '4556173496000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '168000', 'gasUsed': '21000', 'confirmations': '15958195', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xe76ca3603d2f4e7134bdd7a1c3fd553025fc0b793f3fd2a75cd206b8049e74ab', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}), Document(page_content="{'blockNumber': '1780120', 'timeStamp': 
'1467020353', 'hash': '0xc5ec8cecdc9f5ed55a5b8b0ad79c964fb5c49dc1136b6a49e981616c3e70bbe6', 'nonce': '1266', 'blockHash': '0xfc0e066e5b613239e1a01e6d582e7ab162ceb3ca4f719dfbd1a0c965adcfe1c5', 'transactionIndex': '1', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11890330240000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '42000', 'gasUsed': '21000', 'confirmations': '15955132', 'methodId': '0x', 'functionName': ''}", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xc5ec8cecdc9f5ed55a5b8b0ad79c964fb5c49dc1136b6a49e981616c3e70bbe6', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'})]
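The other filters work the same way. A minimal sketch, reusing the account_address defined above, of the "eth_balance" filter from the list at the start of this page; it returns the account balance in wei as a single document:

# Fetch the account's Ether balance rather than a transaction history.
balance_loader = EtherscanLoader(account_address, filter="eth_balance")
balance_docs = balance_loader.load()
# The balance in wei arrives as the page content of one document.
print(balance_docs[0].page_content)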
486
https://python.langchain.com/docs/integrations/document_loaders/evernote
EverNote is intended for archiving and creating notes in which photos, audio and saved web content can be embedded. Notes are stored in virtual "notebooks" and can be tagged, annotated, edited, searched, and exported.

This notebook shows how to load an EverNote export file (.enex) from disk. Depending on the load_single_document flag, the loader produces either a single combined document or one document per note.

# lxml and html2text are required to parse EverNote notes
# !pip install lxml
# !pip install html2text

from langchain.document_loaders import EverNoteLoader

# By default all notes are combined into a single Document
loader = EverNoteLoader("example_data/testing.enex")
loader.load()

[Document(page_content='testing this\n\nwhat happens?\n\nto the world?**Jan - March 2022**', metadata={'source': 'example_data/testing.enex'})]

# It's likely more useful to return a Document for each note
loader = EverNoteLoader("example_data/testing.enex", load_single_document=False)
loader.load()

[Document(page_content='testing this\n\nwhat happens?\n\nto the world?', metadata={'title': 'testing', 'created': time.struct_time(tm_year=2023, tm_mon=2, tm_mday=9, tm_hour=3, tm_min=47, tm_sec=46, tm_wday=3, tm_yday=40, tm_isdst=-1), 'updated': time.struct_time(tm_year=2023, tm_mon=2, tm_mday=9, tm_hour=3, tm_min=53, tm_sec=28, tm_wday=3, tm_yday=40, tm_isdst=-1), 'note-attributes.author': 'Harrison Chase', 'source': 'example_data/testing.enex'}), Document(page_content='**Jan - March 2022**', metadata={'title': 'Summer Training Program', 'created': time.struct_time(tm_year=2022, tm_mon=12, tm_mday=27, tm_hour=1, tm_min=59, tm_sec=48, tm_wday=1, tm_yday=361, tm_isdst=-1), 'note-attributes.author': 'Mike McGarry', 'note-attributes.source': 'mobile.iphone', 'source': 'example_data/testing.enex'})]
487
https://python.langchain.com/docs/integrations/document_loaders/example_data/notebook
This notebook covers how to load data from an .ipynb notebook into a format suitable for LangChain.

from langchain.document_loaders import NotebookLoader

loader = NotebookLoader("example_data/notebook.ipynb")

NotebookLoader.load() loads the .ipynb notebook file into a Document object.

Parameters:

include_outputs (bool): whether to include cell outputs in the resulting document (default is False).
max_output_length (int): the maximum number of characters to include from each cell output (default is 10).
remove_newline (bool): whether to remove newline characters from the cell sources and outputs (default is False).
traceback (bool): whether to include full traceback (default is False).

loader.load(include_outputs=True, max_output_length=20, remove_newline=True)
488
https://python.langchain.com/docs/integrations/document_loaders/excel
The UnstructuredExcelLoader is used to load Microsoft Excel files. The loader works with both .xlsx and .xls files. The page content will be the raw text of the Excel file. If you use the loader in "elements" mode, an HTML representation of the Excel file will be available in the document metadata under the text_as_html key.

from langchain.document_loaders import UnstructuredExcelLoader

loader = UnstructuredExcelLoader("example_data/stanley-cups.xlsx", mode="elements")
docs = loader.load()
docs[0]

Document(page_content='\n \n \n Team\n Location\n Stanley Cups\n \n \n Blues\n STL\n 1\n \n \n Flyers\n PHI\n 2\n \n \n Maple Leafs\n TOR\n 13\n \n \n', metadata={'source': 'example_data/stanley-cups.xlsx', 'filename': 'stanley-cups.xlsx', 'file_directory': 'example_data', 'filetype': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'page_number': 1, 'page_name': 'Stanley Cups', 'text_as_html': '<table border="1" class="dataframe">\n <tbody>\n <tr>\n <td>Team</td>\n <td>Location</td>\n <td>Stanley Cups</td>\n </tr>\n <tr>\n <td>Blues</td>\n <td>STL</td>\n <td>1</td>\n </tr>\n <tr>\n <td>Flyers</td>\n <td>PHI</td>\n <td>2</td>\n </tr>\n <tr>\n <td>Maple Leafs</td>\n <td>TOR</td>\n <td>13</td>\n </tr>\n </tbody>\n</table>', 'category': 'Table'})
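A minimal sketch of using that text_as_html metadata downstream, assuming pandas (plus an HTML parser such as lxml) is installed; it rebuilds a DataFrame from the HTML table produced in "elements" mode:

import io
import pandas as pd

# Rebuild a tabular view from the HTML table stored in the metadata.
html = docs[0].metadata["text_as_html"]
df = pd.read_html(io.StringIO(html))[0]
print(df)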
489
https://python.langchain.com/docs/integrations/document_loaders/facebook_chat
Messenger is an American proprietary instant messaging app and platform developed by Meta Platforms. Originally developed as Facebook Chat in 2008, the company revamped its messaging service in 2010.

This notebook covers how to load data from Facebook Chats into a format that can be ingested into LangChain.

# pip install pandas

from langchain.document_loaders import FacebookChatLoader

loader = FacebookChatLoader("example_data/facebook_chat.json")
loader.load()

[Document(page_content='User 2 on 2023-02-05 03:46:11: Bye!\n\nUser 1 on 2023-02-05 03:43:55: Oh no worries! Bye\n\nUser 2 on 2023-02-05 03:24:37: No Im sorry it was my mistake, the blue one is not for sale\n\nUser 1 on 2023-02-05 03:05:40: I thought you were selling the blue one!\n\nUser 1 on 2023-02-05 03:05:09: Im not interested in this bag. Im interested in the blue one!\n\nUser 2 on 2023-02-05 03:04:28: Here is $129\n\nUser 2 on 2023-02-05 03:04:05: Online is at least $100\n\nUser 1 on 2023-02-05 02:59:59: How much do you want?\n\nUser 2 on 2023-02-04 22:17:56: Goodmorning! $50 is too low.\n\nUser 1 on 2023-02-04 14:17:02: Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!\n\n', metadata={'source': 'example_data/facebook_chat.json'})]
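As the output above shows, the loader joins all messages into one transcript separated by blank lines. A minimal sketch of splitting that transcript into roughly per-message chunks; the separator matches the loader's output format, while the chunk size is an arbitrary choice:

from langchain.text_splitter import CharacterTextSplitter

docs = loader.load()
# Messages are separated by "\n\n" in the transcript, so splitting on it
# yields roughly one chunk per message (short messages may be grouped).
splitter = CharacterTextSplitter(separator="\n\n", chunk_size=200, chunk_overlap=0)
chunks = splitter.split_documents(docs)
print(len(chunks))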
490
https://python.langchain.com/docs/integrations/document_loaders/fauna
Fauna is a Document Database.

This notebook shows how to query Fauna documents.

#!pip install fauna

Query data example

from langchain.document_loaders.fauna import FaunaLoader

secret = "<enter-valid-fauna-secret>"
query = "Item.all()"  # Fauna query. Assumes that the collection is called "Item"
field = "text"  # The field that contains the page content. Assumes that the field is called "text"

loader = FaunaLoader(query, field, secret)
docs = loader.lazy_load()

for value in docs:
    print(value)

Query with Pagination

You get an after value if there is more data to fetch. You can retrieve the values after the cursor by passing the after string into the query. To learn more, follow this link.

query = """
Item.paginate("hs+DzoPOg ... aY1hOohozrV7A")
Item.all()
"""
loader = FaunaLoader(query, field, secret)
491
https://python.langchain.com/docs/integrations/document_loaders/figma
Figma is a collaborative web application for interface design.

This notebook covers how to load data from the Figma REST API into a format that can be ingested into LangChain, along with example usage for code generation.

import os
from langchain.document_loaders.figma import FigmaFileLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.chat_models import ChatOpenAI
from langchain.indexes import VectorstoreIndexCreator
from langchain.chains import ConversationChain, LLMChain
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

The Figma API requires an access token, node_ids, and a file key.

The file key can be pulled from the URL: https://www.figma.com/file/{filekey}/sampleFilename

Node IDs are also available in the URL. Click on anything and look for the '?node-id={node_id}' param.

Access token instructions are in the Figma help center article: https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-tokens

figma_loader = FigmaFileLoader(
    os.environ.get("ACCESS_TOKEN"),
    os.environ.get("NODE_IDS"),
    os.environ.get("FILE_KEY"),
)

# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more details
index = VectorstoreIndexCreator().from_loaders([figma_loader])
figma_doc_retriever = index.vectorstore.as_retriever()

def generate_code(human_input):
    # I have no idea if the Jon Carmack thing makes for better code. YMMV.
    # See https://python.langchain.com/en/latest/modules/models/chat/getting_started.html for chat info
    system_prompt_template = """You are expert coder Jon Carmack. Use the provided design context to create idiomatic HTML/CSS code based on the user request. Everything must be inline in one file and your response must be directly renderable by the browser. Figma file nodes and metadata: {context}"""
    human_prompt_template = "Code the {text}. Ensure it's mobile responsive"
    system_message_prompt = SystemMessagePromptTemplate.from_template(
        system_prompt_template
    )
    human_message_prompt = HumanMessagePromptTemplate.from_template(
        human_prompt_template
    )
    # delete the gpt-4 model_name to use the default gpt-3.5 turbo for faster results
    gpt_4 = ChatOpenAI(temperature=0.02, model_name="gpt-4")
    # Use the retriever's 'get_relevant_documents' method if needed to filter down longer docs
    relevant_nodes = figma_doc_retriever.get_relevant_documents(human_input)
    conversation = [system_message_prompt, human_message_prompt]
    chat_prompt = ChatPromptTemplate.from_messages(conversation)
    response = gpt_4(
        chat_prompt.format_prompt(
            context=relevant_nodes, text=human_input
        ).to_messages()
    )
    return response

response = generate_code("page top header")

Returns the following in response.content:

<!DOCTYPE html>\n<html lang="en">\n<head>\n <meta charset="UTF-8">\n <meta name="viewport" content="width=device-width, initial-scale=1.0">\n <style>\n @import url(\'https://fonts.googleapis.com/css2?family=DM+Sans:wght@500;700&family=Inter:wght@600&display=swap\');\n\n body {\n margin: 0;\n font-family: \'DM Sans\', sans-serif;\n }\n\n .header {\n display: flex;\n justify-content: space-between;\n align-items: center;\n padding: 20px;\n background-color: #fff;\n box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);\n }\n\n .header h1 {\n font-size: 16px;\n font-weight: 700;\n margin: 0;\n }\n\n .header nav {\n display: flex;\n align-items: center;\n }\n\n .header nav a {\n font-size: 14px;\n font-weight: 500;\n text-decoration: none;\n color: #000;\n margin-left: 20px;\n }\n\n @media (max-width: 768px) {\n .header nav {\n display: none;\n }\n }\n </style>\n</head>\n<body>\n <header class="header">\n <h1>Company Contact</h1>\n <nav>\n <a href="#">Lorem Ipsum</a>\n <a href="#">Lorem Ipsum</a>\n <a href="#">Lorem Ipsum</a>\n </nav>\n </header>\n</body>\n</html>
492
https://python.langchain.com/docs/integrations/document_loaders/geopandas
Geopandas is an open source project to make working with geospatial data in Python easier. GeoPandas extends the datatypes used by pandas to allow spatial operations on geometric types. Geometric operations are performed by shapely. Geopandas further depends on fiona for file access and matplotlib for plotting.

LLM applications (chat, QA) that utilize geospatial data are an interesting area for exploration.

pip install sodapy
pip install pandas
pip install geopandas

import ast
import pandas as pd
import geopandas as gpd
from langchain.document_loaders import OpenCityDataLoader

Create a GeoPandas dataframe from Open City Data as an example input.

# Load Open City Data
dataset = "tmnf-yvry"  # San Francisco crime data
loader = OpenCityDataLoader(city_id="data.sfgov.org", dataset_id=dataset, limit=5000)
docs = loader.load()

# Convert list of dictionaries to DataFrame
df = pd.DataFrame([ast.literal_eval(d.page_content) for d in docs])

# Extract latitude and longitude
df["Latitude"] = df["location"].apply(lambda loc: loc["coordinates"][1])
df["Longitude"] = df["location"].apply(lambda loc: loc["coordinates"][0])

# Create geopandas DF
gdf = gpd.GeoDataFrame(
    df, geometry=gpd.points_from_xy(df.Longitude, df.Latitude), crs="EPSG:4326"
)

# Only keep valid longitudes and latitudes for San Francisco
gdf = gdf[
    (gdf["Longitude"] >= -123.173825)
    & (gdf["Longitude"] <= -122.281780)
    & (gdf["Latitude"] >= 37.623983)
    & (gdf["Latitude"] <= 37.929824)
]

Visualization of a sample of the SF crime data.

import matplotlib.pyplot as plt

# Load San Francisco map data
sf = gpd.read_file("https://data.sfgov.org/resource/3psu-pn9h.geojson")

# Plot the San Francisco map and the points
fig, ax = plt.subplots(figsize=(10, 10))
sf.plot(ax=ax, color="white", edgecolor="black")
gdf.plot(ax=ax, color="red", markersize=5)
plt.show()

Load the GeoPandas dataframe as a Document for downstream processing (embedding, chat, etc.).
The geometry will be the default page_content column, and all other columns are placed in metadata. We can override this by specifying page_content_column.

from langchain.document_loaders import GeoDataFrameLoader

loader = GeoDataFrameLoader(data_frame=gdf, page_content_column="geometry")
docs = loader.load()
docs[0]

Document(page_content='POINT (-122.420084075249 37.7083109744362)', metadata={'pdid': '4133422003074', 'incidntnum': '041334220', 'incident_code': '03074', 'category': 'ROBBERY', 'descript': 'ROBBERY, BODILY FORCE', 'dayofweek': 'Monday', 'date': '2004-11-22T00:00:00.000', 'time': '17:50', 'pddistrict': 'INGLESIDE', 'resolution': 'NONE', 'address': 'GENEVA AV / SANTOS ST', 'x': '-122.420084075249', 'y': '37.7083109744362', 'location': {'type': 'Point', 'coordinates': [-122.420084075249, 37.7083109744362]}, ':@computed_region_26cr_cadq': '9', ':@computed_region_rxqg_mtj9': '8', ':@computed_region_bh8s_q3mv': '309', ':@computed_region_6qbp_sg9q': nan, ':@computed_region_qgnn_b9vv': nan, ':@computed_region_ajp5_b2md': nan, ':@computed_region_yftq_j783': nan, ':@computed_region_p5aj_wyqh': nan, ':@computed_region_fyvs_ahh9': nan, ':@computed_region_6pnf_4xz7': nan, ':@computed_region_jwn9_ihcz': nan, ':@computed_region_9dfj_4gjx': nan, ':@computed_region_4isq_27mq': nan, ':@computed_region_pigm_ib2e': nan, ':@computed_region_9jxd_iqea': nan, ':@computed_region_6ezc_tdp2': nan, ':@computed_region_h4ep_8xdi': nan, ':@computed_region_n4xg_c4py': nan, ':@computed_region_fcz8_est8': nan, ':@computed_region_nqbw_i6c3': nan, ':@computed_region_2dwj_jsy4': nan, 'Latitude': 37.7083109744362, 'Longitude': -122.420084075249})
493
https://python.langchain.com/docs/integrations/document_loaders/git
Git is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers collaboratively developing source code during software development.

This notebook shows how to load text files from a Git repository.

Load existing repository from disk

pip install GitPython

from git import Repo

repo = Repo.clone_from(
    "https://github.com/langchain-ai/langchain", to_path="./example_data/test_repo1"
)
branch = repo.head.reference

from langchain.document_loaders import GitLoader

loader = GitLoader(repo_path="./example_data/test_repo1/", branch=branch)
data = loader.load()
len(data)
print(data[0])

page_content='.venv\n.github\n.git\n.mypy_cache\n.pytest_cache\nDockerfile' metadata={'file_path': '.dockerignore', 'file_name': '.dockerignore', 'file_type': ''}

Clone repository from url

from langchain.document_loaders import GitLoader

loader = GitLoader(
    clone_url="https://github.com/langchain-ai/langchain",
    repo_path="./example_data/test_repo2/",
    branch="master",
)
data = loader.load()
len(data)

1074

Filtering files to load

from langchain.document_loaders import GitLoader

# e.g. loading only python files
loader = GitLoader(
    repo_path="./example_data/test_repo1/",
    file_filter=lambda file_path: file_path.endswith(".py"),
)
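The options above can also be combined. A minimal sketch that clones a repository and loads only its Markdown files in one step; the test_repo3 path is just an illustrative choice:

from langchain.document_loaders import GitLoader

# Clone and filter in one loader: only .md files end up as documents.
loader = GitLoader(
    clone_url="https://github.com/langchain-ai/langchain",
    repo_path="./example_data/test_repo3/",
    branch="master",
    file_filter=lambda file_path: file_path.endswith(".md"),
)
data = loader.load()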
494
https://python.langchain.com/docs/integrations/document_loaders/gitbook
GitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.

This notebook shows how to pull page data from any GitBook.

from langchain.document_loaders import GitbookLoader

Load from single GitBook page

loader = GitbookLoader("https://docs.gitbook.com")
page_data = loader.load()
page_data

[Document(page_content='Introduction to GitBook\nGitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.\nWe want to help \nteams to work more efficiently\n by creating a simple yet powerful platform for them to \nshare their knowledge\n.\nOur mission is to make a \nuser-friendly\n and \ncollaborative\n product for everyone to create, edit and share knowledge through documentation.\nPublish your documentation in 5 easy steps\nImport\n\nMove your existing content to GitBook with ease.\nGit Sync\n\nBenefit from our bi-directional synchronisation with GitHub and GitLab.\nOrganise your content\n\nCreate pages and spaces and organize them into collections\nCollaborate\n\nInvite other users and collaborate asynchronously with ease.\nPublish your docs\n\nShare your documentation with selected users or with everyone.\nNext\n - Getting started\nOverview\nLast modified \n3mo ago', lookup_str='', metadata={'source': 'https://docs.gitbook.com', 'title': 'Introduction to GitBook'}, lookup_index=0)]

Load from all paths in a given GitBook

For this to work, the GitbookLoader needs to be initialized with the root path (https://docs.gitbook.com in this example) and have load_all_paths set to True.

loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)
all_pages_data = loader.load()

Fetching text from https://docs.gitbook.com/
Fetching text from https://docs.gitbook.com/getting-started/overview
Fetching text from https://docs.gitbook.com/getting-started/import
Fetching text from https://docs.gitbook.com/getting-started/git-sync
Fetching text from https://docs.gitbook.com/getting-started/content-structure
Fetching text from https://docs.gitbook.com/getting-started/collaboration
Fetching text from https://docs.gitbook.com/getting-started/publishing
Fetching text from https://docs.gitbook.com/tour/quick-find
Fetching text from https://docs.gitbook.com/tour/editor
Fetching text from https://docs.gitbook.com/tour/customization
Fetching text from https://docs.gitbook.com/tour/member-management
Fetching text from https://docs.gitbook.com/tour/pdf-export
Fetching text from https://docs.gitbook.com/tour/activity-history
Fetching text from https://docs.gitbook.com/tour/insights
Fetching text from https://docs.gitbook.com/tour/notifications
Fetching text from https://docs.gitbook.com/tour/internationalization
Fetching text from https://docs.gitbook.com/tour/keyboard-shortcuts
Fetching text from https://docs.gitbook.com/tour/seo
Fetching text from https://docs.gitbook.com/advanced-guides/custom-domain
Fetching text from https://docs.gitbook.com/advanced-guides/advanced-sharing-and-security
Fetching text from https://docs.gitbook.com/advanced-guides/integrations
Fetching text from https://docs.gitbook.com/billing-and-admin/account-settings
Fetching text from https://docs.gitbook.com/billing-and-admin/plans
Fetching text from https://docs.gitbook.com/troubleshooting/faqs
Fetching text from https://docs.gitbook.com/troubleshooting/hard-refresh
Fetching text from https://docs.gitbook.com/troubleshooting/report-bugs
Fetching text from https://docs.gitbook.com/troubleshooting/connectivity-issues
Fetching text from https://docs.gitbook.com/troubleshooting/support

print(f"fetched {len(all_pages_data)} documents.")
# show second document
all_pages_data[2]

fetched 28 documents.

Document(page_content="Import\nFind out how to easily migrate your existing documentation and which formats are supported.\nThe import function allows you to migrate and unify existing documentation in GitBook. You can choose to import single or multiple pages although limits apply. \nPermissions\nAll members with editor permission or above can use the import feature.\nSupported formats\nGitBook supports imports from websites or files that are:\nMarkdown (.md or .markdown)\nHTML (.html)\nMicrosoft Word (.docx).\nWe also support import from:\nConfluence\nNotion\nGitHub Wiki\nQuip\nDropbox Paper\nGoogle Docs\nYou can also upload a ZIP\n \ncontaining HTML or Markdown files when \nimporting multiple pages.\nNote: this feature is in beta.\nFeel free to suggest import sources we don't support yet and \nlet us know\n if you have any issues.\nImport panel\nWhen you create a new space, you'll have the option to import content straight away:\nThe new page menu\nImport a page or subpage by selecting \nImport Page\n from the New Page menu, or \nImport Subpage\n in the page action menu, found in the table of contents:\nImport from the page action menu\nWhen you choose your input source, instructions will explain how to proceed.\nAlthough GitBook supports importing content from different kinds of sources, the end result might be different from your source due to differences in product features and document format.\nLimits\nGitBook currently has the following limits for imported content:\nThe maximum number of pages that can be uploaded in a single import is \n20.\nThe maximum number of files (images etc.) that can be uploaded in a single import is \n20.\nGetting started - \nPrevious\nOverview\nNext\n - Getting started\nGit Sync\nLast modified \n4mo ago", lookup_str='', metadata={'source': 'https://docs.gitbook.com/getting-started/import', 'title': 'Import'}, lookup_index=0)