https://python.langchain.com/docs/integrations/document_loaders/github
# GitHub

This notebook shows how you can load issues and pull requests (PRs) for a given repository on GitHub. We will use the LangChain Python repository as an example.

## Setup access token

To access the GitHub API, you need a personal access token - you can set up yours here: https://github.com/settings/tokens?type=beta. You can either set this token as the environment variable GITHUB_PERSONAL_ACCESS_TOKEN and it will be automatically pulled in, or you can pass it in directly at initialization as the access_token named parameter.

```python
# If you haven't set your access token as an environment variable, pass it in here.
from getpass import getpass

ACCESS_TOKEN = getpass()
```

## Load Issues and PRs

```python
from langchain.document_loaders import GitHubIssuesLoader

loader = GitHubIssuesLoader(
    repo="langchain-ai/langchain",
    access_token=ACCESS_TOKEN,  # delete/comment out this argument if you've set the access token as an env var.
    creator="UmerHA",
)
```

Let's load all issues and PRs created by "UmerHA".

Here's a list of all filters you can use:

- include_prs
- milestone
- state
- assignee
- creator
- mentioned
- labels
- sort
- direction
- since

For more info, see https://docs.github.com/en/rest/issues/issues?apiVersion=2022-11-28#list-repository-issues. (A sketch that combines several of these filters appears at the end of this page.)

```python
docs = loader.load()
print(docs[0].page_content)
print(docs[0].metadata)
```

```text
# Creates GitHubLoader (#5257)

GitHubLoader is a DocumentLoader that loads issues and PRs from GitHub.

Fixes #5257

Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested:
DataLoaders
- @eyurtsev

{'url': 'https://github.com/langchain-ai/langchain/pull/5408', 'title': 'DocumentLoader for GitHub', 'creator': 'UmerHA', 'created_at': '2023-05-29T14:50:53Z', 'comments': 0, 'state': 'open', 'labels': ['enhancement', 'lgtm', 'doc loader'], 'assignee': None, 'milestone': None, 'locked': False, 'number': 5408, 'is_pull_request': True}
```

## Only load issues

By default, the GitHub API considers pull requests to also be issues. To get only 'pure' issues (i.e., no pull requests), use include_prs=False.

```python
loader = GitHubIssuesLoader(
    repo="langchain-ai/langchain",
    access_token=ACCESS_TOKEN,  # delete/comment out this argument if you've set the access token as an env var.
    creator="UmerHA",
    include_prs=False,
)
docs = loader.load()
print(docs[0].page_content)
print(docs[0].metadata)
```

### System Info

LangChain version = 0.0.167
Python version = 3.11.0
System = Windows 11 (using Jupyter)

### Who can help?

- @hwchase17
- @agola11
- @UmerHA (I have a fix ready, will submit a PR)

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
import os
os.environ["OPENAI_API_KEY"] = "..."
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.prompts.chat import ChatPromptTemplate
from langchain.schema import messages_from_dict

role_strings = [
    ("system", "you are a bird expert"),
    ("human", "which bird has a point beak?")
]
prompt = ChatPromptTemplate.from_role_strings(role_strings)
chain = LLMChain(llm=ChatOpenAI(), prompt=prompt)
chain.run({})
```

### Expected behavior

Chain should run

{'url': 'https://github.com/langchain-ai/langchain/issues/5027', 'title': "ChatOpenAI models don't work with prompts created via ChatPromptTemplate.from_role_strings", 'creator': 'UmerHA', 'created_at': '2023-05-20T10:39:18Z', 'comments': 1, 'state': 'open', 'labels': [], 'assignee': None, 'milestone': None, 'locked': False, 'number': 5027, 'is_pull_request': False}
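As a further illustration of the filter list above, here is a minimal sketch combining several filters in one loader. The parameter names come from the list documented earlier on this page; the specific values (state, labels, sort order, and date) are illustrative assumptions, not from the original page.

```python
# Hypothetical combination of the documented filters; values are examples only.
from langchain.document_loaders import GitHubIssuesLoader

loader = GitHubIssuesLoader(
    repo="langchain-ai/langchain",
    access_token=ACCESS_TOKEN,    # or rely on the GITHUB_PERSONAL_ACCESS_TOKEN env var
    include_prs=False,            # 'pure' issues only
    state="closed",               # open / closed / all
    labels=["enhancement"],       # only issues carrying this label
    sort="created",               # sort field
    direction="desc",             # newest first
    since="2023-05-01T00:00:00Z", # ISO 8601 cut-off
)
closed_enhancements = loader.load()
print(len(closed_enhancements))
```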
https://python.langchain.com/docs/integrations/document_loaders/google_bigquery
# Google BigQuery

Google BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data. BigQuery is a part of the Google Cloud Platform.

Load a BigQuery query with one document per row.

```python
#!pip install google-cloud-bigquery
from langchain.document_loaders import BigQueryLoader

BASE_QUERY = """
SELECT
  id,
  dna_sequence,
  organism
FROM (
    SELECT
      ARRAY (
        SELECT
          AS STRUCT 1 AS id, "ATTCGA" AS dna_sequence, "Lokiarchaeum sp. (strain GC14_75)." AS organism
        UNION ALL
        SELECT
          AS STRUCT 2 AS id, "AGGCGA" AS dna_sequence, "Heimdallarchaeota archaeon (strain LC_2)." AS organism
        UNION ALL
        SELECT
          AS STRUCT 3 AS id, "TCCGGA" AS dna_sequence, "Acidianus hospitalis (strain W1)." AS organism
      ) AS new_array
  ),
  UNNEST(new_array)
"""
```

## Basic Usage

```python
loader = BigQueryLoader(BASE_QUERY)
data = loader.load()
print(data)
```

```text
[Document(page_content='id: 1\ndna_sequence: ATTCGA\norganism: Lokiarchaeum sp. (strain GC14_75).', lookup_str='', metadata={}, lookup_index=0), Document(page_content='id: 2\ndna_sequence: AGGCGA\norganism: Heimdallarchaeota archaeon (strain LC_2).', lookup_str='', metadata={}, lookup_index=0), Document(page_content='id: 3\ndna_sequence: TCCGGA\norganism: Acidianus hospitalis (strain W1).', lookup_str='', metadata={}, lookup_index=0)]
```

## Specifying Which Columns are Content vs Metadata

```python
loader = BigQueryLoader(
    BASE_QUERY,
    page_content_columns=["dna_sequence", "organism"],
    metadata_columns=["id"],
)
data = loader.load()
print(data)
```

```text
[Document(page_content='dna_sequence: ATTCGA\norganism: Lokiarchaeum sp. (strain GC14_75).', lookup_str='', metadata={'id': 1}, lookup_index=0), Document(page_content='dna_sequence: AGGCGA\norganism: Heimdallarchaeota archaeon (strain LC_2).', lookup_str='', metadata={'id': 2}, lookup_index=0), Document(page_content='dna_sequence: TCCGGA\norganism: Acidianus hospitalis (strain W1).', lookup_str='', metadata={'id': 3}, lookup_index=0)]
```

## Adding Source to Metadata

```python
# Note that the `id` column is being returned twice, with one instance aliased as `source`
ALIASED_QUERY = """
SELECT
  id,
  dna_sequence,
  organism,
  id as source
FROM (
    SELECT
      ARRAY (
        SELECT
          AS STRUCT 1 AS id, "ATTCGA" AS dna_sequence, "Lokiarchaeum sp. (strain GC14_75)." AS organism
        UNION ALL
        SELECT
          AS STRUCT 2 AS id, "AGGCGA" AS dna_sequence, "Heimdallarchaeota archaeon (strain LC_2)." AS organism
        UNION ALL
        SELECT
          AS STRUCT 3 AS id, "TCCGGA" AS dna_sequence, "Acidianus hospitalis (strain W1)." AS organism
      ) AS new_array
  ),
  UNNEST(new_array)
"""

loader = BigQueryLoader(ALIASED_QUERY, metadata_columns=["source"])
data = loader.load()
print(data)
```

```text
[Document(page_content='id: 1\ndna_sequence: ATTCGA\norganism: Lokiarchaeum sp. (strain GC14_75).\nsource: 1', lookup_str='', metadata={'source': 1}, lookup_index=0), Document(page_content='id: 2\ndna_sequence: AGGCGA\norganism: Heimdallarchaeota archaeon (strain LC_2).\nsource: 2', lookup_str='', metadata={'source': 2}, lookup_index=0), Document(page_content='id: 3\ndna_sequence: TCCGGA\norganism: Acidianus hospitalis (strain W1).\nsource: 3', lookup_str='', metadata={'source': 3}, lookup_index=0)]
```
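A brief sketch (not from the original page) that combines the options shown above: content columns, metadata columns, and the aliased `source` column from ALIASED_QUERY.

```python
# Combine content/metadata column selection with the aliased `source` column.
loader = BigQueryLoader(
    ALIASED_QUERY,
    page_content_columns=["dna_sequence", "organism"],
    metadata_columns=["id", "source"],
)
data = loader.load()
print(data[0].metadata)  # expected to include both 'id' and 'source'
```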
https://python.langchain.com/docs/integrations/document_loaders/google_cloud_storage_directory
# Google Cloud Storage Directory

Google Cloud Storage is a managed service for storing unstructured data.

This covers how to load document objects from a Google Cloud Storage (GCS) directory (bucket).

```python
# !pip install google-cloud-storage
from langchain.document_loaders import GCSDirectoryLoader

loader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc")
loader.load()
```

```text
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
  warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)

[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpz37njh7u/fake.docx'}, lookup_index=0)]
```

## Specifying a prefix

You can also specify a prefix for more fine-grained control over which files to load.

```python
loader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc", prefix="fake")
loader.load()
```

```text
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
  warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)

[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpylg6291i/fake.docx'}, lookup_index=0)]
```
https://python.langchain.com/docs/integrations/document_loaders/google_cloud_storage_file
# Google Cloud Storage File

Google Cloud Storage is a managed service for storing unstructured data.

This covers how to load document objects from a Google Cloud Storage (GCS) file object (blob).

```python
# !pip install google-cloud-storage
from langchain.document_loaders import GCSFileLoader

loader = GCSFileLoader(project_name="aist", bucket="testing-hwc", blob="fake.docx")
loader.load()
```

```text
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
  warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)

[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmp3srlf8n8/fake.docx'}, lookup_index=0)]
```

If you want to use an alternative loader, you can provide a custom function, for example:

```python
from langchain.document_loaders import PyPDFLoader

def load_pdf(file_path):
    return PyPDFLoader(file_path)

loader = GCSFileLoader(project_name="aist", bucket="testing-hwc", blob="fake.pdf", loader_func=load_pdf)
```
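A short usage sketch (not part of the original page): with `loader_func=load_pdf` in place, calling `load()` should download the blob to a temporary file and parse it with PyPDFLoader. The blob name "fake.pdf" above is the page's placeholder; substitute a real object in your bucket.

```python
# Assuming the "fake.pdf" blob above exists in the bucket.
docs = loader.load()
print(docs[0].metadata)           # typically includes the temporary source path
print(docs[0].page_content[:100]) # first characters of the parsed PDF text
```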
https://python.langchain.com/docs/integrations/document_loaders/google_drive
# Google Drive

Google Drive is a file storage and synchronization service developed by Google.

This notebook covers how to load documents from Google Drive. Currently, only Google Docs are supported.

## Prerequisites

1. Create a Google Cloud project or use an existing project
2. Enable the Google Drive API
3. Authorize credentials for desktop app
4. `pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib`

## 🧑 Instructions for ingesting your Google Docs data

By default, the GoogleDriveLoader expects the credentials.json file to be at ~/.credentials/credentials.json, but this is configurable with the credentials_path keyword argument. The same goes for token.json, via token_path. Note that token.json will be created automatically the first time you use the loader.

The first time you use GoogleDriveLoader, you will be shown the consent screen in your browser. If this doesn't happen and you get a RefreshError, do not use credentials_path in your GoogleDriveLoader constructor call. Instead, put that path in a GOOGLE_APPLICATION_CREDENTIALS environment variable.

GoogleDriveLoader can load from a list of Google Docs document ids or a folder id. You can obtain your folder and document id from the URL:

- Folder: https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5 -> folder id is "1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5"
- Document: https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit -> document id is "1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw"

```python
# pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib
from langchain.document_loaders import GoogleDriveLoader

loader = GoogleDriveLoader(
    folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5",
    token_path='/path/where/you/want/token/to/be/created/google_token.json',
    # Optional: configure whether to recursively fetch files from subfolders. Defaults to False.
    recursive=False,
)
docs = loader.load()
```

When you pass a folder_id, by default all files of type document, sheet and pdf are loaded. You can modify this behaviour by passing a file_types argument:

```python
loader = GoogleDriveLoader(
    folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5",
    file_types=["document", "sheet"],
    recursive=False,
)
```

## Passing in Optional File Loaders

When processing files other than Google Docs and Google Sheets, it can be helpful to pass an optional file loader to GoogleDriveLoader. If you pass in a file loader, that file loader will be used on documents that do not have a Google Docs or Google Sheets MIME type. Here is an example of how to load an Excel document from Google Drive using a file loader.

```python
from langchain.document_loaders import GoogleDriveLoader
from langchain.document_loaders import UnstructuredFileIOLoader

file_id = "1x9WBtFPWMEAdjcJzPScRsjpjQvpSo_kz"
loader = GoogleDriveLoader(
    file_ids=[file_id],
    file_loader_cls=UnstructuredFileIOLoader,
    file_loader_kwargs={"mode": "elements"},
)
docs = loader.load()
docs[0]
```

You can also process a folder with a mix of files and Google Docs/Sheets using the following pattern:

```python
folder_id = "1asMOHY1BqBS84JcRbOag5LOJac74gpmD"
loader = GoogleDriveLoader(
    folder_id=folder_id,
    file_loader_cls=UnstructuredFileIOLoader,
    file_loader_kwargs={"mode": "elements"},
)
docs = loader.load()
docs[0]
```

## Extended usage

An external component can manage the complexity of Google Drive: langchain-googledrive. It's compatible with `langchain.document_loaders.GoogleDriveLoader` and can be used in its place.

To be compatible with containers, the authentication uses an environment variable `GOOGLE_ACCOUNT_FILE` pointing to the credentials file (for a user or a service account).

```python
# pip install langchain-googledrive

folder_id = 'root'
# folder_id = '1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5'

# Use the advanced version.
from langchain_googledrive.document_loaders import GoogleDriveLoader

loader = GoogleDriveLoader(
    folder_id=folder_id,
    recursive=False,
    num_results=2,  # Maximum number of files to load
)
```

By default, all files with these MIME types can be converted to Document:

- text/text
- text/plain
- text/html
- text/csv
- text/markdown
- image/png
- image/jpeg
- application/epub+zip
- application/pdf
- application/rtf
- application/vnd.google-apps.document (GDoc)
- application/vnd.google-apps.presentation (GSlide)
- application/vnd.google-apps.spreadsheet (GSheet)
- application/vnd.google.colaboratory (Notebook colab)
- application/vnd.openxmlformats-officedocument.presentationml.presentation (PPTX)
- application/vnd.openxmlformats-officedocument.wordprocessingml.document (DOCX)

It's possible to update or customize this; see the documentation of GDriveLoader. Note that the corresponding packages must be installed.

```python
# pip install unstructured
for doc in loader.load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```

## Customize the search pattern

All parameters compatible with the Google list() API can be set.

To specify a new pattern for the Google request, you can use a PromptTemplate(). The variables for the prompt can be set with kwargs in the constructor. Some pre-formatted requests are provided (use {query}, {folder_id} and/or {mime_type}), and you can customize the criteria used to select the files. A set of predefined templates is provided:

| template | description |
| -------- | ----------- |
| gdrive-all-in-folder | Return all compatible files from a folder_id |
| gdrive-query | Search query in all drives |
| gdrive-by-name | Search file with name query |
| gdrive-query-in-folder | Search query in folder_id (and sub-folders if recursive=true) |
| gdrive-mime-type | Search a specific mime_type |
| gdrive-mime-type-in-folder | Search a specific mime_type in folder_id |
| gdrive-query-with-mime-type | Search query with a specific mime_type |
| gdrive-query-with-mime-type-and-folder | Search query with a specific mime_type and in folder_id |

```python
loader = GoogleDriveLoader(
    folder_id=folder_id,
    recursive=False,
    template="gdrive-query",  # Default template to use
    query="machine learning",
    num_results=2,            # Maximum number of files to load
    supportsAllDrives=False,  # GDrive `list()` parameter
)
for doc in loader.load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```

You can also customize your own pattern:

```python
from langchain.prompts.prompt import PromptTemplate

loader = GoogleDriveLoader(
    folder_id=folder_id,
    recursive=False,
    template=PromptTemplate(
        input_variables=["query", "query_name"],
        template="fullText contains '{query}' and name contains '{query_name}' and trashed=false",
    ),  # Custom template to use
    query="machine learning",
    query_name="ML",
    num_results=2,  # Maximum number of files to load
)
for doc in loader.load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```

### Modes for GSlide and GSheet

The parameter mode accepts different values:

- "document": return the body of each document
- "snippets": return the description of each file (set in the metadata of Google Drive files)

The conversion can produce Markdown formatting for bullets, links, tables, and titles.

The parameter gslide_mode accepts different values:

- "single": one document with <PAGE BREAK>
- "slide": one document per slide
- "elements": one document for each element

```python
loader = GoogleDriveLoader(
    template="gdrive-mime-type",
    mime_type="application/vnd.google-apps.presentation",  # Only GSlide files
    gslide_mode="slide",
    num_results=2,  # Maximum number of files to load
)
for doc in loader.load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```

The parameter gsheet_mode accepts different values:

- "single": generate one document per row
- "elements": one document with a markdown array and <PAGE BREAK> tags

```python
loader = GoogleDriveLoader(
    template="gdrive-mime-type",
    mime_type="application/vnd.google-apps.spreadsheet",  # Only GSheet files
    gsheet_mode="elements",
    num_results=2,  # Maximum number of files to load
)
for doc in loader.load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```

## Advanced usage

All Google Drive files have a 'description' field in their metadata. This field can be used to store a summary of the document or other indexed tags (see the method lazy_update_description_with_summary()).

If you use mode="snippets", only the description will be used for the body; otherwise, metadata['summary'] holds the field.

Sometimes a specific filter can be used to extract information from the filename or to select files matching specific criteria; you can use a filter for this.

Sometimes, many documents are returned, and it's not necessary to hold all of them in memory at the same time. You can use the lazy versions of the methods to get one document at a time (see the sketch after this example). It's better to use a complex query in place of a recursive search: for each folder, a separate query must be applied if you activate recursive=True.

```python
import os

loader = GoogleDriveLoader(
    gdrive_api_file=os.environ["GOOGLE_ACCOUNT_FILE"],
    num_results=2,
    template="gdrive-query",
    filter=lambda search, file: "#test" not in file.get('description', ''),
    query='machine learning',
    supportsAllDrives=False,
)
for doc in loader.load():
    print("---")
    print(doc.page_content.strip()[:60] + "...")
```
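The "Advanced usage" section above mentions lazy variants of the loading methods. Here is a minimal sketch, assuming the loader exposes the standard `lazy_load()` generator of LangChain document loaders; the metadata key used for printing is an assumption.

```python
# Iterate documents one at a time instead of materializing the whole list.
for doc in loader.lazy_load():
    print(doc.metadata.get("source"), len(doc.page_content))
```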
https://python.langchain.com/docs/integrations/document_loaders/grobid
# Grobid

GROBID is a machine learning library for extracting, parsing, and re-structuring raw documents.

It is designed and expected to be used to parse academic papers, where it works particularly well. Note: if the articles supplied to Grobid are large documents (e.g. dissertations) exceeding a certain number of elements, they might not be processed.

This loader uses Grobid to parse PDFs into Documents that retain metadata associated with each section of text.

The best approach is to install Grobid via docker, see https://grobid.readthedocs.io/en/latest/Grobid-docker/. (Note: additional instructions can be found here.)

Once Grobid is up and running, you can interact with it as described below. Now, we can use the data loader.

```python
from langchain.document_loaders.parsers import GrobidParser
from langchain.document_loaders.generic import GenericLoader

loader = GenericLoader.from_filesystem(
    "../Papers/",
    glob="*",
    suffixes=[".pdf"],
    parser=GrobidParser(segment_sentences=False),
)
docs = loader.load()
docs[3].page_content
```

```text
'Unlike Chinchilla, PaLM, or GPT-3, we only use publicly available data, making our work compatible with open-sourcing, while most existing models rely on data which is either not publicly available or undocumented (e.g."Books -2TB" or "Social media conversations").There exist some exceptions, notably OPT (Zhang et al., 2022), GPT-NeoX (Black et al., 2022), BLOOM (Scao et al., 2022) and GLM (Zeng et al., 2022), but none that are competitive with PaLM-62B or Chinchilla.'
```

```python
docs[3].metadata
```

```text
{'text': 'Unlike Chinchilla, PaLM, or GPT-3, we only use publicly available data, making our work compatible with open-sourcing, while most existing models rely on data which is either not publicly available or undocumented (e.g."Books -2TB" or "Social media conversations").There exist some exceptions, notably OPT (Zhang et al., 2022), GPT-NeoX (Black et al., 2022), BLOOM (Scao et al., 2022) and GLM (Zeng et al., 2022), but none that are competitive with PaLM-62B or Chinchilla.', 'para': '2', 'bboxes': "[[{'page': '1', 'x': '317.05', 'y': '509.17', 'h': '207.73', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '522.72', 'h': '220.08', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '536.27', 'h': '218.27', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '549.82', 'h': '218.65', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '563.37', 'h': '136.98', 'w': '9.46'}], [{'page': '1', 'x': '446.49', 'y': '563.37', 'h': '78.11', 'w': '9.46'}, {'page': '1', 'x': '304.69', 'y': '576.92', 'h': '138.32', 'w': '9.46'}], [{'page': '1', 'x': '447.75', 'y': '576.92', 'h': '76.66', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '590.47', 'h': '219.63', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '604.02', 'h': '218.27', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '617.56', 'h': '218.27', 'w': '9.46'}, {'page': '1', 'x': '306.14', 'y': '631.11', 'h': '220.18', 'w': '9.46'}]]", 'pages': "('1', '1')", 'section_title': 'Introduction', 'section_number': '1', 'paper_title': 'LLaMA: Open and Efficient Foundation Language Models', 'file_path': '/Users/31treehaus/Desktop/Papers/2302.13971.pdf'}
```
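Because each parsed paragraph keeps its section metadata (as shown above), a small post-processing sketch, not part of the original page, can group paragraphs by section title using plain Python:

```python
# Group the Grobid-parsed paragraphs by their section title.
from collections import defaultdict

sections = defaultdict(list)
for doc in docs:
    sections[doc.metadata["section_title"]].append(doc.page_content)

print(list(sections))                  # section titles found in the paper
print(len(sections["Introduction"]))   # paragraphs in the introduction (0 if absent)
```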
https://python.langchain.com/docs/integrations/document_loaders/gutenberg
# Gutenberg

Project Gutenberg is an online library of free eBooks.

This notebook covers how to load links to Gutenberg e-books into a document format that we can use downstream.

```python
from langchain.document_loaders import GutenbergLoader

loader = GutenbergLoader("https://www.gutenberg.org/cache/epub/69972/pg69972.txt")
data = loader.load()
data[0].page_content[:300]
```

```text
'The Project Gutenberg eBook of The changed brides, by Emma Dorothy\r\n\n\nEliza Nevitte Southworth\r\n\n\n\r\n\n\nThis eBook is for the use of anyone anywhere in the United States and\r\n\n\nmost other parts of the world at no cost and with almost no restrictions\r\n\n\nwhatsoever. You may copy it, give it away or re-u'
```

```python
data[0].metadata
```

```text
{'source': 'https://www.gutenberg.org/cache/epub/69972/pg69972.txt'}
```
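The raw Gutenberg text above contains '\r\n\n\n' sequences from the plain-text formatting. A minimal clean-up sketch (a suggested post-processing step, not part of the original page):

```python
# Collapse the carriage-return / blank-line artifacts into single newlines.
cleaned = data[0].page_content.replace("\r\n\n\n", "\n")
print(cleaned[:300])
```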
https://python.langchain.com/docs/integrations/document_loaders/hacker_news
# Hacker News

Hacker News (sometimes abbreviated as HN) is a social news website focusing on computer science and entrepreneurship. It is run by the investment fund and startup incubator Y Combinator. In general, content that can be submitted is defined as "anything that gratifies one's intellectual curiosity."

This notebook covers how to pull page data and comments from Hacker News.

```python
from langchain.document_loaders import HNLoader

loader = HNLoader("https://news.ycombinator.com/item?id=34817881")
data = loader.load()
data[0].page_content[:300]
```

```text
"delta_p_delta_x 73 days ago \n | next [–] \n\nAstrophysical and cosmological simulations are often insightful. They're also very cross-disciplinary; besides the obvious astrophysics, there's networking and sysadmin, parallel computing and algorithm theory (so that the simulation programs a"
```

```python
data[0].metadata
```

```text
{'source': 'https://news.ycombinator.com/item?id=34817881', 'title': 'What Lights the Universe’s Standard Candles?'}
```
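If you want several stories at once, the loader can simply be called in a loop. This sketch is not from the original page, and the second item id below is a hypothetical placeholder.

```python
# Load multiple Hacker News items; ids other than 34817881 are illustrative only.
item_ids = ["34817881", "34817900"]
docs = []
for item_id in item_ids:
    docs.extend(HNLoader(f"https://news.ycombinator.com/item?id={item_id}").load())
print(len(docs))
```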
https://python.langchain.com/docs/integrations/document_loaders/huawei_obs_directory
# Huawei OBS Directory

The following code demonstrates how to load objects from Huawei OBS (Object Storage Service) as documents.

```python
# Install the required package
# pip install esdk-obs-python
from langchain.document_loaders import OBSDirectoryLoader

endpoint = "your-endpoint"

# Configure your access credentials
config = {
    "ak": "your-access-key",
    "sk": "your-secret-key"
}
loader = OBSDirectoryLoader("your-bucket-name", endpoint=endpoint, config=config)
loader.load()
```

## Specify a Prefix for Loading

If you want to load objects with a specific prefix from the bucket, you can use the following code:

```python
loader = OBSDirectoryLoader("your-bucket-name", endpoint=endpoint, config=config, prefix="test_prefix")
loader.load()
```

## Get Authentication Information from ECS

If your LangChain application is deployed on Huawei Cloud ECS and an agency has been set up, the loader can get the security token directly from ECS without needing an access key and secret key.

```python
config = {"get_token_from_ecs": True}
loader = OBSDirectoryLoader("your-bucket-name", endpoint=endpoint, config=config)
loader.load()
```

## Use a Public Bucket

If your bucket policy allows anonymous access (anonymous users have listBucket and GetObject permissions), you can load the objects directly without configuring the config parameter.

```python
loader = OBSDirectoryLoader("your-bucket-name", endpoint=endpoint)
loader.load()
```
https://python.langchain.com/docs/integrations/document_loaders/huawei_obs_file
# Huawei OBS File

The following code demonstrates how to load an object from Huawei OBS (Object Storage Service) as a document.

```python
# Install the required package
# pip install esdk-obs-python
from langchain.document_loaders.obs_file import OBSFileLoader

endpoint = "your-endpoint"

from obs import ObsClient

obs_client = ObsClient(access_key_id="your-access-key", secret_access_key="your-secret-key", server=endpoint)
loader = OBSFileLoader("your-bucket-name", "your-object-key", client=obs_client)
loader.load()
```

## Each Loader with Separate Authentication Information

If you don't need to reuse OBS connections between different loaders, you can configure the config directly. The loader will use the config information to initialize its own OBS client.

```python
# Configure your access credentials
config = {
    "ak": "your-access-key",
    "sk": "your-secret-key"
}
loader = OBSFileLoader("your-bucket-name", "your-object-key", endpoint=endpoint, config=config)
loader.load()
```

## Get Authentication Information from ECS

If your LangChain application is deployed on Huawei Cloud ECS and an agency has been set up, the loader can get the security token directly from ECS without needing an access key and secret key.

```python
config = {"get_token_from_ecs": True}
loader = OBSFileLoader("your-bucket-name", "your-object-key", endpoint=endpoint, config=config)
loader.load()
```

## Access a Publicly Accessible Object

If the object you want to access allows anonymous access (anonymous users have GetObject permission), you can load the object directly without configuring the config parameter.

```python
loader = OBSFileLoader("your-bucket-name", "your-object-key", endpoint=endpoint)
loader.load()
```
https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset
# HuggingFace dataset

The Hugging Face Hub is home to over 5,000 datasets in more than 100 languages that can be used for a broad range of tasks across NLP, Computer Vision, and Audio. They are used for a diverse range of tasks such as translation, automatic speech recognition, and image classification.

This notebook shows how to load Hugging Face Hub datasets into LangChain.

```python
from langchain.document_loaders import HuggingFaceDatasetLoader

dataset_name = "imdb"
page_content_column = "text"

loader = HuggingFaceDatasetLoader(dataset_name, page_content_column)
data = loader.load()
data[:15]
```

[Document(page_content='I rented I AM CURIOUS-YELLOW from my video store because of all the controversy that surrounded it when it was first released in 1967. I also heard that at first it was seized by U.S. customs if it ever tried to enter this country, therefore being a fan of films considered "controversial" I really had to see this for myself.<br /><br />The plot is centered around a young Swedish drama student named Lena who wants to learn everything she can about life. In particular she wants to focus her attentions to making some sort of documentary on what the average Swede thought about certain political issues such as the Vietnam War and race issues in the United States. In between asking politicians and ordinary denizens of Stockholm about their opinions on politics, she has sex with her drama teacher, classmates, and married men.<br /><br />What kills me about I AM CURIOUS-YELLOW is that 40 years ago, this was considered pornographic. Really, the sex and nudity scenes are few and far between, even then it\'s not shot like some cheaply made porno. While my countrymen mind find it shocking, in reality sex and nudity are a major staple in Swedish cinema. Even Ingmar Bergman, arguably their answer to good old boy John Ford, had sex scenes in his films.<br /><br />I do commend the filmmakers for the fact that any sex shown in the film is shown for artistic purposes rather than just to shock people and make money to be shown in pornographic theaters in America. I AM CURIOUS-YELLOW is a good film for anyone wanting to study the meat and potatoes (no pun intended) of Swedish cinema. But really, this film doesn\'t have much of a plot.', metadata={'label': 0}), Document(page_content='"I Am Curious: Yellow" is a risible and pretentious steaming pile. It doesn\'t matter what one\'s political views are because this film can hardly be taken seriously on any level. As for the claim that frontal male nudity is an automatic NC-17, that isn\'t true. I\'ve seen R-rated films with male nudity. Granted, they only offer some fleeting views, but where are the R-rated films with gaping vulvas and flapping labia? Nowhere, because they don\'t exist. The same goes for those crappy cable shows: schlongs swinging in the breeze but not a clitoris in sight. And those pretentious indie movies like The Brown Bunny, in which we\'re treated to the site of Vincent Gallo\'s throbbing johnson, but not a trace of pink visible on Chloe Sevigny. Before crying (or implying) "double-standard" in matters of nudity, the mentally obtuse should take into account one unavoidably obvious anatomical difference between men and women: there are no genitals on display when actresses appears nude, and the same cannot be said for a man. In fact, you generally won\'t see female genitals in an American film in anything short of porn or explicit erotica.
This alleged double-standard is less a double standard than an admittedly depressing ability to come to terms culturally with the insides of women\'s bodies.', metadata={'label': 0}), Document(page_content="If only to avoid making this type of film in the future. This film is interesting as an experiment but tells no cogent story.<br /><br />One might feel virtuous for sitting thru it because it touches on so many IMPORTANT issues but it does so without any discernable motive. The viewer comes away with no new perspectives (unless one comes up with one while one's mind wanders, as it will invariably do during this pointless film).<br /><br />One might better spend one's time staring out a window at a tree growing.<br /><br />", metadata={'label': 0}), Document(page_content="This film was probably inspired by Godard's Masculin, féminin and I urge you to see that film instead.<br /><br />The film has two strong elements and those are, (1) the realistic acting (2) the impressive, undeservedly good, photo. Apart from that, what strikes me most is the endless stream of silliness. Lena Nyman has to be most annoying actress in the world. She acts so stupid and with all the nudity in this film,...it's unattractive. Comparing to Godard's film, intellectuality has been replaced with stupidity. Without going too far on this subject, I would say that follows from the difference in ideals between the French and the Swedish society.<br /><br />A movie of its time, and place. 2/10.", metadata={'label': 0}), Document(page_content='Oh, brother...after hearing about this ridiculous film for umpteen years all I can think of is that old Peggy Lee song..<br /><br />"Is that all there is??" ...I was just an early teen when this smoked fish hit the U.S. I was too young to get in the theater (although I did manage to sneak into "Goodbye Columbus"). Then a screening at a local film museum beckoned - Finally I could see this film, except now I was as old as my parents were when they schlepped to see it!!<br /><br />The ONLY reason this film was not condemned to the anonymous sands of time was because of the obscenity case sparked by its U.S. release. MILLIONS of people flocked to this stinker, thinking they were going to see a sex film...Instead, they got lots of closeups of gnarly, repulsive Swedes, on-street interviews in bland shopping malls, asinie political pretension...and feeble who-cares simulated sex scenes with saggy, pale actors.<br /><br />Cultural icon, holy grail, historic artifact..whatever this thing was, shred it, burn it, then stuff the ashes in a lead box!<br /><br />Elite esthetes still scrape to find value in its boring pseudo revolutionary political spewings..But if it weren\'t for the censorship scandal, it would have been ignored, then forgotten.<br /><br />Instead, the "I Am Blank, Blank" rhythymed title was repeated endlessly for years as a titilation for porno films (I am Curious, Lavender - for gay films, I Am Curious, Black - for blaxploitation films, etc..) and every ten years or so the thing rises from the dead, to be viewed by a new generation of suckers who want to see that "naughty sex film" that "revolutionized the film industry"...<br /><br />Yeesh, avoid like the plague..Or if you MUST see it - rent the video and fast forward to the "dirty" parts, just to get it over with.<br /><br />', metadata={'label': 0}), Document(page_content="I would put this at the top of my list of films in the category of unwatchable trash! 
There are films that are bad, but the worst kind are the ones that are unwatchable but you are suppose to like them because they are supposed to be good for you! The sex sequences, so shocking in its day, couldn't even arouse a rabbit. The so called controversial politics is strictly high school sophomore amateur night Marxism. The film is self-consciously arty in the worst sense of the term. The photography is in a harsh grainy black and white. Some scenes are out of focus or taken from the wrong angle. Even the sound is bad! And some people call this art?<br /><br />", metadata={'label': 0}), Document(page_content="Whoever wrote the screenplay for this movie obviously never consulted any books about Lucille Ball, especially her autobiography. I've never seen so many mistakes in a biopic, ranging from her early years in Celoron and Jamestown to her later years with Desi. I could write a whole list of factual errors, but it would go on for pages. In all, I believe that Lucille Ball is one of those inimitable people who simply cannot be portrayed by anyone other than themselves. If I were Lucie Arnaz and Desi, Jr., I would be irate at how many mistakes were made in this film. The filmmakers tried hard, but the movie seems awfully sloppy to me.", metadata={'label': 0}), Document(page_content='When I first saw a glimpse of this movie, I quickly noticed the actress who was playing the role of Lucille Ball. Rachel York\'s portrayal of Lucy is absolutely awful. Lucille Ball was an astounding comedian with incredible talent. To think about a legend like Lucille Ball being portrayed the way she was in the movie is horrendous. I cannot believe out of all the actresses in the world who could play a much better Lucy, the producers decided to get Rachel York. She might be a good actress in other roles but to play the role of Lucille Ball is tough. It is pretty hard to find someone who could resemble Lucille Ball, but they could at least find someone a bit similar in looks and talent. If you noticed York\'s portrayal of Lucy in episodes of I Love Lucy like the chocolate factory or vitavetavegamin, nothing is similar in any way-her expression, voice, or movement.<br /><br />To top it all off, Danny Pino playing Desi Arnaz is horrible. Pino does not qualify to play as Ricky. He\'s small and skinny, his accent is unreal, and once again, his acting is unbelievable. Although Fred and Ethel were not similar either, they were not as bad as the characters of Lucy and Ricky.<br /><br />Overall, extremely horrible casting and the story is badly told. If people want to understand the real life situation of Lucille Ball, I suggest watching A&E Biography of Lucy and Desi, read the book from Lucille Ball herself, or PBS\' American Masters: Finding Lucy. If you want to see a docudrama, "Before the Laughter" would be a better choice. The casting of Lucille Ball and Desi Arnaz in "Before the Laughter" is much better compared to this. At least, a similar aspect is shown rather than nothing.', metadata={'label': 0}), Document(page_content='Who are these "They"- the actors? the filmmakers? Certainly couldn\'t be the audience- this is among the most air-puffed productions in existence. 
It\'s the kind of movie that looks like it was a lot of fun to shoot\x97 TOO much fun, nobody is getting any actual work done, and that almost always makes for a movie that\'s no fun to watch.<br /><br />Ritter dons glasses so as to hammer home his character\'s status as a sort of doppleganger of the bespectacled Bogdanovich; the scenes with the breezy Ms. Stratten are sweet, but have an embarrassing, look-guys-I\'m-dating-the-prom-queen feel to them. Ben Gazzara sports his usual cat\'s-got-canary grin in a futile attempt to elevate the meager plot, which requires him to pursue Audrey Hepburn with all the interest of a narcoleptic at an insomnia clinic. In the meantime, the budding couple\'s respective children (nepotism alert: Bogdanovich\'s daughters) spew cute and pick up some fairly disturbing pointers on \'love\' while observing their parents. (Ms. Hepburn, drawing on her dignity, manages to rise above the proceedings- but she has the monumental challenge of playing herself, ostensibly.) Everybody looks great, but so what? It\'s a movie and we can expect that much, if that\'s what you\'re looking for you\'d be better off picking up a copy of Vogue.<br /><br />Oh- and it has to be mentioned that Colleen Camp thoroughly annoys, even apart from her singing, which, while competent, is wholly unconvincing... the country and western numbers are woefully mismatched with the standards on the soundtrack. Surely this is NOT what Gershwin (who wrote the song from which the movie\'s title is derived) had in mind; his stage musicals of the 20\'s may have been slight, but at least they were long on charm. "They All Laughed" tries to coast on its good intentions, but nobody- least of all Peter Bogdanovich - has the good sense to put on the brakes.<br /><br />Due in no small part to the tragic death of Dorothy Stratten, this movie has a special place in the heart of Mr. Bogdanovich- he even bought it back from its producers, then distributed it on his own and went bankrupt when it didn\'t prove popular. His rise and fall is among the more sympathetic and tragic of Hollywood stories, so there\'s no joy in criticizing the film... there _is_ real emotional investment in Ms. Stratten\'s scenes. But "Laughed" is a faint echo of "The Last Picture Show", "Paper Moon" or "What\'s Up, Doc"- following "Daisy Miller" and "At Long Last Love", it was a thundering confirmation of the phase from which P.B. has never emerged.<br /><br />All in all, though, the movie is harmless, only a waste of rental. I want to watch people having a good time, I\'ll go to the park on a sunny day. For filmic expressions of joy and love, I\'ll stick to Ernest Lubitsch and Jaques Demy...', metadata={'label': 0}), Document(page_content="This is said to be a personal film for Peter Bogdonavitch. He based it on his life but changed things around to fit the characters, who are detectives. These detectives date beautiful models and have no problem getting them. Sounds more like a millionaire playboy filmmaker than a detective, doesn't it? This entire movie was written by Peter, and it shows how out of touch with real people he was. You're supposed to write what you know, and he did that, indeed. And leaves the audience bored and confused, and jealous, for that matter. This is a curio for people who want to see Dorothy Stratten, who was murdered right after filming. But Patti Hanson, who would, in real life, marry Keith Richards, was also a model, like Stratten, but is a lot better and has a more ample part. 
In fact, Stratten's part seemed forced; added. She doesn't have a lot to do with the story, which is pretty convoluted to begin with. All in all, every character in this film is somebody that very few people can relate with, unless you're millionaire from Manhattan with beautiful supermodels at your beckon call. For the rest of us, it's an irritating snore fest. That's what happens when you're out of touch. You entertain your few friends with inside jokes, and bore all the rest.", metadata={'label': 0}), Document(page_content='It was great to see some of my favorite stars of 30 years ago including John Ritter, Ben Gazarra and Audrey Hepburn. They looked quite wonderful. But that was it. They were not given any characters or good lines to work with. I neither understood or cared what the characters were doing.<br /><br />Some of the smaller female roles were fine, Patty Henson and Colleen Camp were quite competent and confident in their small sidekick parts. They showed some talent and it is sad they didn\'t go on to star in more and better films. Sadly, I didn\'t think Dorothy Stratten got a chance to act in this her only important film role.<br /><br />The film appears to have some fans, and I was very open-minded when I started watching it. I am a big Peter Bogdanovich fan and I enjoyed his last movie, "Cat\'s Meow" and all his early ones from "Targets" to "Nickleodeon". So, it really surprised me that I was barely able to keep awake watching this one.<br /><br />It is ironic that this movie is about a detective agency where the detectives and clients get romantically involved with each other. Five years later, Bogdanovich\'s ex-girlfriend, Cybil Shepherd had a hit television series called "Moonlighting" stealing the story idea from Bogdanovich. Of course, there was a great difference in that the series relied on tons of witty dialogue, while this tries to make do with slapstick and a few screwball lines.<br /><br />Bottom line: It ain\'t no "Paper Moon" and only a very pale version of "What\'s Up, Doc".', metadata={'label': 0}), Document(page_content="I can't believe that those praising this movie herein aren't thinking of some other film. I was prepared for the possibility that this would be awful, but the script (or lack thereof) makes for a film that's also pointless. On the plus side, the general level of craft on the part of the actors and technical crew is quite competent, but when you've got a sow's ear to work with you can't make a silk purse. Ben G fans should stick with just about any other movie he's been in. Dorothy S fans should stick to Galaxina. Peter B fans should stick to Last Picture Show and Target. Fans of cheap laughs at the expense of those who seem to be asking for it should stick to Peter B's amazingly awful book, Killing of the Unicorn.", metadata={'label': 0}), Document(page_content='Never cast models and Playboy bunnies in your films! Bob Fosse\'s "Star 80" about Dorothy Stratten, of whom Bogdanovich was obsessed enough to have married her SISTER after her murder at the hands of her low-life husband, is a zillion times more interesting than Dorothy herself on the silver screen. Patty Hansen is no actress either..I expected to see some sort of lost masterpiece a la Orson Welles but instead got Audrey Hepburn cavorting in jeans and a god-awful "poodlesque" hair-do....Very disappointing...."Paper Moon" and "The Last Picture Show" I could watch again and again. This clunker I could barely sit through once. 
This movie was reputedly not released because of the brouhaha surrounding Ms. Stratten\'s tawdry death; I think the real reason was because it was so bad!', metadata={'label': 0}), Document(page_content="Its not the cast. A finer group of actors, you could not find. Its not the setting. The director is in love with New York City, and by the end of the film, so are we all! Woody Allen could not improve upon what Bogdonovich has done here. If you are going to fall in love, or find love, Manhattan is the place to go. No, the problem with the movie is the script. There is none. The actors fall in love at first sight, words are unnecessary. In the director's own experience in Hollywood that is what happens when they go to work on the set. It is reality to him, and his peers, but it is a fantasy to most of us in the real world. So, in the end, the movie is hollow, and shallow, and message-less.", metadata={'label': 0}), Document(page_content='Today I found "They All Laughed" on VHS on sale in a rental. It was a really old and very used VHS, I had no information about this movie, but I liked the references listed on its cover: the names of Peter Bogdanovich, Audrey Hepburn, John Ritter and specially Dorothy Stratten attracted me, the price was very low and I decided to risk and buy it. I searched IMDb, and the User Rating of 6.0 was an excellent reference. I looked in "Mick Martin & Marsha Porter Video & DVD Guide 2003" and \x96 wow \x96 four stars! So, I decided that I could not waste more time and immediately see it. Indeed, I have just finished watching "They All Laughed" and I found it a very boring overrated movie. The characters are badly developed, and I spent lots of minutes to understand their roles in the story. The plot is supposed to be funny (private eyes who fall in love for the women they are chasing), but I have not laughed along the whole story. The coincidences, in a huge city like New York, are ridiculous. Ben Gazarra as an attractive and very seductive man, with the women falling for him as if her were a Brad Pitt, Antonio Banderas or George Clooney, is quite ridiculous. In the end, the greater attractions certainly are the presence of the Playboy centerfold and playmate of the year Dorothy Stratten, murdered by her husband pretty after the release of this movie, and whose life was showed in "Star 80" and "Death of a Centerfold: The Dorothy Stratten Story"; the amazing beauty of the sexy Patti Hansen, the future Mrs. Keith Richards; the always wonderful, even being fifty-two years old, Audrey Hepburn; and the song "Amigo", from Roberto Carlos. Although I do not like him, Roberto Carlos has been the most popular Brazilian singer since the end of the 60\'s and is called by his fans as "The King". I will keep this movie in my collection only because of these attractions (manly Dorothy Stratten). 
My vote is four.<br /><br />Title (Brazil): "Muito Riso e Muita Alegria" ("Many Laughs and Lots of Happiness")', metadata={'label': 0})]

## Example

In this example, we use data from a dataset to answer a question.

```python
from langchain.indexes import VectorstoreIndexCreator
from langchain.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoader

dataset_name = "tweet_eval"
page_content_column = "text"
name = "stance_climate"

loader = HuggingFaceDatasetLoader(dataset_name, page_content_column, name)
index = VectorstoreIndexCreator().from_loaders([loader])
```

```text
Found cached dataset tweet_eval

  0%|          | 0/3 [00:00<?, ?it/s]

Using embedded DuckDB without persistence: data will be transient
```

```python
query = "What are the most used hashtag?"
result = index.query(query)
result
```

```text
' The most used hashtags in this context are #UKClimate2015, #Sustainability, #TakeDownTheFlag, #LoveWins, #CSOTA, #ClimateSummitoftheAmericas, #SM, and #SocialMedia.'
```
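Returning to the IMDB example at the top of this page, the remaining dataset columns ride along as Document metadata (the label field shown in the output). A small inspection sketch, not from the original page:

```python
# Peek at the first few IMDB documents and their label metadata.
for doc in data[:3]:
    print(doc.metadata["label"], doc.page_content[:80])
```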
https://python.langchain.com/docs/integrations/document_loaders/ifixit
# iFixit

iFixit is the largest, open repair community on the web. The site contains nearly 100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is licensed under CC-BY-NC-SA 3.0.

This loader allows you to download the text of repair guides, Q&A's, and wikis for devices on iFixit using their open APIs. It's incredibly useful for context related to technical documents and answers to questions about devices in the corpus of data on iFixit.

```python
from langchain.document_loaders import IFixitLoader

loader = IFixitLoader("https://www.ifixit.com/Teardown/Banana+Teardown/811")
data = loader.load()
data
```

```text
[Document(page_content="# Banana Teardown\nIn this teardown, we open a banana to see what's inside. Yellow and delicious, but most importantly, yellow.\n\n\n###Tools Required:\n\n - Fingers\n\n - Teeth\n\n - Thumbs\n\n\n###Parts Required:\n\n - None\n\n\n## Step 1\nTake one banana from the bunch.\nDon't squeeze too hard!\n\n\n## Step 2\nHold the banana in your left hand and grip the stem between your right thumb and forefinger.\n\n\n## Step 3\nPull the stem downward until the peel splits.\n\n\n## Step 4\nInsert your thumbs into the split of the peel and pull the two sides apart.\nExpose the top of the banana. It may be slightly squished from pulling on the stem, but this will not affect the flavor.\n\n\n## Step 5\nPull open the peel, starting from your original split, and opening it along the length of the banana.\n\n\n## Step 6\nRemove fruit from peel.\n\n\n## Step 7\nEat and enjoy!\nThis is where you'll need your teeth.\nDo not choke on banana!\n", lookup_str='', metadata={'source': 'https://www.ifixit.com/Teardown/Banana+Teardown/811', 'title': 'Banana Teardown'}, lookup_index=0)]
```

```python
loader = IFixitLoader(
    "https://www.ifixit.com/Answers/View/318583/My+iPhone+6+is+typing+and+opening+apps+by+itself"
)
data = loader.load()
data
```

[Document(page_content='# My iPhone 6 is typing and opening apps by itself\nmy iphone 6 is typing and opening apps by itself. How do i fix this. I just bought it last week.\nI restored as manufactures cleaned up the screen\nthe problem continues\n\n## 27 Answers\n\nFilter by: \n\nMost Helpful\nNewest\nOldest\n\n### Accepted Answer\nHi,\nWhere did you buy it? If you bought it from Apple or from an official retailer like Carphone warehouse etc. Then you\'ll have a year warranty and can get it replaced free.\nIf you bought it second hand, from a third part repair shop or online, then it may still have warranty, unless it is refurbished and has been repaired elsewhere.\nIf this is the case, it may be the screen that needs replacing to solve your issue.\nEither way, wherever you got it, it\'s best to return it and get a refund or a replacement device. :-)\n\n\n\n### Most Helpful Answer\nI had the same issues, screen freezing, opening apps by itself, selecting the screens and typing on it\'s own. I first suspected aliens and then ghosts and then hackers.\niPhone 6 is weak physically and tend to bend on pressure. And my phone had no case or cover.\nI took the phone to apple stores and they said sensors need to be replaced and possibly screen replacement as well. My phone is just 17 months old.\nHere is what I did two days ago and since then it is working like a charm..\nHold the phone in portrait (as if watching a movie). Twist it very very gently. do it few times.Rest the phone for 10 mins (put it on a flat surface). 
You can now notice those self typing things gone and screen getting stabilized.\nThen, reset the hardware (hold the power and home button till the screen goes off and comes back with apple logo). release the buttons when you see this.\nThen, connect to your laptop and log in to iTunes and reset your phone completely. (please take a back-up first).\nAnd your phone should be good to use again.\nWhat really happened here for me is that the sensors might have stuck to the screen and with mild twisting, they got disengaged/released.\nI posted this in Apple Community and the moderators deleted it, for the best reasons known to them.\nInstead of throwing away your phone (or selling cheaply), try this and you could be saving your phone.\nLet me know how it goes.\n\n\n\n### Other Answer\nIt was the charging cord! I bought a gas station braided cord and it was the culprit. Once I plugged my OEM cord into the phone the GHOSTS went away.\n\n\n\n### Other Answer\nI\'ve same issue that I just get resolved. I first tried to restore it from iCloud back, however it was not a software issue or any virus issue, so after restore same problem continues. Then I get my phone to local area iphone repairing lab, and they detected that it is an LCD issue. LCD get out of order without any reason (It was neither hit or nor slipped, but LCD get out of order all and sudden, while using it) it started opening things at random. I get LCD replaced with new one, that cost me $80.00 in total ($70.00 LCD charges + $10.00 as labor charges to fix it). iPhone is back to perfect mode now. It was iphone 6s. Thanks.\n\n\n\n### Other Answer\nI was having the same issue with my 6 plus, I took it to a repair shop, they opened the phone, disconnected the three ribbons the screen has, blew up and cleaned the connectors and connected the screen again and it solved the issue… it’s hardware, not software.\n\n\n\n### Other Answer\nHey.\nJust had this problem now. As it turns out, you just need to plug in your phone. I use a case and when I took it off I noticed that there was a lot of dust and dirt around the areas that the case didn\'t cover. I shined a light in my ports and noticed they were filled with dust. Tomorrow I plan on using pressurized air to clean it out and the problem should be solved. If you plug in your phone and unplug it and it stops the issue, I recommend cleaning your phone thoroughly.\n\n\n\n### Other Answer\nI simply changed the power supply and problem was gone. The block that plugs in the wall not the sub cord. The cord was fine but not the block.\n\n\n\n### Other Answer\nSomeone ask! I purchased my iPhone 6s Plus for 1000 from at&t. Before I touched it, I purchased a otter defender case. I read where at&t said touch desease was due to dropping! Bullshit!! I am 56 I have never dropped it!! Looks brand new! Never dropped or abused any way! I have my original charger. I am going to clean it and try everyone’s advice. It really sucks! I had 40,000,000 on my heart of Vegas slots! I play every day. I would be spinning and my fingers were no where max buttons and it would light up and switch to max. It did it 3 times before I caught it light up by its self. It sucks. Hope I can fix it!!!!\n\n\n\n### Other Answer\nNo answer, but same problem with iPhone 6 plus--random, self-generated jumping amongst apps and typing on its own--plus freezing regularly (aha--maybe that\'s what the "plus" in "6 plus" refers to?). An Apple Genius recommended upgrading to iOS 11.3.1 from 11.2.2, to see if that fixed the trouble. 
If it didn\'t, Apple will sell me a new phone for $168! Of couese the OS upgrade didn\'t fix the problem. Thanks for helping me figure out that it\'s most likely a hardware problem--which the "genius" probably knows too.\nI\'m getting ready to go Android.\n\n\n\n### Other Answer\nI experienced similar ghost touches. Two weeks ago, I changed my iPhone 6 Plus shell (I had forced the phone into it because it’s pretty tight), and also put a new glass screen protector (the edges of the protector don’t stick to the screen, weird, so I brushed pressure on the edges at times to see if they may smooth out one day miraculously). I’m not sure if I accidentally bend the phone when I installed the shell, or, if I got a defective glass protector that messes up the touch sensor. Well, yesterday was the worse day, keeps dropping calls and ghost pressing keys for me when I was on a call. I got fed up, so I removed the screen protector, and so far problems have not reoccurred yet. I’m crossing my fingers that problems indeed solved.\n\n\n\n### Other Answer\nthank you so much for this post! i was struggling doing the reset because i cannot type userids and passwords correctly because the iphone 6 plus i have kept on typing letters incorrectly. I have been doing it for a day until i come across this article. Very helpful! God bless you!!\n\n\n\n### Other Answer\nI just turned it off, and turned it back on.\n\n\n\n### Other Answer\nMy problem has not gone away completely but its better now i changed my charger and turned off prediction ....,,,now it rarely happens\n\n\n\n### Other Answer\nI tried all of the above. I then turned off my home cleaned it with isopropyl alcohol 90%. Then I baked it in my oven on warm for an hour and a half over foil. Took it out and set it cool completely on the glass top stove. Then I turned on and it worked.\n\n\n\n### Other Answer\nI think at& t should man up and fix your phone for free! You pay a lot for a Apple they should back it. I did the next 30 month payments and finally have it paid off in June. My iPad sept. Looking forward to a almost 100 drop in my phone bill! Now this crap!!! Really\n\n\n\n### Other Answer\nIf your phone is JailBroken, suggest downloading a virus. While all my symptoms were similar, there was indeed a virus/malware on the phone which allowed for remote control of my iphone (even while in lock mode). My mistake for buying a third party iphone i suppose. Anyway i have since had the phone restored to factory and everything is working as expected for now. I will of course keep you posted if this changes. Thanks to all for the helpful posts, really helped me narrow a few things down.\n\n\n\n### Other Answer\nWhen my phone was doing this, it ended up being the screen protector that i got from 5 below. I took it off and it stopped. I ordered more protectors from amazon and replaced it\n\n\n\n### Other Answer\niPhone 6 Plus first generation….I had the same issues as all above, apps opening by themselves, self typing, ultra sensitive screen, items jumping around all over….it even called someone on FaceTime twice by itself when I was not in the room…..I thought the phone was toast and i’d have to buy a new one took me a while to figure out but it was the extra cheap block plug I bought at a dollar store for convenience of an extra charging station when I move around the house from den to living room…..cord was fine but bought a new Apple brand block plug…no more problems works just fine now. 
This issue was a recent event so had to narrow things down to what had changed recently to my phone so I could figure it out.\nI even had the same problem on a laptop with documents opening up by themselves…..a laptop that was plugged in to the same wall plug as my phone charger with the dollar store block plug….until I changed the block plug.\n\n\n\n### Other Answer\nHad the problem: Inherited a 6s Plus from my wife. She had no problem with it.\nLooks like it was merely the cheap phone case I purchased on Amazon. It was either pinching the edges or torquing the screen/body of the phone. Problem solved.\n\n\n\n### Other Answer\nI bought my phone on march 6 and it was a brand new, but It sucks me uo because it freezing, shaking and control by itself. I went to the store where I bought this and I told them to replacr it, but they told me I have to pay it because Its about lcd issue. Please help me what other ways to fix it. Or should I try to remove the screen or should I follow your step above.\n\n\n\n### Other Answer\nI tried everything and it seems to come back to needing the original iPhone cable…or at least another 1 that would have come with another iPhone…not the $5 Store fast charging cables. My original cable is pretty beat up - like most that I see - but I’ve been beaten up much MUCH less by sticking with its use! I didn’t find that the casing/shell around it or not made any diff.\n\n\n\n### Other Answer\ngreat now I have to wait one more hour to reset my phone and while I was tryin to connect my phone to my computer the computer also restarted smh does anyone else knows how I can get my phone to work… my problem is I have a black dot on the bottom left of my screen an it wont allow me to touch a certain part of my screen unless I rotate my phone and I know the password but the first number is a 2 and it won\'t let me touch 1,2, or 3 so now I have to find a way to get rid of my password and all of a sudden my phone wants to touch stuff on its own which got my phone disabled many times to the point where I have to wait a whole hour and I really need to finish something on my phone today PLEASE HELPPPP\n\n\n\n### Other Answer\nIn my case , iphone 6 screen was faulty. I got it replaced at local repair shop, so far phone is working fine.\n\n\n\n### Other Answer\nthis problem in iphone 6 has many different scenarios and solutions, first try to reconnect the lcd screen to the motherboard again, if didnt solve, try to replace the lcd connector on the motherboard, if not solved, then remains two issues, lcd screen it self or touch IC. in my country some repair shops just change them all for almost 40$ since they dont want to troubleshoot one by one. readers of this comment also should know that partial screen not responding in other iphone models might also have an issue in LCD connector on the motherboard, specially if you lock/unlock screen and screen works again for sometime. lcd connectors gets disconnected lightly from the motherboard due to multiple falls and hits after sometime. best of luck for all\n\n\n\n### Other Answer\nI am facing the same issue whereby these ghost touches type and open apps , I am using an original Iphone cable , how to I fix this issue.\n\n\n\n### Other Answer\nThere were two issues with the phone I had troubles with. It was my dads and turns out he carried it in his pocket. The phone itself had a little bend in it as a result. A little pressure in the opposite direction helped the issue. 
But it also had a tiny crack in the screen which wasnt obvious, once we added a screen protector this fixed the issues entirely.\n\n\n\n### Other Answer\nI had the same problem with my 64Gb iPhone 6+. Tried a lot of things and eventually downloaded all my images and videos to my PC and restarted the phone - problem solved. Been working now for two days.', lookup_str='', metadata={'source': 'https://www.ifixit.com/Answers/View/318583/My+iPhone+6+is+typing+and+opening+apps+by+itself', 'title': 'My iPhone 6 is typing and opening apps by itself'}, lookup_index=0)]loader = IFixitLoader("https://www.ifixit.com/Device/Standard_iPad")data = loader.load()data [Document(page_content="Standard iPad\nThe standard edition of the tablet computer made by Apple.\n== Background Information ==\n\nOriginally introduced in January 2010, the iPad is Apple's standard edition of their tablet computer. In total, there have been ten generations of the standard edition of the iPad.\n\n== Additional Information ==\n\n* [link|https://www.apple.com/ipad-select/|Official Apple Product Page]\n* [link|https://en.wikipedia.org/wiki/IPad#iPad|Official iPad Wikipedia]", lookup_str='', metadata={'source': 'https://www.ifixit.com/Device/Standard_iPad', 'title': 'Standard iPad'}, lookup_index=0)]Searching iFixit using /suggest​If you're looking for a more general way to search iFixit based on a keyword or phrase, the /suggest endpoint will return content related to the search term, then the loader will load the content from each of the suggested items and prep and return the documents.data = IFixitLoader.load_suggestions("Banana")data [Document(page_content='Banana\nTasty fruit. Good source of potassium. Yellow.\n== Background Information ==\n\nCommonly misspelled, this wildly popular, phone shaped fruit serves as nutrition and an obstacle to slow down vehicles racing close behind you. Also used commonly as a synonym for “crazy” or “insane”.\n\nBotanically, the banana is considered a berry, although it isn’t included in the culinary berry category containing strawberries and raspberries. Belonging to the genus Musa, the banana originated in Southeast Asia and Australia. Now largely cultivated throughout South and Central America, bananas are largely available throughout the world. They are especially valued as a staple food group in developing countries due to the banana tree’s ability to produce fruit year round.\n\nThe banana can be easily opened. Simply remove the outer yellow shell by cracking the top of the stem. Then, with the broken piece, peel downward on each side until the fruity components on the inside are exposed. Once the shell has been removed it cannot be put back together.\n\n== Technical Specifications ==\n\n* Dimensions: Variable depending on genetics of the parent tree\n* Color: Variable depending on ripeness, region, and season\n\n== Additional Information ==\n\n[link|https://en.wikipedia.org/wiki/Banana|Wiki: Banana]', lookup_str='', metadata={'source': 'https://www.ifixit.com/Device/Banana', 'title': 'Banana'}, lookup_index=0), Document(page_content="# Banana Teardown\nIn this teardown, we open a banana to see what's inside. 
Yellow and delicious, but most importantly, yellow.\n\n\n###Tools Required:\n\n - Fingers\n\n - Teeth\n\n - Thumbs\n\n\n###Parts Required:\n\n - None\n\n\n## Step 1\nTake one banana from the bunch.\nDon't squeeze too hard!\n\n\n## Step 2\nHold the banana in your left hand and grip the stem between your right thumb and forefinger.\n\n\n## Step 3\nPull the stem downward until the peel splits.\n\n\n## Step 4\nInsert your thumbs into the split of the peel and pull the two sides apart.\nExpose the top of the banana. It may be slightly squished from pulling on the stem, but this will not affect the flavor.\n\n\n## Step 5\nPull open the peel, starting from your original split, and opening it along the length of the banana.\n\n\n## Step 6\nRemove fruit from peel.\n\n\n## Step 7\nEat and enjoy!\nThis is where you'll need your teeth.\nDo not choke on banana!\n", lookup_str='', metadata={'source': 'https://www.ifixit.com/Teardown/Banana+Teardown/811', 'title': 'Banana Teardown'}, lookup_index=0)]PreviousHuggingFace datasetNextImagesSearching iFixit using /suggest
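For a quick end-to-end check, the documents returned by load_suggestions can be dropped into a vector store and searched, mirroring the pattern used by several other loaders in these docs. This is only a sketch: the search term, the question, and the use of OpenAI embeddings (which assumes OPENAI_API_KEY is set and faiss-cpu is installed) are illustrative choices, not part of the loader itself.
```
from langchain.document_loaders import IFixitLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Pull every guide, device wiki, and Q&A page iFixit suggests for a keyword.
docs = IFixitLoader.load_suggestions("iPhone 6")

# Index the suggested documents so they can be searched semantically.
# Requires `pip install faiss-cpu` and an OPENAI_API_KEY for the embeddings.
db = FAISS.from_documents(docs, OpenAIEmbeddings())
hits = db.similarity_search("Why does my phone type by itself?")
print(hits[0].metadata["title"])
```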
507
https://python.langchain.com/docs/integrations/document_loaders/image
ComponentsDocument loadersImagesOn this pageImagesThis covers how to load images such as JPG or PNG into a document format that we can use downstream.Using Unstructured​#!pip install pdfminerfrom langchain.document_loaders.image import UnstructuredImageLoaderloader = UnstructuredImageLoader("layout-parser-paper-fast.jpg")data = loader.load()data[0] Document(page_content="LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n\n\n‘Zxjiang Shen' (F3}, Ruochen Zhang”, Melissa Dell*, Benjamin Charles Germain\nLeet, Jacob Carlson, and Weining LiF\n\n\nsugehen\n\nshangthrows, et\n\n“Abstract. Recent advanocs in document image analysis (DIA) have been\n‘pimarliy driven bythe application of neural networks dell roar\n{uteomer could be aly deployed in production and extended fo farther\n[nvetigtion. However, various factory ke lcely organize codebanee\nsnd sophisticated modal cnigurations compat the ey ree of\n‘erin! innovation by wide sence, Though there have been sng\n‘Hors to improve reuablty and simplify deep lees (DL) mode\n‘aon, sone of them ae optimized for challenge inthe demain of DIA,\nThis roprscte a major gap in the extng fol, sw DIA i eal to\nscademic research acon wie range of dpi in the social ssencee\n[rary for streamlining the sage of DL in DIA research and appicn\n‘tons The core LayoutFaraer brary comes with a sch of simple and\nIntative interfaee or applying and eutomiing DI. odel fr Inyo de\npltfom for sharing both protrined modes an fal document dist\n{ation pipeline We demonutate that LayootPareer shea fr both\nlightweight and lrgeseledgtieation pipelines in eal-word uae ces\nThe leary pblely smal at Btspe://layost-pareergsthab So\n\n\n\n‘Keywords: Document Image Analysis» Deep Learning Layout Analysis\n‘Character Renguition - Open Serres dary « Tol\n\n\nIntroduction\n\n\n‘Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndoctiment image analysis (DIA) tea including document image clasiffeation [I]\n", lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg'}, lookup_index=0)Retain Elements​Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg", mode="elements")data = loader.load()data[0] Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg', 'filename': 'layout-parser-paper-fast.jpg', 'page_number': 1, 'category': 'Title'}, lookup_index=0)PreviousiFixitNextImage captionsUsing UnstructuredRetain Elements
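To OCR a whole folder of images rather than a single file, the generic DirectoryLoader can be pointed at UnstructuredImageLoader. This is a sketch; the directory path and glob below are placeholders, so adjust them to wherever your images live.
```
from langchain.document_loaders import DirectoryLoader
from langchain.document_loaders.image import UnstructuredImageLoader

# Run the image loader over every JPG in a folder (path and glob are placeholders).
loader = DirectoryLoader(
    "example_data/",
    glob="*.jpg",
    loader_cls=UnstructuredImageLoader,
)
docs = loader.load()
print(len(docs), docs[0].metadata)
```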
508
https://python.langchain.com/docs/integrations/document_loaders/image_captions
ComponentsDocument loadersImage captionsOn this pageImage captionsBy default, the loader utilizes the pre-trained Salesforce BLIP image captioning model.This notebook shows how to use the ImageCaptionLoader to generate a query-able index of image captions#!pip install transformersfrom langchain.document_loaders import ImageCaptionLoaderPrepare a list of image urls from Wikimedia​list_image_urls = [ "https://upload.wikimedia.org/wikipedia/commons/thumb/5/5a/Hyla_japonica_sep01.jpg/260px-Hyla_japonica_sep01.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/7/71/Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg/270px-Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg/251px-Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Passion_fruits_-_whole_and_halved.jpg/270px-Passion_fruits_-_whole_and_halved.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Messier83_-_Heic1403a.jpg/277px-Messier83_-_Heic1403a.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/b/b6/2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg/288px-2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg", "https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg/224px-Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg",]Create the loader​loader = ImageCaptionLoader(path_images=list_image_urls)list_docs = loader.load()list_docsfrom PIL import Imageimport requestsImage.open(requests.get(list_image_urls[0], stream=True).raw).convert("RGB")Create the index​from langchain.indexes import VectorstoreIndexCreatorindex = VectorstoreIndexCreator().from_loaders([loader])Query​query = "What's the painting about?"index.query(query)query = "What kind of images are there?"index.query(query)PreviousImagesNextIMSDbPrepare a list of image urls from WikimediaCreate the loaderCreate the indexQuery
509
https://python.langchain.com/docs/integrations/document_loaders/imsdb
ComponentsDocument loadersIMSDbIMSDbIMSDb is the Internet Movie Script Database.This covers how to load IMSDb webpages into a document format that we can use downstream.from langchain.document_loaders import IMSDbLoaderloader = IMSDbLoader("https://imsdb.com/scripts/BlacKkKlansman.html")data = loader.load()data[0].page_content[:500] '\n\r\n\r\n\r\n\r\n BLACKKKLANSMAN\r\n \r\n \r\n \r\n \r\n Written by\r\n\r\n Charlie Wachtel & David Rabinowitz\r\n\r\n and\r\n\r\n Kevin Willmott & Spike Lee\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n FADE IN:\r\n \r\n SCENE FROM "GONE WITH'data[0].metadata {'source': 'https://imsdb.com/scripts/BlacKkKlansman.html'}PreviousImage captionsNextIugu
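A full screenplay is far too long to embed or prompt as a single document, so it is usually chunked before downstream use. The splitter settings below are arbitrary example values, not something the loader requires.
```
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Split the script loaded above into overlapping chunks before embedding
# or summarization; chunk_size/chunk_overlap are illustrative values.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(data)
print(f"{len(chunks)} chunks, e.g. {chunks[0].page_content[:80]!r}")
```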
510
https://python.langchain.com/docs/integrations/document_loaders/iugu
ComponentsDocument loadersIuguIuguIugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.This notebook covers how to load data from the Iugu REST API into a format that can be ingested into LangChain, along with example usage for vectorization.import osfrom langchain.document_loaders import IuguLoaderfrom langchain.indexes import VectorstoreIndexCreatorThe Iugu API requires an access token, which can be found inside of the Iugu dashboard.This document loader also requires a resource option which defines what data you want to load. The available resources (such as charges, used in the example below) are described in the Iugu API documentation.iugu_loader = IuguLoader("charges")# Create a vectorstore retriever from the loader# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more detailsindex = VectorstoreIndexCreator().from_loaders([iugu_loader])iugu_doc_retriever = index.vectorstore.as_retriever()PreviousIMSDbNextJoplin
511
https://python.langchain.com/docs/integrations/document_loaders/joplin
ComponentsDocument loadersJoplinJoplinJoplin is an open source note-taking app. Capture your thoughts and securely access them from any device.This notebook covers how to load documents from a Joplin database.Joplin has a REST API for accessing its local database. This loader uses the API to retrieve all notes in the database and their metadata. This requires an access token that can be obtained from the app by following these steps:Open the Joplin app. The app must stay open while the documents are being loaded.Go to settings / options and select "Web Clipper".Make sure that the Web Clipper service is enabled.Under "Advanced Options", copy the authorization token.You may either initialize the loader directly with the access token, or store it in the environment variable JOPLIN_ACCESS_TOKEN.An alternative to this approach is to export Joplin's note database to Markdown files (optionally, with Front Matter metadata) and use a Markdown loader, such as ObsidianLoader, to load them.from langchain.document_loaders import JoplinLoaderloader = JoplinLoader(access_token="<access-token>")docs = loader.load()PreviousIuguNextJupyter Notebook
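As noted above, the access token can also come from the JOPLIN_ACCESS_TOKEN environment variable instead of the constructor. A minimal sketch of that variant (the token value is a placeholder, and the Joplin app must be running with the Web Clipper service enabled):
```
import os

from langchain.document_loaders import JoplinLoader

# Placeholder token; in practice export JOPLIN_ACCESS_TOKEN in your shell.
os.environ["JOPLIN_ACCESS_TOKEN"] = "<access-token>"

loader = JoplinLoader()  # picks the token up from the environment
docs = loader.load()
print(len(docs))
```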
512
https://python.langchain.com/docs/integrations/document_loaders/jupyter_notebook
ComponentsDocument loadersJupyter NotebookJupyter NotebookJupyter Notebook (formerly IPython Notebook) is a web-based interactive computational environment for creating notebook documents.This notebook covers how to load data from a Jupyter notebook (.html) into a format suitable for LangChain.from langchain.document_loaders import NotebookLoaderloader = NotebookLoader( "example_data/notebook.html", include_outputs=True, max_output_length=20, remove_newline=True,)NotebookLoader.load() loads the .html notebook file into a Document object.Parameters:include_outputs (bool): whether to include cell outputs in the resulting document (default is False).max_output_length (int): the maximum number of characters to include from each cell output (default is 10).remove_newline (bool): whether to remove newline characters from the cell sources and outputs (default is False).traceback (bool): whether to include full traceback (default is False).loader.load() [Document(page_content='\'markdown\' cell: \'[\'# Notebook\', \'\', \'This notebook covers how to load data from an .html notebook into a format suitable by LangChain.\']\'\n\n \'code\' cell: \'[\'from langchain.document_loaders import NotebookLoader\']\'\n\n \'code\' cell: \'[\'loader = NotebookLoader("example_data/notebook.html")\']\'\n\n \'markdown\' cell: \'[\'`NotebookLoader.load()` loads the `.html` notebook file into a `Document` object.\', \'\', \'**Parameters**:\', \'\', \'* `include_outputs` (bool): whether to include cell outputs in the resulting document (default is False).\', \'* `max_output_length` (int): the maximum number of characters to include from each cell output (default is 10).\', \'* `remove_newline` (bool): whether to remove newline characters from the cell sources and outputs (default is False).\', \'* `traceback` (bool): whether to include full traceback (default is False).\']\'\n\n \'code\' cell: \'[\'loader.load(include_outputs=True, max_output_length=20, remove_newline=True)\']\'\n\n', metadata={'source': 'example_data/notebook.html'})]PreviousJoplinNextLarkSuite (FeiShu)
513
https://python.langchain.com/docs/integrations/document_loaders/larksuite
ComponentsDocument loadersLarkSuite (FeiShu)LarkSuite (FeiShu)LarkSuite is an enterprise collaboration platform developed by ByteDance.This notebook covers how to load data from the LarkSuite REST API into a format that can be ingested into LangChain, along with example usage for text summarization.The LarkSuite API requires an access token (tenant_access_token or user_access_token); check out the LarkSuite open platform document for API details.from getpass import getpassfrom langchain.document_loaders.larksuite import LarkSuiteDocLoaderDOMAIN = input("larksuite domain")ACCESS_TOKEN = getpass("larksuite tenant_access_token or user_access_token")DOCUMENT_ID = input("larksuite document id")from pprint import pprintlarksuite_loader = LarkSuiteDocLoader(DOMAIN, ACCESS_TOKEN, DOCUMENT_ID)docs = larksuite_loader.load()pprint(docs) [Document(page_content='Test Doc\nThis is a Test Doc\n\n1\n2\n3\n\n', metadata={'document_id': 'V76kdbd2HoBbYJxdiNNccajunPf', 'revision_id': 11, 'title': 'Test Doc'})]# see https://python.langchain.com/docs/use_cases/summarization for more detailsfrom langchain.chains.summarize import load_summarize_chainchain = load_summarize_chain(llm, chain_type="map_reduce")chain.run(docs)PreviousJupyter NotebookNextMastodon
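The summarization snippet above assumes an llm object already exists. A minimal way to define one, using ChatOpenAI purely as an illustration (any LangChain chat model works, and OPENAI_API_KEY must be set):
```
from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import ChatOpenAI

# Define the LLM referenced by the example above; ChatOpenAI is one option.
llm = ChatOpenAI(temperature=0)

chain = load_summarize_chain(llm, chain_type="map_reduce")
print(chain.run(docs))  # `docs` comes from larksuite_loader.load() above
```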
514
https://python.langchain.com/docs/integrations/document_loaders/mastodon
ComponentsDocument loadersMastodonMastodonMastodon is a federated social media and social networking service.This loader fetches the text from the "toots" of a list of Mastodon accounts, using the Mastodon.py Python package.Public accounts can be queried by default without any authentication. If non-public accounts or instances are queried, you have to register an application for your account which gets you an access token, and set that token and your account's API base URL.Then you need to pass in the Mastodon account names you want to extract, in the @account@instance format.from langchain.document_loaders import MastodonTootsLoader#!pip install Mastodon.pyloader = MastodonTootsLoader( mastodon_accounts=["@Gargron@mastodon.social"], number_toots=50, # Default value is 100)# Or set up access information to use a Mastodon app.# Note that the access token can either be passed into# the constructor or you can set the environment variable "MASTODON_ACCESS_TOKEN".# loader = MastodonTootsLoader(# access_token="<ACCESS TOKEN OF MASTODON APP>",# api_base_url="<API BASE URL OF MASTODON APP INSTANCE>",# mastodon_accounts=["@Gargron@mastodon.social"],# number_toots=50, # Default value is 100# )documents = loader.load()for doc in documents[:3]: print(doc.page_content) print("=" * 80) <p>It is tough to leave this behind and go back to reality. And some people live here! I’m sure there are downsides but it sounds pretty good to me right now.</p> ================================================================================ <p>I wish we could stay here a little longer, but it is time to go home 🥲</p> ================================================================================ <p>Last day of the honeymoon. And it’s <a href="https://mastodon.social/tags/caturday" class="mention hashtag" rel="tag">#<span>caturday</span></a>! This cute tabby came to the restaurant to beg for food and got some chicken.</p> ================================================================================The toot texts (the documents' page_content) are by default HTML as returned by the Mastodon API.PreviousLarkSuite (FeiShu)NextMediaWiki Dump
515
https://python.langchain.com/docs/integrations/document_loaders/mediawikidump
ComponentsDocument loadersMediaWiki DumpMediaWiki DumpMediaWiki XML Dumps contain the content of a wiki (wiki pages with all their revisions), without the site-related data. An XML dump does not create a full backup of the wiki database; the dump does not contain user accounts, images, edit logs, etc.This covers how to load a MediaWiki XML dump file into a document format that we can use downstream.It uses mwxml from mediawiki-utilities to dump and mwparserfromhell from earwig to parse MediaWiki wikicode.Dump files can be obtained with dumpBackup.php or on the Special:Statistics page of the Wiki.# mediawiki-utilities supports XML schema 0.11 in unmerged branchespip install -qU git+https://github.com/mediawiki-utilities/python-mwtypes@updates_schema_0.11# mediawiki-utilities mwxml has a bug, fix PR pendingpip install -qU git+https://github.com/gdedrouas/python-mwxml@xml_format_0.11pip install -qU mwparserfromhellfrom langchain.document_loaders import MWDumpLoaderloader = MWDumpLoader( file_path = "example_data/testmw_pages_current.xml", encoding="utf8", #namespaces = [0,2,3] Optional list to load only specific namespaces. Loads all namespaces by default. skip_redirects = True, #will skip over pages that just redirect to other pages (or not if False) stop_on_error = False #will skip over pages that cause parsing errors (or not if False) )documents = loader.load()print(f"You have {len(documents)} document(s) in your data ") You have 177 document(s) in your data documents[:5] [Document(page_content='\t\n\t\n\tArtist\n\tReleased\n\tRecorded\n\tLength\n\tLabel\n\tProducer', metadata={'source': 'Album'}), Document(page_content='{| class="article-table plainlinks" style="width:100%;"\n|- style="font-size:18px;"\n! style="padding:0px;" | Template documentation\n|-\n| Note: portions of the template sample may not be visible without values provided.\n|-\n| View or edit this documentation. 
(About template documentation)\n|-\n| Editors can experiment in this template\'s [ sandbox] and [ test case] pages.\n|}Category:Documentation templates', metadata={'source': 'Documentation'}), Document(page_content='Description\nThis template is used to insert descriptions on template pages.\n\nSyntax\nAdd <noinclude></noinclude> at the end of the template page.\n\nAdd <noinclude></noinclude> to transclude an alternative page from the /doc subpage.\n\nUsage\n\nOn the Template page\nThis is the normal format when used:\n\nTEMPLATE CODE\n<includeonly>Any categories to be inserted into articles by the template</includeonly>\n<noinclude>{{Documentation}}</noinclude>\n\nIf your template is not a completed div or table, you may need to close the tags just before {{Documentation}} is inserted (within the noinclude tags).\n\nA line break right before {{Documentation}} can also be useful as it helps prevent the documentation template "running into" previous code.\n\nOn the documentation page\nThe documentation page is usually located on the /doc subpage for a template, but a different page can be specified with the first parameter of the template (see Syntax).\n\nNormally, you will want to write something like the following on the documentation page:\n\n==Description==\nThis template is used to do something.\n\n==Syntax==\nType <code>{{t|templatename}}</code> somewhere.\n\n==Samples==\n<code><nowiki>{{templatename|input}}</nowiki></code> \n\nresults in...\n\n{{templatename|input}}\n\n<includeonly>Any categories for the template itself</includeonly>\n<noinclude>[[Category:Template documentation]]</noinclude>\n\nUse any or all of the above description/syntax/sample output sections. You may also want to add "see also" or other sections.\n\nNote that the above example also uses the Template:T template.\n\nCategory:Documentation templatesCategory:Template documentation', metadata={'source': 'Documentation/doc'}), Document(page_content='Description\nA template link with a variable number of parameters (0-20).\n\nSyntax\n \n\nSource\nImproved version not needing t/piece subtemplate developed on Templates wiki see the list of authors. Copied here via CC-By-SA 3.0 license.\n\nExample\n\nCategory:General wiki templates\nCategory:Template documentation', metadata={'source': 'T/doc'}), Document(page_content='\t\n\t\t \n\t\n\t\t Aliases\n\t Relatives\n\t Affiliation\n Occupation\n \n Biographical information\n Marital status\n \tDate of birth\n Place of birth\n Date of death\n Place of death\n \n Physical description\n Species\n Gender\n Height\n Weight\n Eye color\n\t\n Appearances\n Portrayed by\n Appears in\n Debut\n ', metadata={'source': 'Character'})]PreviousMastodonNextMerge Documents Loader
516
https://python.langchain.com/docs/integrations/document_loaders/merge_doc
ComponentsDocument loadersMerge Documents LoaderMerge Documents LoaderMerge the documents returned from a set of specified data loaders.from langchain.document_loaders import WebBaseLoaderloader_web = WebBaseLoader( "https://github.com/basecamp/handbook/blob/master/37signals-is-you.md")from langchain.document_loaders import PyPDFLoaderloader_pdf = PyPDFLoader("../MachineLearning-Lecture01.pdf")from langchain.document_loaders.merge import MergedDataLoaderloader_all = MergedDataLoader(loaders=[loader_web, loader_pdf])docs_all = loader_all.load()len(docs_all) 23PreviousMediaWiki DumpNextmhtml
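Since the merged result is just a flat list of Documents, the source metadata written by each underlying loader shows what each input contributed. A small, purely illustrative check:
```
from collections import Counter

# WebBaseLoader yields one document per URL and PyPDFLoader one per page,
# so counting by `source` shows what each loader contributed to the merge.
print(Counter(doc.metadata.get("source") for doc in docs_all))
```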
517
https://python.langchain.com/docs/integrations/document_loaders/mhtml
ComponentsDocument loadersmhtmlmhtmlMHTML is used both for emails and for archived webpages. MHTML, sometimes referred to as MHT, stands for MIME HTML and is a single file in which an entire webpage is archived. When a webpage is saved in MHTML format, the resulting file contains the HTML code along with images, audio files, Flash animations, etc.from langchain.document_loaders import MHTMLLoader# Create a new loader object for the MHTML fileloader = MHTMLLoader( file_path="../../../../../../tests/integration_tests/examples/example.mht")# Load the document from the filedocuments = loader.load()# Print the documents to see the resultsfor doc in documents: print(doc) page_content='LangChain\nLANG CHAIN 🦜️🔗Official Home Page\xa0\n\n\n\n\n\n\n\nIntegrations\n\n\n\nFeatures\n\n\n\n\nBlog\n\n\n\nConceptual Guide\n\n\n\n\nPython Repo\n\n\nJavaScript Repo\n\n\n\nPython Documentation \n\n\nJavaScript Documentation\n\n\n\n\nPython ChatLangChain \n\n\nJavaScript ChatLangChain\n\n\n\n\nDiscord \n\n\nTwitter\n\n\n\n\nIf you have any comments about our WEB page, you can \nwrite us at the address shown above. However, due to \nthe limited number of personnel in our corporate office, we are unable to \nprovide a direct response.\n\nCopyright © 2023-2023 LangChain Inc.\n\n\n' metadata={'source': '../../../../../../tests/integration_tests/examples/example.mht', 'title': 'LangChain'}PreviousMerge Documents LoaderNextMicrosoft OneDrive
518
https://python.langchain.com/docs/integrations/document_loaders/microsoft_onedrive
ComponentsDocument loadersMicrosoft OneDriveOn this pageMicrosoft OneDriveMicrosoft OneDrive (formerly SkyDrive) is a file hosting service operated by Microsoft.This notebook covers how to load documents from OneDrive. Currently, only docx, doc, and pdf files are supported.PrerequisitesRegister an application with the Microsoft identity platform instructions.When registration finishes, the Azure portal displays the app registration's Overview pane. You see the Application (client) ID. Also called the client ID, this value uniquely identifies your application in the Microsoft identity platform.During the steps you will be following at item 1, you can set the redirect URI as http://localhost:8000/callbackDuring the steps you will be following at item 1, generate a new password (client_secret) under the Application Secrets section.Follow the instructions at this document to add the following SCOPES (offline_access and Files.Read.All) to your application.Visit the Graph Explorer Playground to obtain your OneDrive ID. The first step is to ensure you are logged in with the account associated with your OneDrive account. Then you need to make a request to https://graph.microsoft.com/v1.0/me/drive and the response will return a payload with a field id that holds the ID of your OneDrive account.You need to install the o365 package using the command pip install o365.At the end of the steps you must have the following values: CLIENT_IDCLIENT_SECRETDRIVE_ID🧑 Instructions for ingesting your documents from OneDrive🔑 AuthenticationBy default, the OneDriveLoader expects the values of CLIENT_ID and CLIENT_SECRET to be stored as environment variables named O365_CLIENT_ID and O365_CLIENT_SECRET respectively. You could pass those environment variables through a .env file at the root of your application or using the following commands in your script.os.environ['O365_CLIENT_ID'] = "YOUR CLIENT ID"os.environ['O365_CLIENT_SECRET'] = "YOUR CLIENT SECRET"This loader uses an authentication flow called on behalf of a user. It is a 2-step authentication with user consent. When you instantiate the loader, it will print a url that the user must visit to give consent to the app on the required permissions. The user must then visit this url and give consent to the application. Then the user must copy the resulting page url and paste it back on the console. The method will then return True if the login attempt was successful.from langchain.document_loaders.onedrive import OneDriveLoaderloader = OneDriveLoader(drive_id="YOUR DRIVE ID")Once the authentication has been done, the loader will store a token (o365_token.txt) in the ~/.credentials/ folder. This token could be used later to authenticate without the copy/paste steps explained earlier. To use this token for authentication, you need to change the auth_with_token parameter to True in the instantiation of the loader.from langchain.document_loaders.onedrive import OneDriveLoaderloader = OneDriveLoader(drive_id="YOUR DRIVE ID", auth_with_token=True)🗂️ Documents loader📑 Loading documents from a OneDrive DirectoryOneDriveLoader can load documents from a specific folder within your OneDrive. 
For instance, suppose you want to load all documents that are stored in the Documents/clients folder within your OneDrive.from langchain.document_loaders.onedrive import OneDriveLoaderloader = OneDriveLoader(drive_id="YOUR DRIVE ID", folder_path="Documents/clients", auth_with_token=True)documents = loader.load()📑 Loading documents from a list of Documents IDsAnother possibility is to provide a list of object_id values, one for each document you want to load. For that, you will need to query the Microsoft Graph API to find all the document IDs that you are interested in. This link provides a list of endpoints that will be helpful to retrieve the document IDs.For instance, to retrieve information about all objects that are stored at the root of the Documents folder, you need to make a request to: https://graph.microsoft.com/v1.0/drives/{YOUR DRIVE ID}/root/children. Once you have the list of IDs that you are interested in, you can instantiate the loader with the following parameters.from langchain.document_loaders.onedrive import OneDriveLoaderloader = OneDriveLoader(drive_id="YOUR DRIVE ID", object_ids=["ID_1", "ID_2"], auth_with_token=True)documents = loader.load()PreviousmhtmlNextMicrosoft PowerPointPrerequisites🧑 Instructions for ingesting your documents from OneDrive🔑 Authentication🗂️ Documents loader
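Putting the pieces together, a typical session looks like the sketch below. The drive ID and folder path are placeholders, and the first run still requires the interactive consent step described under Authentication before the cached token can be reused.
```
import os

from langchain.document_loaders.onedrive import OneDriveLoader

# Credentials for the registered Azure app (placeholder values).
os.environ["O365_CLIENT_ID"] = "YOUR CLIENT ID"
os.environ["O365_CLIENT_SECRET"] = "YOUR CLIENT SECRET"

# Reuse the cached o365_token.txt after the first interactive login.
loader = OneDriveLoader(
    drive_id="YOUR DRIVE ID",
    folder_path="Documents/clients",
    auth_with_token=True,
)
documents = loader.load()
print(len(documents), documents[0].metadata if documents else "nothing found")
```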
519
https://python.langchain.com/docs/integrations/document_loaders/microsoft_powerpoint
ComponentsDocument loadersMicrosoft PowerPointOn this pageMicrosoft PowerPointMicrosoft PowerPoint is a presentation program by Microsoft.This covers how to load Microsoft PowerPoint documents into a document format that we can use downstream.from langchain.document_loaders import UnstructuredPowerPointLoaderloader = UnstructuredPowerPointLoader("example_data/fake-power-point.pptx")data = loader.load()data [Document(page_content='Adding a Bullet Slide\n\nFind the bullet slide layout\n\nUse _TextFrame.text for first bullet\n\nUse _TextFrame.add_paragraph() for subsequent bullets\n\nHere is a lot of text!\n\nHere is some text in a text box!', metadata={'source': 'example_data/fake-power-point.pptx'})]Retain Elements​Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".loader = UnstructuredPowerPointLoader( "example_data/fake-power-point.pptx", mode="elements")data = loader.load()data[0] Document(page_content='Adding a Bullet Slide', lookup_str='', metadata={'source': 'example_data/fake-power-point.pptx'}, lookup_index=0)PreviousMicrosoft OneDriveNextMicrosoft SharePointRetain Elements
520
https://python.langchain.com/docs/integrations/document_loaders/microsoft_sharepoint
ComponentsDocument loadersMicrosoft SharePointOn this pageMicrosoft SharePointMicrosoft SharePoint is a website-based collaboration system that uses workflow applications, "list" databases, and other web parts and security features to empower business teams to work together. It is developed by Microsoft.This notebook covers how to load documents from the SharePoint Document Library. Currently, only docx, doc, and pdf files are supported.PrerequisitesRegister an application with the Microsoft identity platform instructions.When registration finishes, the Azure portal displays the app registration's Overview pane. You see the Application (client) ID. Also called the client ID, this value uniquely identifies your application in the Microsoft identity platform.During the steps you will be following at item 1, you can set the redirect URI as https://login.microsoftonline.com/common/oauth2/nativeclientDuring the steps you will be following at item 1, generate a new password (client_secret) under the Application Secrets section.Follow the instructions at this document to add the following SCOPES (offline_access and Sites.Read.All) to your application.To retrieve files from your Document Library, you will need its ID. To obtain it, you will need the values of Tenant Name, Collection ID, and Subsite ID.To find your Tenant Name, follow the instructions at this document. Once you have it, just remove .onmicrosoft.com from the value and keep the rest as your Tenant Name.To obtain your Collection ID and Subsite ID, you will need your SharePoint site-name. Your SharePoint site URL has the following format https://<tenant-name>.sharepoint.com/sites/<site-name>. The last part of this URL is the site-name.To get the Site Collection ID, open this URL in the browser: https://<tenant>.sharepoint.com/sites/<site-name>/_api/site/id and copy the value of the Edm.Guid property.To get the Subsite ID (or web ID) use: https://<tenant>.sharepoint.com/<site-name>/_api/web/id and copy the value of the Edm.Guid property.The SharePoint site ID has the following format: <tenant-name>.sharepoint.com,<Collection ID>,<subsite ID>. Keep that value to use in the next step.Visit the Graph Explorer Playground to obtain your Document Library ID. The first step is to ensure you are logged in with the account associated with your SharePoint site. Then you need to make a request to https://graph.microsoft.com/v1.0/sites/<SharePoint site ID>/drive and the response will return a payload with a field id that holds your Document Library ID.🧑 Instructions for ingesting your documents from SharePoint Document Library🔑 AuthenticationBy default, the SharePointLoader expects the values of CLIENT_ID and CLIENT_SECRET to be stored as environment variables named O365_CLIENT_ID and O365_CLIENT_SECRET respectively. You could pass those environment variables through a .env file at the root of your application or using the following commands in your script.os.environ['O365_CLIENT_ID'] = "YOUR CLIENT ID"os.environ['O365_CLIENT_SECRET'] = "YOUR CLIENT SECRET"This loader uses an authentication flow called on behalf of a user. It is a 2-step authentication with user consent. When you instantiate the loader, it will print a url that the user must visit to give consent to the app on the required permissions. The user must then visit this url and give consent to the application. Then the user must copy the resulting page url and paste it back on the console. 
The method will then return True if the login attempt was successful.from langchain.document_loaders.sharepoint import SharePointLoaderloader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID")Once the authentication has been done, the loader will store a token (o365_token.txt) in the ~/.credentials/ folder. This token could be used later to authenticate without the copy/paste steps explained earlier. To use this token for authentication, you need to change the auth_with_token parameter to True in the instantiation of the loader.from langchain.document_loaders.sharepoint import SharePointLoaderloader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID", auth_with_token=True)🗂️ Documents loader📑 Loading documents from a Document Library DirectorySharePointLoader can load documents from a specific folder within your Document Library. For instance, suppose you want to load all documents that are stored in the Documents/marketing folder within your Document Library.from langchain.document_loaders.sharepoint import SharePointLoaderloader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID", folder_path="Documents/marketing", auth_with_token=True)documents = loader.load()📑 Loading documents from a list of Documents IDsAnother possibility is to provide a list of object_id values, one for each document you want to load. For that, you will need to query the Microsoft Graph API to find all the document IDs that you are interested in. This link provides a list of endpoints that will be helpful to retrieve the document IDs.For instance, to retrieve information about all objects that are stored in the data/finance/ folder, you need to make a request to: https://graph.microsoft.com/v1.0/drives/<document-library-id>/root:/data/finance:/children. Once you have the list of IDs that you are interested in, you can instantiate the loader with the following parameters.from langchain.document_loaders.sharepoint import SharePointLoaderloader = SharePointLoader(document_library_id="YOUR DOCUMENT LIBRARY ID", object_ids=["ID_1", "ID_2"], auth_with_token=True)documents = loader.load()PreviousMicrosoft PowerPointNextMicrosoft WordPrerequisites🧑 Instructions for ingesting your documents from SharePoint Document Library🔑 Authentication🗂️ Documents loader
521
https://python.langchain.com/docs/integrations/document_loaders/microsoft_word
ComponentsDocument loadersMicrosoft WordOn this pageMicrosoft WordMicrosoft Word is a word processor developed by Microsoft.This covers how to load Word documents into a document format that we can use downstream.Using Docx2txt​Load .docx using Docx2txt into a document.pip install docx2txtfrom langchain.document_loaders import Docx2txtLoaderloader = Docx2txtLoader("example_data/fake.docx")data = loader.load()data [Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.docx'})]Using Unstructured​from langchain.document_loaders import UnstructuredWordDocumentLoaderloader = UnstructuredWordDocumentLoader("example_data/fake.docx")data = loader.load()data [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 'fake.docx'}, lookup_index=0)]Retain Elements​Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".loader = UnstructuredWordDocumentLoader("example_data/fake.docx", mode="elements")data = loader.load()data[0] Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 'fake.docx', 'filename': 'fake.docx', 'category': 'Title'}, lookup_index=0)PreviousMicrosoft SharePointNextModern TreasuryUsing Docx2txtUsing UnstructuredRetain Elements
522
https://python.langchain.com/docs/integrations/document_loaders/modern_treasury
ComponentsDocument loadersModern TreasuryModern TreasuryModern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money.Connect to banks and payment systemsTrack transactions and balances in real-timeAutomate payment operations for scaleThis notebook covers how to load data from the Modern Treasury REST API into a format that can be ingested into LangChain, along with example usage for vectorization.import osfrom langchain.document_loaders import ModernTreasuryLoaderfrom langchain.indexes import VectorstoreIndexCreatorThe Modern Treasury API requires an organization ID and API key, which can be found in the Modern Treasury dashboard within developer settings.This document loader also requires a resource option which defines what data you want to load.Following resources are available:payment_orders Documentationexpected_payments Documentationreturns Documentationincoming_payment_details Documentationcounterparties Documentationinternal_accounts Documentationexternal_accounts Documentationtransactions Documentationledgers Documentationledger_accounts Documentationledger_transactions Documentationevents Documentationinvoices Documentationmodern_treasury_loader = ModernTreasuryLoader("payment_orders")# Create a vectorstore retriever from the loader# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more detailsindex = VectorstoreIndexCreator().from_loaders([modern_treasury_loader])modern_treasury_doc_retriever = index.vectorstore.as_retriever()PreviousMicrosoft WordNextMongoDB
523
https://python.langchain.com/docs/integrations/document_loaders/mongodb
ComponentsDocument loadersMongoDBOn this pageMongoDBMongoDB is a NoSQL, document-oriented database that supports JSON-like documents with a dynamic schema.OverviewThe MongoDB Document Loader returns a list of LangChain Documents from a MongoDB database.The Loader requires the following parameters:MongoDB connection stringMongoDB database nameMongoDB collection name(Optional) Content Filter dictionaryThe output takes the following format:pageContent= Mongo Documentmetadata={'database': '[database_name]', 'collection': '[collection_name]'}Load the Document Loader# add this import for running in jupyter notebookimport nest_asyncionest_asyncio.apply()from langchain.document_loaders.mongodb import MongodbLoaderloader = MongodbLoader(connection_string="mongodb://localhost:27017/", db_name="sample_restaurants", collection_name="restaurants", filter_criteria={"borough": "Bronx", "cuisine": "Bakery" }, )docs = loader.load()len(docs) 25359docs[0] Document(page_content="{'_id': ObjectId('5eb3d668b31de5d588f4292a'), 'address': {'building': '2780', 'coord': [-73.98241999999999, 40.579505], 'street': 'Stillwell Avenue', 'zipcode': '11224'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': datetime.datetime(2014, 6, 10, 0, 0), 'grade': 'A', 'score': 5}, {'date': datetime.datetime(2013, 6, 5, 0, 0), 'grade': 'A', 'score': 7}, {'date': datetime.datetime(2012, 4, 13, 0, 0), 'grade': 'A', 'score': 12}, {'date': datetime.datetime(2011, 10, 12, 0, 0), 'grade': 'A', 'score': 12}], 'name': 'Riviera Caterer', 'restaurant_id': '40356018'}", metadata={'database': 'sample_restaurants', 'collection': 'restaurants'})PreviousModern TreasuryNextNews URLOverviewLoad the Document Loader
524
https://python.langchain.com/docs/integrations/document_loaders/news
ComponentsDocument loadersNews URLNews URLThis covers how to load HTML news articles from a list of URLs into a document format that we can use downstream.from langchain.document_loaders import NewsURLLoaderurls = [ "https://www.bbc.com/news/world-us-canada-66388172", "https://www.bbc.com/news/entertainment-arts-66384971",]Pass in urls to load them into Documentsloader = NewsURLLoader(urls=urls)data = loader.load()print("First article: ", data[0])print("\nSecond article: ", data[1]) First article: page_content='In testimony to the congressional committee examining the 6 January riot, Mrs Powell said she did not review all of the many claims of election fraud she made, telling them that "no reasonable person" would view her claims as fact. Neither she nor her representatives have commented.' metadata={'title': 'Donald Trump indictment: What do we know about the six co-conspirators?', 'link': 'https://www.bbc.com/news/world-us-canada-66388172', 'authors': [], 'language': 'en', 'description': 'Six people accused of helping Mr Trump undermine the election have been described by prosecutors.', 'publish_date': None} Second article: page_content='Ms Williams added: "If there\'s anything that I can do in my power to ensure that dancers or singers or whoever decides to work with her don\'t have to go through that same experience, I\'m going to do that."' metadata={'title': "Lizzo dancers Arianna Davis and Crystal Williams: 'No one speaks out, they are scared'", 'link': 'https://www.bbc.com/news/entertainment-arts-66384971', 'authors': [], 'language': 'en', 'description': 'The US pop star is being sued for sexual harassment and fat-shaming but has yet to comment.', 'publish_date': None}Use nlp=True to run nlp analysis and generate keywords + summaryloader = NewsURLLoader(urls=urls, nlp=True)data = loader.load()print("First article: ", data[0])print("\nSecond article: ", data[1]) First article: page_content='In testimony to the congressional committee examining the 6 January riot, Mrs Powell said she did not review all of the many claims of election fraud she made, telling them that "no reasonable person" would view her claims as fact. Neither she nor her representatives have commented.' 
metadata={'title': 'Donald Trump indictment: What do we know about the six co-conspirators?', 'link': 'https://www.bbc.com/news/world-us-canada-66388172', 'authors': [], 'language': 'en', 'description': 'Six people accused of helping Mr Trump undermine the election have been described by prosecutors.', 'publish_date': None, 'keywords': ['powell', 'know', 'donald', 'trump', 'review', 'indictment', 'telling', 'view', 'reasonable', 'person', 'testimony', 'coconspirators', 'riot', 'representatives', 'claims'], 'summary': 'In testimony to the congressional committee examining the 6 January riot, Mrs Powell said she did not review all of the many claims of election fraud she made, telling them that "no reasonable person" would view her claims as fact.\nNeither she nor her representatives have commented.'} Second article: page_content='Ms Williams added: "If there\'s anything that I can do in my power to ensure that dancers or singers or whoever decides to work with her don\'t have to go through that same experience, I\'m going to do that."' metadata={'title': "Lizzo dancers Arianna Davis and Crystal Williams: 'No one speaks out, they are scared'", 'link': 'https://www.bbc.com/news/entertainment-arts-66384971', 'authors': [], 'language': 'en', 'description': 'The US pop star is being sued for sexual harassment and fat-shaming but has yet to comment.', 'publish_date': None, 'keywords': ['davis', 'lizzo', 'singers', 'experience', 'crystal', 'ensure', 'arianna', 'theres', 'williams', 'power', 'going', 'dancers', 'im', 'speaks', 'work', 'ms', 'scared'], 'summary': 'Ms Williams added: "If there\'s anything that I can do in my power to ensure that dancers or singers or whoever decides to work with her don\'t have to go through that same experience, I\'m going to do that."'}data[0].metadata['keywords'] ['powell', 'know', 'donald', 'trump', 'review', 'indictment', 'telling', 'view', 'reasonable', 'person', 'testimony', 'coconspirators', 'riot', 'representatives', 'claims']data[0].metadata['summary'] 'In testimony to the congressional committee examining the 6 January riot, Mrs Powell said she did not review all of the many claims of election fraud she made, telling them that "no reasonable person" would view her claims as fact.\nNeither she nor her representatives have commented.'PreviousMongoDBNextNotion DB 1/2
525
https://python.langchain.com/docs/integrations/document_loaders/notion
ComponentsDocument loadersNotion DB 1/2On this pageNotion DB 1/2Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.This notebook covers how to load documents from a Notion database dump.In order to get this notion dump, follow these instructions:🧑 Instructions for ingesting your own dataset​Export your dataset from Notion. You can do this by clicking on the three dots in the upper right hand corner and then clicking Export.When exporting, make sure to select the Markdown & CSV format option.This will produce a .zip file in your Downloads folder. Move the .zip file into this repository.Run the following command to unzip the zip file (replace the Export... with your own file name as needed).unzip Export-d3adfe0f-3131-4bf3-8987-a52017fc1bae.zip -d Notion_DBRun the following command to ingest the data.from langchain.document_loaders import NotionDirectoryLoaderloader = NotionDirectoryLoader("Notion_DB")docs = loader.load()PreviousNews URLNextNotion DB 2/2🧑 Instructions for ingesting your own dataset
526
https://python.langchain.com/docs/integrations/document_loaders/notiondb
ComponentsDocument loadersNotion DB 2/2On this pageNotion DB 2/2Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.NotionDBLoader is a Python class for loading content from a Notion database. It retrieves pages from the database, reads their content, and returns a list of Document objects.RequirementsA Notion DatabaseNotion Integration TokenSetup1. Create a Notion Table DatabaseCreate a new table database in Notion. You can add any columns to the database and they will be treated as metadata. For example, you can add the following columns:Title: set Title as the default property.Categories: A Multi-select property to store categories associated with the page.Keywords: A Multi-select property to store keywords associated with the page.Add your content to the body of each page in the database. The NotionDBLoader will extract the content and metadata from these pages.2. Create a Notion IntegrationTo create a Notion Integration, follow these steps:Visit the Notion Developers page and log in with your Notion account.Click on the "+ New integration" button.Give your integration a name and choose the workspace where your database is located.Select the required capabilities; this extension only needs the Read content capability.Click the "Submit" button to create the integration. Once the integration is created, you'll be provided with an Integration Token (API key). Copy this token and keep it safe, as you'll need it to use the NotionDBLoader.3. Connect the Integration to the DatabaseTo connect your integration to the database, follow these steps:Open your database in Notion.Click on the three-dot menu icon in the top right corner of the database view.Click on the "+ New integration" button.Find your integration; you may need to start typing its name in the search box.Click on the "Connect" button to connect the integration to the database.4. Get the Database IDTo get the database ID, follow these steps:Open your database in Notion.Click on the three-dot menu icon in the top right corner of the database view.Select "Copy link" from the menu to copy the database URL to your clipboard.The database ID is the long string of alphanumeric characters found in the URL. It typically looks like this: https://www.notion.so/username/8935f9d140a04f95a872520c4f123456?v=.... In this example, the database ID is 8935f9d140a04f95a872520c4f123456.With the database properly set up and the integration token and database ID in hand, you can now use the NotionDBLoader code to load content and metadata from your Notion database.UsageNotionDBLoader is part of the langchain package's document loaders. You can use it as follows:from getpass import getpassNOTION_TOKEN = getpass()DATABASE_ID = getpass() ········ ········from langchain.document_loaders import NotionDBLoaderloader = NotionDBLoader( integration_token=NOTION_TOKEN, database_id=DATABASE_ID, request_timeout_sec=30, # optional, defaults to 10)docs = loader.load()print(docs) PreviousNotion DB 1/2NextNucliaRequirementsSetup1. Create a Notion Table Database2. Create a Notion Integration3. Connect the Integration to the Database4. Get the Database IDUsage
527
https://python.langchain.com/docs/integrations/document_loaders/nuclia
ComponentsDocument loadersNucliaOn this pageNucliaNuclia automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing.The Nuclia Understanding API supports the processing of unstructured data, including text, web pages, documents, and audio/video content. It extracts all text wherever it is (using speech-to-text or OCR when needed), and it also extracts metadata, embedded files (like images in a PDF), and web links. If machine learning is enabled, it identifies entities, provides a summary of the content, and generates embeddings for all the sentences.SetupTo use the Nuclia Understanding API, you need to have a Nuclia account. You can create one for free at https://nuclia.cloud, and then create a NUA key.#!pip install --upgrade protobuf#!pip install nucliadb-protosimport osos.environ["NUCLIA_ZONE"] = "<YOUR_ZONE>" # e.g. europe-1os.environ["NUCLIA_NUA_KEY"] = "<YOUR_API_KEY>"ExampleTo use the Nuclia document loader, you need to instantiate a NucliaUnderstandingAPI tool:from langchain.tools.nuclia import NucliaUnderstandingAPInua = NucliaUnderstandingAPI(enable_ml=False)from langchain.document_loaders.nuclia import NucliaLoaderloader = NucliaLoader("./interview.mp4", nua)You can now call the loader in a loop until the document has been processed.import timepending = Truewhile pending: time.sleep(15) docs = loader.load() if len(docs) > 0: print(docs[0].page_content) print(docs[0].metadata) pending = False else: print("waiting...")Retrieved informationNuclia returns the following information:file metadataextracted textnested text (like text in an embedded image)paragraph and sentence splitting (defined by the position of their first and last characters, plus start time and end time for a video or audio file)linksa thumbnailembedded filesNote: Generated files (thumbnail, extracted embedded files, etc.) are provided as a token. You can download them with the /processing/download endpoint. Also, at any level, if an attribute exceeds a certain size, it will be put in a downloadable file and will be replaced in the document by a file pointer. This will consist of {"file": {"uri": "JWT_TOKEN"}}. The rule is that if the size of the message is greater than 1000000 characters, the biggest parts will be moved to downloadable files. First, the compression process will target vectors. If that is not enough, it will target large field metadata, and finally it will target extracted text.PreviousNotion DB 2/2NextObsidianSetupExampleRetrieved information
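Because processing time depends on the size and type of the file, you may prefer to bound the polling loop instead of waiting indefinitely. A sketch of the same loop with a retry limit follows; the 15-second interval and 20-attempt cap are arbitrary choices, not values required by the API:

```python
import time

docs = []
for attempt in range(20):  # give up after roughly five minutes
    docs = loader.load()
    if docs:
        break
    print(f"waiting... (attempt {attempt + 1})")
    time.sleep(15)

if docs:
    print(docs[0].page_content)
    print(docs[0].metadata)
else:
    print("Document was not processed within the retry limit.")
```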
528
https://python.langchain.com/docs/integrations/document_loaders/obsidian
ComponentsDocument loadersObsidianObsidianObsidian is a powerful and extensible knowledge base that works on top of your local folder of plain text files.This notebook covers how to load documents from an Obsidian database.Since Obsidian data is stored on disk as a folder of Markdown files, the loader simply takes a path to this directory.Obsidian files also sometimes contain metadata, stored as a YAML block at the top of the file. These values will be added to the document's metadata. (ObsidianLoader can also be passed a collect_metadata=False argument to disable this behavior.)from langchain.document_loaders import ObsidianLoaderloader = ObsidianLoader("<path-to-obsidian>")docs = loader.load()PreviousNucliaNextOpen Document Format (ODT)
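If you do not want the YAML front matter merged into each document's metadata, pass collect_metadata=False as mentioned above. A short sketch (the vault path is a placeholder):

```python
from langchain.document_loaders import ObsidianLoader

# Ignore YAML front matter; keep only the loader's own metadata (such as the source path).
loader = ObsidianLoader("<path-to-obsidian>", collect_metadata=False)
docs = loader.load()
print(docs[0].metadata)
```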
529
https://python.langchain.com/docs/integrations/document_loaders/odt
ComponentsDocument loadersOpen Document Format (ODT)Open Document Format (ODT)The Open Document Format for Office Applications (ODF), also known as OpenDocument, is an open file format for word processing documents, spreadsheets, presentations, and graphics, using ZIP-compressed XML files. It was developed with the aim of providing an open, XML-based file format specification for office applications.The standard is developed and maintained by a technical committee in the Organization for the Advancement of Structured Information Standards (OASIS) consortium. It was based on the Sun Microsystems specification for OpenOffice.org XML, the default format for OpenOffice.org and LibreOffice. It was originally developed for StarOffice "to provide an open standard for office documents."The UnstructuredODTLoader is used to load OpenDocument Text (.odt) files.from langchain.document_loaders import UnstructuredODTLoaderloader = UnstructuredODTLoader("example_data/fake.odt", mode="elements")docs = loader.load()docs[0] Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.odt', 'filename': 'example_data/fake.odt', 'category': 'Title'})PreviousObsidianNextOpen City Data
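The example above uses mode="elements", which returns one Document per detected element. Omitting the mode argument should return the whole file as a single Document, which is often simpler for plain question answering; a minimal sketch using the same sample file:

```python
from langchain.document_loaders import UnstructuredODTLoader

# Default ("single") mode returns the entire file as one Document.
loader = UnstructuredODTLoader("example_data/fake.odt")
docs = loader.load()
print(len(docs), docs[0].metadata)
```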
530
https://python.langchain.com/docs/integrations/document_loaders/open_city_data
ComponentsDocument loadersOpen City DataOpen City DataSocrata provides an API for city open data. For a dataset such as SF crime, go to the API tab in the top right. That provides you with the dataset identifier.Use the dataset identifier to grab specific tables for a given city_id (data.sfgov.org), e.g., vw6y-z8j6 for SF 311 data and tmnf-yvry for SF Police data.pip install sodapyfrom langchain.document_loaders import OpenCityDataLoaderdataset = "vw6y-z8j6" # 311 datadataset = "tmnf-yvry" # crime dataloader = OpenCityDataLoader(city_id="data.sfgov.org", dataset_id=dataset, limit=2000)docs = loader.load() WARNING:root:Requests made without an app_token will be subject to strict throttling limits.eval(docs[0].page_content) {'pdid': '4133422003074', 'incidntnum': '041334220', 'incident_code': '03074', 'category': 'ROBBERY', 'descript': 'ROBBERY, BODILY FORCE', 'dayofweek': 'Monday', 'date': '2004-11-22T00:00:00.000', 'time': '17:50', 'pddistrict': 'INGLESIDE', 'resolution': 'NONE', 'address': 'GENEVA AV / SANTOS ST', 'x': '-122.420084075249', 'y': '37.7083109744362', 'location': {'type': 'Point', 'coordinates': [-122.420084075249, 37.7083109744362]}, ':@computed_region_26cr_cadq': '9', ':@computed_region_rxqg_mtj9': '8', ':@computed_region_bh8s_q3mv': '309'}PreviousOpen Document Format (ODT)NextOrg-mode
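Each record's page_content is the string form of a Python dictionary, which is why eval() works above. A safer way to turn the records back into dictionaries is ast.literal_eval; the field names in this sketch assume the SF Police dataset shown above:

```python
import ast

# Parse each record back into a dict without executing arbitrary code.
records = [ast.literal_eval(doc.page_content) for doc in docs]
print(records[0]["category"], records[0]["date"])
```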
531
https://python.langchain.com/docs/integrations/document_loaders/org_mode
ComponentsDocument loadersOrg-modeOn this pageOrg-modeOrg Mode is a document editing, formatting, and organizing mode, designed for notes, planning, and authoring within the free software text editor Emacs.UnstructuredOrgModeLoaderYou can load data from Org-mode files with UnstructuredOrgModeLoader using the following workflow.from langchain.document_loaders import UnstructuredOrgModeLoaderloader = UnstructuredOrgModeLoader(file_path="example_data/README.org", mode="elements")docs = loader.load()print(docs[0]) page_content='Example Docs' metadata={'source': 'example_data/README.org', 'filename': 'README.org', 'file_directory': 'example_data', 'filetype': 'text/org', 'page_number': 1, 'category': 'Title'}PreviousOpen City DataNextPandas DataFrameUnstructuredOrgModeLoader
532
https://python.langchain.com/docs/integrations/document_loaders/pandas_dataframe
ComponentsDocument loadersPandas DataFramePandas DataFrameThis notebook goes over how to load data from a pandas DataFrame.#!pip install pandasimport pandas as pddf = pd.read_csv("example_data/mlb_teams_2012.csv")df.head()<div><style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; }</style><table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Team</th> <th>"Payroll (millions)"</th> <th>"Wins"</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>Nationals</td> <td>81.34</td> <td>98</td> </tr> <tr> <th>1</th> <td>Reds</td> <td>82.20</td> <td>97</td> </tr> <tr> <th>2</th> <td>Yankees</td> <td>197.96</td> <td>95</td> </tr> <tr> <th>3</th> <td>Giants</td> <td>117.62</td> <td>94</td> </tr> <tr> <th>4</th> <td>Braves</td> <td>83.31</td> <td>94</td> </tr> </tbody></table></div>from langchain.document_loaders import DataFrameLoaderloader = DataFrameLoader(df, page_content_column="Team")loader.load() [Document(page_content='Nationals', metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}), Document(page_content='Reds', metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}), Document(page_content='Yankees', metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}), Document(page_content='Giants', metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}), Document(page_content='Braves', metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}), Document(page_content='Athletics', metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94}), Document(page_content='Rangers', metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93}), Document(page_content='Orioles', metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93}), Document(page_content='Rays', metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}), Document(page_content='Angels', metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89}), Document(page_content='Tigers', metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88}), Document(page_content='Cardinals', metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88}), Document(page_content='Dodgers', metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86}), Document(page_content='White Sox', metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85}), Document(page_content='Brewers', metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83}), Document(page_content='Phillies', metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81}), Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}), Document(page_content='Pirates', metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}), Document(page_content='Padres', metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}), Document(page_content='Mariners', metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}), Document(page_content='Mets', metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}), Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}), Document(page_content='Royals', metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72}), Document(page_content='Marlins', metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69}), Document(page_content='Red Sox', metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69}), Document(page_content='Indians', metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68}), Document(page_content='Twins', metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66}), 
Document(page_content='Rockies', metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64}), Document(page_content='Cubs', metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61}), Document(page_content='Astros', metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55})]# Use lazy load for larger table, which won't read the full table into memoryfor i in loader.lazy_load(): print(i) page_content='Nationals' metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98} page_content='Reds' metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97} page_content='Yankees' metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95} page_content='Giants' metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94} page_content='Braves' metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94} page_content='Athletics' metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94} page_content='Rangers' metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93} page_content='Orioles' metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93} page_content='Rays' metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90} page_content='Angels' metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89} page_content='Tigers' metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88} page_content='Cardinals' metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88} page_content='Dodgers' metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86} page_content='White Sox' metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85} page_content='Brewers' metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83} page_content='Phillies' metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81} page_content='Diamondbacks' metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81} page_content='Pirates' metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79} page_content='Padres' metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76} page_content='Mariners' metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75} page_content='Mets' metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74} page_content='Blue Jays' metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73} page_content='Royals' metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72} page_content='Marlins' metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69} page_content='Red Sox' metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69} page_content='Indians' metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68} page_content='Twins' metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66} page_content='Rockies' metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64} page_content='Cubs' metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61} page_content='Astros' metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55}PreviousOrg-modeNextAmazon Textract
533
https://python.langchain.com/docs/integrations/document_loaders/pdf-amazonTextractPDFLoader
ComponentsDocument loadersAmazon TextractOn this pageAmazon TextractAmazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents. It goes beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables. Today, many companies manually extract data from scanned documents such as PDFs, images, tables, and forms, or through simple OCR software that requires manual configuration (which often must be updated when the form changes). To overcome these manual and expensive processes, Textract uses ML to read and process any type of document, accurately extracting text, handwriting, tables, and other data with no manual effort. You can quickly automate document processing and act on the information extracted, whether you’re automating loans processing or extracting information from invoices and receipts. Textract can extract the data in minutes instead of hours or days.This sample demonstrates the use of Amazon Textract in combination with LangChain as a DocumentLoader.Textract supports PDF, TIFF, PNG and JPEG format.Check https://docs.aws.amazon.com/textract/latest/dg/limits-document.html for supported document sizes, languages and characters.pip install langchain boto3 openai tiktoken python-dotenv -q [notice] A new release of pip is available: 23.2 -> 23.2.1 [notice] To update, run: python -m pip install --upgrade pipSample 1​The first example uses a local file, which internally will be send to Amazon Textract sync API DetectDocumentText. Local files or URL endpoints like HTTP:// are limited to one page documents for Textract. Multi-page documents have to reside on S3. This sample file is a jpeg.from langchain.document_loaders import AmazonTextractPDFLoaderloader = AmazonTextractPDFLoader("example_data/alejandro_rosalez_sample-small.jpeg")documents = loader.load()Output from the filedocuments [Document(page_content='Patient Information First Name: ALEJANDRO Last Name: ROSALEZ Date of Birth: 10/10/1982 Sex: M Marital Status: MARRIED Email Address: Address: 123 ANY STREET City: ANYTOWN State: CA Zip Code: 12345 Phone: 646-555-0111 Emergency Contact 1: First Name: CARLOS Last Name: SALAZAR Phone: 212-555-0150 Relationship to Patient: BROTHER Emergency Contact 2: First Name: JANE Last Name: DOE Phone: 650-555-0123 Relationship FRIEND to Patient: Did you feel fever or feverish lately? Yes No Are you having shortness of breath? Yes No Do you have a cough? Yes No Did you experience loss of taste or smell? Yes No Where you in contact with any confirmed COVID-19 positive patients? Yes No Did you travel in the past 14 days to any regions affected by COVID-19? Yes No Patient Information First Name: ALEJANDRO Last Name: ROSALEZ Date of Birth: 10/10/1982 Sex: M Marital Status: MARRIED Email Address: Address: 123 ANY STREET City: ANYTOWN State: CA Zip Code: 12345 Phone: 646-555-0111 Emergency Contact 1: First Name: CARLOS Last Name: SALAZAR Phone: 212-555-0150 Relationship to Patient: BROTHER Emergency Contact 2: First Name: JANE Last Name: DOE Phone: 650-555-0123 Relationship FRIEND to Patient: Did you feel fever or feverish lately? Yes No Are you having shortness of breath? Yes No Do you have a cough? Yes No Did you experience loss of taste or smell? Yes No Where you in contact with any confirmed COVID-19 positive patients? Yes No Did you travel in the past 14 days to any regions affected by COVID-19? 
Yes No ', metadata={'source': 'example_data/alejandro_rosalez_sample-small.jpeg', 'page': 1})]Sample 2​The next sample loads a file from an HTTPS endpoint. It has to be single page, as Amazon Textract requires all multi-page documents to be stored on S3.from langchain.document_loaders import AmazonTextractPDFLoaderloader = AmazonTextractPDFLoader("https://amazon-textract-public-content.s3.us-east-2.amazonaws.com/langchain/alejandro_rosalez_sample_1.jpg")documents = loader.load()documents [Document(page_content='Patient Information First Name: ALEJANDRO Last Name: ROSALEZ Date of Birth: 10/10/1982 Sex: M Marital Status: MARRIED Email Address: Address: 123 ANY STREET City: ANYTOWN State: CA Zip Code: 12345 Phone: 646-555-0111 Emergency Contact 1: First Name: CARLOS Last Name: SALAZAR Phone: 212-555-0150 Relationship to Patient: BROTHER Emergency Contact 2: First Name: JANE Last Name: DOE Phone: 650-555-0123 Relationship FRIEND to Patient: Did you feel fever or feverish lately? Yes No Are you having shortness of breath? Yes No Do you have a cough? Yes No Did you experience loss of taste or smell? Yes No Where you in contact with any confirmed COVID-19 positive patients? Yes No Did you travel in the past 14 days to any regions affected by COVID-19? Yes No Patient Information First Name: ALEJANDRO Last Name: ROSALEZ Date of Birth: 10/10/1982 Sex: M Marital Status: MARRIED Email Address: Address: 123 ANY STREET City: ANYTOWN State: CA Zip Code: 12345 Phone: 646-555-0111 Emergency Contact 1: First Name: CARLOS Last Name: SALAZAR Phone: 212-555-0150 Relationship to Patient: BROTHER Emergency Contact 2: First Name: JANE Last Name: DOE Phone: 650-555-0123 Relationship FRIEND to Patient: Did you feel fever or feverish lately? Yes No Are you having shortness of breath? Yes No Do you have a cough? Yes No Did you experience loss of taste or smell? Yes No Where you in contact with any confirmed COVID-19 positive patients? Yes No Did you travel in the past 14 days to any regions affected by COVID-19? Yes No ', metadata={'source': 'example_data/alejandro_rosalez_sample-small.jpeg', 'page': 1})]Sample 3​Processing a multi-page document requires the document to be on S3. The sample document resides in a bucket in us-east-2 and Textract needs to be called in that same region to be successful, so we set the region_name on the client and pass that in to the loader to ensure Textract is called from us-east-2. You could also to have your notebook running in us-east-2, setting the AWS_DEFAULT_REGION set to us-east-2 or when running in a different environment, pass in a boto3 Textract client with that region name like in the cell below.import boto3textract_client = boto3.client('textract', region_name='us-east-2')file_path = "s3://amazon-textract-public-content/langchain/layout-parser-paper.pdf"loader = AmazonTextractPDFLoader(file_path, client=textract_client)documents = loader.load()Now getting the number of pages to validate the response (printing out the full response would be quite long...). We expect 16 pages.len(documents) 16Using the AmazonTextractPDFLoader in an LangChain chain (e. g. OpenAI)​The AmazonTextractPDFLoader can be used in a chain the same way the other loaders are used. 
Textract itself does have a Query feature, which offers similar functionality to the QA chain in this sample, which is worth checking out as well.# You can store your OPENAI_API_KEY in a .env file as well# import os # from dotenv import load_dotenv# load_dotenv()# Or set the OpenAI key in the environment directlyimport os os.environ["OPENAI_API_KEY"] = "your-OpenAI-API-key"from langchain.llms import OpenAIfrom langchain.chains.question_answering import load_qa_chainchain = load_qa_chain(llm=OpenAI(), chain_type="map_reduce")query = ["Who are the autors?"]chain.run(input_documents=documents, question=query) ' The authors are Zejiang Shen, Ruochen Zhang, Melissa Dell, Benjamin Charles Germain Lee, Jacob Carlson, Weining Li, Gardner, M., Grus, J., Neumann, M., Tafjord, O., Dasigi, P., Liu, N., Peters, M., Schmitz, M., Zettlemoyer, L., Lukasz Garncarek, Powalski, R., Stanislawek, T., Topolski, B., Halama, P., Gralinski, F., Graves, A., Fernández, S., Gomez, F., Schmidhuber, J., Harley, A.W., Ufkes, A., Derpanis, K.G., He, K., Gkioxari, G., Dollár, P., Girshick, R., He, K., Zhang, X., Ren, S., Sun, J., Kay, A., Lamiroy, B., Lopresti, D., Mears, J., Jakeway, E., Ferriter, M., Adams, C., Yarasavage, N., Thomas, D., Zwaard, K., Li, M., Cui, L., Huang,'PreviousPandas DataFrameNextPolars DataFrameSample 1Sample 2Sample 3Using the AmazonTextractPDFLoader in an LangChain chain (e. g. OpenAI)
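For longer documents, it can be cheaper to index the Textract output in a vector store and send only the most relevant chunks to the LLM, rather than mapping over every page. The following is a sketch of that variant, not part of the Textract loader itself; the chunk sizes and model choice are arbitrary:

```python
from langchain.chains import RetrievalQA
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

# Chunk the Textract pages, embed them, and answer questions over the top matches only.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(documents)
docsearch = Chroma.from_documents(chunks, OpenAIEmbeddings())
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever()
)
print(qa.run("Who are the authors?"))
```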
534
https://python.langchain.com/docs/integrations/document_loaders/polars_dataframe
ComponentsDocument loadersPolars DataFramePolars DataFrameThis notebook goes over how to load data from a polars DataFrame.#!pip install polarsimport polars as pldf = pl.read_csv("example_data/mlb_teams_2012.csv")df.head()<div><style>.dataframe > thead > tr > th,.dataframe > tbody > tr > td { text-align: right;}</style><small>shape: (5, 3)</small><table border="1" class="dataframe"><thead><tr><th>Team</th><th> &quot;Payroll (millions)&quot;</th><th> &quot;Wins&quot;</th></tr><tr><td>str</td><td>f64</td><td>i64</td></tr></thead><tbody><tr><td>&quot;Nationals&quot;</td><td>81.34</td><td>98</td></tr><tr><td>&quot;Reds&quot;</td><td>82.2</td><td>97</td></tr><tr><td>&quot;Yankees&quot;</td><td>197.96</td><td>95</td></tr><tr><td>&quot;Giants&quot;</td><td>117.62</td><td>94</td></tr><tr><td>&quot;Braves&quot;</td><td>83.31</td><td>94</td></tr></tbody></table></div>from langchain.document_loaders import PolarsDataFrameLoaderloader = PolarsDataFrameLoader(df, page_content_column="Team")loader.load() [Document(page_content='Nationals', metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}), Document(page_content='Reds', metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}), Document(page_content='Yankees', metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}), Document(page_content='Giants', metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}), Document(page_content='Braves', metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}), Document(page_content='Athletics', metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94}), Document(page_content='Rangers', metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93}), Document(page_content='Orioles', metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93}), Document(page_content='Rays', metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}), Document(page_content='Angels', metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89}), Document(page_content='Tigers', metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88}), Document(page_content='Cardinals', metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88}), Document(page_content='Dodgers', metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86}), Document(page_content='White Sox', metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85}), Document(page_content='Brewers', metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83}), Document(page_content='Phillies', metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81}), Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}), Document(page_content='Pirates', metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}), Document(page_content='Padres', metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}), Document(page_content='Mariners', metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}), Document(page_content='Mets', metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}), Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}), Document(page_content='Royals', metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72}), Document(page_content='Marlins', metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69}), Document(page_content='Red Sox', metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69}), Document(page_content='Indians', metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68}), Document(page_content='Twins', metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66}), Document(page_content='Rockies', metadata={' "Payroll 
(millions)"': 78.06, ' "Wins"': 64}), Document(page_content='Cubs', metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61}), Document(page_content='Astros', metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55})]# Use lazy load for larger table, which won't read the full table into memoryfor i in loader.lazy_load(): print(i) page_content='Nationals' metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98} page_content='Reds' metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97} page_content='Yankees' metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95} page_content='Giants' metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94} page_content='Braves' metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94} page_content='Athletics' metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94} page_content='Rangers' metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93} page_content='Orioles' metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93} page_content='Rays' metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90} page_content='Angels' metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89} page_content='Tigers' metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88} page_content='Cardinals' metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88} page_content='Dodgers' metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86} page_content='White Sox' metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85} page_content='Brewers' metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83} page_content='Phillies' metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81} page_content='Diamondbacks' metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81} page_content='Pirates' metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79} page_content='Padres' metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76} page_content='Mariners' metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75} page_content='Mets' metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74} page_content='Blue Jays' metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73} page_content='Royals' metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72} page_content='Marlins' metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69} page_content='Red Sox' metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69} page_content='Indians' metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68} page_content='Twins' metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66} page_content='Rockies' metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64} page_content='Cubs' metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61} page_content='Astros' metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55}PreviousAmazon TextractNextPsychic
535
https://python.langchain.com/docs/integrations/document_loaders/psychic
ComponentsDocument loadersPsychicOn this pagePsychicThis notebook covers how to load documents from Psychic. See here for more details.Prerequisites​Follow the Quick Start section in this documentLog into the Psychic dashboard and get your secret keyInstall the frontend react library into your web app and have a user authenticate a connection. The connection will be created using the connection id that you specify.Loading documents​Use the PsychicLoader class to load in documents from a connection. Each connection has a connector id (corresponding to the SaaS app that was connected) and a connection id (which you passed in to the frontend library).# Uncomment this to install psychicapi if you don't already have it installedpoetry run pip -q install psychicapi [notice] A new release of pip is available: 23.0.1 -> 23.1.2 [notice] To update, run: pip install --upgrade pipfrom langchain.document_loaders import PsychicLoaderfrom psychicapi import ConnectorId# Create a document loader for google drive. We can also load from other connectors by setting the connector_id to the appropriate value e.g. ConnectorId.notion.value# This loader uses our test credentialsgoogle_drive_loader = PsychicLoader( api_key="7ddb61c1-8b6a-4d31-a58e-30d1c9ea480e", connector_id=ConnectorId.gdrive.value, connection_id="google-test",)documents = google_drive_loader.load()Converting the docs to embeddings​We can now convert these documents into embeddings and store them in a vector database like Chromafrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Chromafrom langchain.text_splitter import CharacterTextSplitterfrom langchain.llms import OpenAIfrom langchain.chains import RetrievalQAWithSourcesChaintext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()docsearch = Chroma.from_documents(texts, embeddings)chain = RetrievalQAWithSourcesChain.from_chain_type( OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever())chain({"question": "what is psychic?"}, return_only_outputs=True)PreviousPolars DataFrameNextPubMedPrerequisitesLoading documentsConverting the docs to embeddings
536
https://python.langchain.com/docs/integrations/document_loaders/pubmed
ComponentsDocument loadersPubMedPubMedPubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.from langchain.document_loaders import PubMedLoaderloader = PubMedLoader("chatgpt")docs = loader.load()len(docs) 3docs[1].metadata {'uid': '37548997', 'Title': 'Performance of ChatGPT on the Situational Judgement Test-A Professional Dilemmas-Based Examination for Doctors in the United Kingdom.', 'Published': '2023-08-07', 'Copyright Information': '©Robin J Borchert, Charlotte R Hickman, Jack Pepys, Timothy J Sadler. Originally published in JMIR Medical Education (https://mededu.jmir.org), 07.08.2023.'}docs[1].page_content "BACKGROUND: ChatGPT is a large language model that has performed well on professional examinations in the fields of medicine, law, and business. However, it is unclear how ChatGPT would perform on an examination assessing professionalism and situational judgement for doctors.\nOBJECTIVE: We evaluated the performance of ChatGPT on the Situational Judgement Test (SJT): a national examination taken by all final-year medical students in the United Kingdom. This examination is designed to assess attributes such as communication, teamwork, patient safety, prioritization skills, professionalism, and ethics.\nMETHODS: All questions from the UK Foundation Programme Office's (UKFPO's) 2023 SJT practice examination were inputted into ChatGPT. For each question, ChatGPT's answers and rationales were recorded and assessed on the basis of the official UK Foundation Programme Office scoring template. Questions were categorized into domains of Good Medical Practice on the basis of the domains referenced in the rationales provided in the scoring sheet. Questions without clear domain links were screened by reviewers and assigned one or multiple domains. ChatGPT's overall performance, as well as its performance across the domains of Good Medical Practice, was evaluated.\nRESULTS: Overall, ChatGPT performed well, scoring 76% on the SJT but scoring full marks on only a few questions (9%), which may reflect possible flaws in ChatGPT's situational judgement or inconsistencies in the reasoning across questions (or both) in the examination itself. ChatGPT demonstrated consistent performance across the 4 outlined domains in Good Medical Practice for doctors.\nCONCLUSIONS: Further research is needed to understand the potential applications of large language models, such as ChatGPT, in medical education for standardizing questions and providing consistent rationales for examinations assessing professionalism and ethics."PreviousPsychicNextPySpark
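The loader returned three documents above, which matches its default result limit. If your version of PubMedLoader exposes the load_max_docs argument (recent releases do, but treat this as an assumption), you can request more citations:

```python
from langchain.document_loaders import PubMedLoader

# Ask for up to 10 citations matching the query instead of the default of 3.
loader = PubMedLoader("chatgpt", load_max_docs=10)
docs = loader.load()
print(len(docs))
for doc in docs[:3]:
    print(doc.metadata["Title"])
```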
537
https://python.langchain.com/docs/integrations/document_loaders/pyspark_dataframe
ComponentsDocument loadersPySparkPySparkThis notebook goes over how to load data from a PySpark DataFrame.#!pip install pysparkfrom pyspark.sql import SparkSessionspark = SparkSession.builder.getOrCreate() Setting default log level to "WARN". To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 23/05/31 14:08:33 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicabledf = spark.read.csv("example_data/mlb_teams_2012.csv", header=True)from langchain.document_loaders import PySparkDataFrameLoaderloader = PySparkDataFrameLoader(spark, df, page_content_column="Team")loader.load() [Stage 8:> (0 + 1) / 1] [Document(page_content='Nationals', metadata={' "Payroll (millions)"': ' 81.34', ' "Wins"': ' 98'}), Document(page_content='Reds', metadata={' "Payroll (millions)"': ' 82.20', ' "Wins"': ' 97'}), Document(page_content='Yankees', metadata={' "Payroll (millions)"': ' 197.96', ' "Wins"': ' 95'}), Document(page_content='Giants', metadata={' "Payroll (millions)"': ' 117.62', ' "Wins"': ' 94'}), Document(page_content='Braves', metadata={' "Payroll (millions)"': ' 83.31', ' "Wins"': ' 94'}), Document(page_content='Athletics', metadata={' "Payroll (millions)"': ' 55.37', ' "Wins"': ' 94'}), Document(page_content='Rangers', metadata={' "Payroll (millions)"': ' 120.51', ' "Wins"': ' 93'}), Document(page_content='Orioles', metadata={' "Payroll (millions)"': ' 81.43', ' "Wins"': ' 93'}), Document(page_content='Rays', metadata={' "Payroll (millions)"': ' 64.17', ' "Wins"': ' 90'}), Document(page_content='Angels', metadata={' "Payroll (millions)"': ' 154.49', ' "Wins"': ' 89'}), Document(page_content='Tigers', metadata={' "Payroll (millions)"': ' 132.30', ' "Wins"': ' 88'}), Document(page_content='Cardinals', metadata={' "Payroll (millions)"': ' 110.30', ' "Wins"': ' 88'}), Document(page_content='Dodgers', metadata={' "Payroll (millions)"': ' 95.14', ' "Wins"': ' 86'}), Document(page_content='White Sox', metadata={' "Payroll (millions)"': ' 96.92', ' "Wins"': ' 85'}), Document(page_content='Brewers', metadata={' "Payroll (millions)"': ' 97.65', ' "Wins"': ' 83'}), Document(page_content='Phillies', metadata={' "Payroll (millions)"': ' 174.54', ' "Wins"': ' 81'}), Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': ' 74.28', ' "Wins"': ' 81'}), Document(page_content='Pirates', metadata={' "Payroll (millions)"': ' 63.43', ' "Wins"': ' 79'}), Document(page_content='Padres', metadata={' "Payroll (millions)"': ' 55.24', ' "Wins"': ' 76'}), Document(page_content='Mariners', metadata={' "Payroll (millions)"': ' 81.97', ' "Wins"': ' 75'}), Document(page_content='Mets', metadata={' "Payroll (millions)"': ' 93.35', ' "Wins"': ' 74'}), Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': ' 75.48', ' "Wins"': ' 73'}), Document(page_content='Royals', metadata={' "Payroll (millions)"': ' 60.91', ' "Wins"': ' 72'}), Document(page_content='Marlins', metadata={' "Payroll (millions)"': ' 118.07', ' "Wins"': ' 69'}), Document(page_content='Red Sox', metadata={' "Payroll (millions)"': ' 173.18', ' "Wins"': ' 69'}), Document(page_content='Indians', metadata={' "Payroll (millions)"': ' 78.43', ' "Wins"': ' 68'}), Document(page_content='Twins', metadata={' "Payroll (millions)"': ' 94.08', ' "Wins"': ' 66'}), Document(page_content='Rockies', metadata={' "Payroll (millions)"': ' 78.06', ' "Wins"': ' 64'}), Document(page_content='Cubs', metadata={' "Payroll (millions)"': ' 88.19', ' "Wins"': 
' 61'}), Document(page_content='Astros', metadata={' "Payroll (millions)"': ' 60.65', ' "Wins"': ' 55'})]PreviousPubMedNextReadTheDocs Documentation
538
https://python.langchain.com/docs/integrations/document_loaders/readthedocs_documentation
ComponentsDocument loadersReadTheDocs DocumentationReadTheDocs DocumentationRead the Docs is an open-source platform for hosting free software documentation. It generates documentation written with the Sphinx documentation generator.This notebook covers how to load content from HTML that was generated as part of a Read-The-Docs build.For an example of this in the wild, see here.This assumes that the HTML has already been scraped into a folder. This can be done by uncommenting and running the following commands:#!pip install beautifulsoup4#!wget -r -A.html -P rtdocs https://python.langchain.com/en/latest/from langchain.document_loaders import ReadTheDocsLoaderloader = ReadTheDocsLoader("rtdocs", features="html.parser")docs = loader.load()PreviousPySparkNextRecursive URL
539
https://python.langchain.com/docs/integrations/document_loaders/recursive_url
ComponentsDocument loadersRecursive URLRecursive URLWe may want to load all URLs under a root directory.For example, let's look at the Python 3.9 documentation.This has many interesting child pages that we may want to read in bulk.Of course, the WebBaseLoader can load a list of pages. But the challenge is traversing the tree of child pages and actually assembling that list!We do this using the RecursiveUrlLoader.This also gives us the flexibility to exclude some children, customize the extractor, and more.Parametersurl: str, the target url to crawl.exclude_dirs: Optional[str], webpage directories to exclude.use_async: Optional[bool], whether to use async requests; async requests are usually faster for large tasks. However, async will disable the lazy loading feature (the function still works, but it is not lazy). By default, it is set to False.extractor: Optional[Callable[[str], str]], a function to extract the text of the document from the webpage. By default, it returns the page as it is; it is recommended to use tools like goose3 and beautifulsoup to extract the text.max_depth: Optional[int] = None, the maximum depth to crawl. By default, it is set to 2. If you need to crawl the whole website, set it to a number that is large enough.timeout: Optional[int] = None, the timeout for each request, in seconds. By default, it is set to 10.prevent_outside: Optional[bool] = None, whether to prevent crawling outside the root url. By default, it is set to True.from langchain.document_loaders.recursive_url_loader import RecursiveUrlLoaderLet's try a simple example.from bs4 import BeautifulSoup as Soupurl = "https://docs.python.org/3.9/"loader = RecursiveUrlLoader(url=url, max_depth=2, extractor=lambda x: Soup(x, "html.parser").text)docs = loader.load()docs[0].page_content[:50] '\n\n\n\n\nPython Frequently Asked Questions — Python 3.'docs[-1].metadata {'source': 'https://docs.python.org/3.9/library/index.html', 'title': 'The Python Standard Library — Python 3.9.17 documentation', 'language': None}However, since it's hard to perform a perfect filter, you may still see some irrelevant results. You can filter the returned documents yourself if needed. Most of the time, the returned results are good enough.Testing on LangChain docs.url = "https://js.langchain.com/docs/modules/memory/integrations/"loader = RecursiveUrlLoader(url=url)docs = loader.load()len(docs) 8PreviousReadTheDocs DocumentationNextReddit
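The parameters listed above can be combined. The sketch below crawls the same Python 3.9 docs while excluding one subtree and bounding each request; it assumes exclude_dirs accepts a list of URL prefixes, and the excluded path is only an illustration:

```python
from bs4 import BeautifulSoup as Soup
from langchain.document_loaders.recursive_url_loader import RecursiveUrlLoader

loader = RecursiveUrlLoader(
    url="https://docs.python.org/3.9/",
    max_depth=2,
    timeout=10,
    prevent_outside=True,  # stay under the root URL
    exclude_dirs=["https://docs.python.org/3.9/faq"],  # skip this subtree
    extractor=lambda x: Soup(x, "html.parser").text,
)
docs = loader.load()
print(len(docs))
```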
540
https://python.langchain.com/docs/integrations/document_loaders/reddit
ComponentsDocument loadersRedditRedditReddit is an American social news aggregation, content rating, and discussion website.This loader fetches the text from the Posts of Subreddits or Reddit users, using the praw Python package.Make a Reddit Application and initialize the loader with with your Reddit API credentials.from langchain.document_loaders import RedditPostsLoader# !pip install praw# load using 'subreddit' modeloader = RedditPostsLoader( client_id="YOUR CLIENT ID", client_secret="YOUR CLIENT SECRET", user_agent="extractor by u/Master_Ocelot8179", categories=["new", "hot"], # List of categories to load posts from mode="subreddit", search_queries=[ "investing", "wallstreetbets", ], # List of subreddits to load posts from number_posts=20, # Default value is 10)# # or load using 'username' mode# loader = RedditPostsLoader(# client_id="YOUR CLIENT ID",# client_secret="YOUR CLIENT SECRET",# user_agent="extractor by u/Master_Ocelot8179",# categories=['new', 'hot'],# mode = 'username',# search_queries=['ga3far', 'Master_Ocelot8179'], # List of usernames to load posts from# number_posts=20# )# Note: Categories can be only of following value - "controversial" "hot" "new" "rising" "top"documents = loader.load()documents[:5] [Document(page_content='Hello, I am not looking for investment advice. I will apply my own due diligence. However, I am interested if anyone knows as a UK resident how fees and exchange rate differences would impact performance?\n\nI am planning to create a pie of index funds (perhaps UK, US, europe) or find a fund with a good track record of long term growth at low rates. \n\nDoes anyone have any ideas?', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Long term retirement funds fees/exchange rate query', 'post_score': 1, 'post_id': '130pa6m', 'post_url': 'https://www.reddit.com/r/investing/comments/130pa6m/long_term_retirement_funds_feesexchange_rate_query/', 'post_author': Redditor(name='Badmanshiz')}), Document(page_content='I much prefer the Roth IRA and would rather rollover my 401k to that every year instead of keeping it in the limited 401k options. But if I rollover, will I be able to continue contributing to my 401k? Or will that close my account? I realize that there are tax implications of doing this but I still think it is the better option.', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Is it possible to rollover my 401k every year?', 'post_score': 3, 'post_id': '130ja0h', 'post_url': 'https://www.reddit.com/r/investing/comments/130ja0h/is_it_possible_to_rollover_my_401k_every_year/', 'post_author': Redditor(name='AnCap_Catholic')}), Document(page_content='Have a general question? Want to offer some commentary on markets? Maybe you would just like to throw out a neat fact that doesn\'t warrant a self post? Feel free to post here! \n\nIf your question is "I have $10,000, what do I do?" or other "advice for my personal situation" questions, you should include relevant information, such as the following:\n\n* How old are you? What country do you live in? \n* Are you employed/making income? How much? \n* What are your objectives with this money? (Buy a house? Retirement savings?) \n* What is your time horizon? Do you need this money next month? Next 20yrs? \n* What is your risk tolerance? (Do you mind risking it at blackjack or do you need to know its 100% safe?) \n* What are you current holdings? (Do you already have exposure to specific funds and sectors? Any other assets?) 
\n* Any big debts (include interest rate) or expenses? \n* And any other relevant financial information will be useful to give you a proper answer. \n\nPlease consider consulting our FAQ first - https://www.reddit.com/r/investing/wiki/faq\nAnd our [side bar](https://www.reddit.com/r/investing/about/sidebar) also has useful resources. \n\nIf you are new to investing - please refer to Wiki - [Getting Started](https://www.reddit.com/r/investing/wiki/index/gettingstarted/)\n\nThe reading list in the wiki has a list of books ranging from light reading to advanced topics depending on your knowledge level. Link here - [Reading List](https://www.reddit.com/r/investing/wiki/readinglist)\n\nCheck the resources in the sidebar.\n\nBe aware that these answers are just opinions of Redditors and should be used as a starting point for your research. You should strongly consider seeing a registered investment adviser if you need professional support before making any financial decisions!', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Daily General Discussion and Advice Thread - April 27, 2023', 'post_score': 5, 'post_id': '130eszz', 'post_url': 'https://www.reddit.com/r/investing/comments/130eszz/daily_general_discussion_and_advice_thread_april/', 'post_author': Redditor(name='AutoModerator')}), Document(page_content="Based on recent news about salt battery advancements and the overall issues of lithium, I was wondering what would be feasible ways to invest into non-lithium based battery technologies? CATL is of course a choice, but the selection of brokers I currently have in my disposal don't provide HK stocks at all.", metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Investing in non-lithium battery technologies?', 'post_score': 2, 'post_id': '130d6qp', 'post_url': 'https://www.reddit.com/r/investing/comments/130d6qp/investing_in_nonlithium_battery_technologies/', 'post_author': Redditor(name='-manabreak')}), Document(page_content='Hello everyone,\n\nI would really like to invest in an ETF that follows spy or another big index, as I think this form of investment suits me best. \n\nThe problem is, that I live in Denmark where ETFs and funds are taxed annually on unrealised gains at quite a steep rate. This means that an ETF growing say 10% per year will only grow about 6%, which really ruins the long term effects of compounding interest.\n\nHowever stocks are only taxed on realised gains which is why they look more interesting to hold long term.\n\nI do not like the lack of diversification this brings, as I am looking to spend tonnes of time picking the right long term stocks.\n\nIt would be ideal to find a few stocks that over the long term somewhat follows the indexes. Does anyone have suggestions?\n\nI have looked at Nasdaq Inc. which quite closely follows Nasdaq 100. \n\nI really appreciate any help.', metadata={'post_subreddit': 'r/investing', 'post_category': 'new', 'post_title': 'Stocks that track an index', 'post_score': 7, 'post_id': '130auvj', 'post_url': 'https://www.reddit.com/r/investing/comments/130auvj/stocks_that_track_an_index/', 'post_author': Redditor(name='LeAlbertP')})]PreviousRecursive URLNextRoam
541
https://python.langchain.com/docs/integrations/document_loaders/roam
ComponentsDocument loadersRoamOn this pageRoamROAM is a note-taking tool for networked thought, designed to create a personal knowledge base.This notebook covers how to load documents from a Roam database. This takes a lot of inspiration from the example repo here.🧑 Instructions for ingesting your own dataset​Export your dataset from Roam Research. You can do this by clicking on the three dots in the upper right hand corner and then clicking Export.When exporting, make sure to select the Markdown & CSV format option.This will produce a .zip file in your Downloads folder. Move the .zip file into this repository.Run the following command to unzip the zip file (replace the Export... with your own file name as needed).unzip Roam-Export-1675782732639.zip -d Roam_DBfrom langchain.document_loaders import RoamLoaderloader = RoamLoader("Roam_DB")docs = loader.load()PreviousRedditNextRockset🧑 Instructions for ingesting your own dataset
542
https://python.langchain.com/docs/integrations/document_loaders/rockset
ComponentsDocument loadersRocksetOn this pageRocksetRockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups).This notebook demonstrates how to use Rockset as a document loader in langchain. To get started, make sure you have a Rockset account and an API key available.Setting up the environment​Go to the Rockset console and get an API key. Find your API region from the API reference. For the purpose of this notebook, we will assume you're using Rockset from Oregon(us-west-2).Set your the environment variable ROCKSET_API_KEY.Install the Rockset python client, which will be used by langchain to interact with the Rockset database.$ pip3 install rocksetLoading DocumentsThe Rockset integration with LangChain allows you to load documents from Rockset collections with SQL queries. In order to do this you must construct a RocksetLoader object. Here is an example snippet that initializes a RocksetLoader.from langchain.document_loaders import RocksetLoaderfrom rockset import RocksetClient, Regions, modelsloader = RocksetLoader( RocksetClient(Regions.usw2a1, "<api key>"), models.QueryRequestSql(query="SELECT * FROM langchain_demo LIMIT 3"), # SQL query ["text"], # content columns metadata_keys=["id", "date"], # metadata columns)Here, you can see that the following query is run:SELECT * FROM langchain_demo LIMIT 3The text column in the collection is used as the page content, and the record's id and date columns are used as metadata (if you do not pass anything into metadata_keys, the whole Rockset document will be used as metadata). To execute the query and access an iterator over the resulting Documents, run:loader.lazy_load()To execute the query and access all resulting Documents at once, run:loader.load()Here is an example response of loader.load():[ Document( page_content="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas a libero porta, dictum ipsum eget, hendrerit neque. Morbi blandit, ex ut suscipit viverra, enim velit tincidunt tellus, a tempor velit nunc et ex. Proin hendrerit odio nec convallis lobortis. Aenean in purus dolor. Vestibulum orci orci, laoreet eget magna in, commodo euismod justo.", metadata={"id": 83209, "date": "2022-11-13T18:26:45.000000Z"} ), Document( page_content="Integer at finibus odio. Nam sit amet enim cursus lacus gravida feugiat vestibulum sed libero. Aenean eleifend est quis elementum tincidunt. Curabitur sit amet ornare erat. Nulla id dolor ut magna volutpat sodales fringilla vel ipsum. Donec ultricies, lacus sed fermentum dignissim, lorem elit aliquam ligula, sed suscipit sapien purus nec ligula.", metadata={"id": 89313, "date": "2022-11-13T18:28:53.000000Z"} ), Document( page_content="Morbi tortor enim, commodo id efficitur vitae, fringilla nec mi. Nullam molestie faucibus aliquet. Praesent a est facilisis, condimentum justo sit amet, viverra erat. Fusce volutpat nisi vel purus blandit, et facilisis felis accumsan. Phasellus luctus ligula ultrices tellus tempor hendrerit. 
Donec at ultricies leo.", metadata={"id": 87732, "date": "2022-11-13T18:49:04.000000Z"} )]Using multiple columns as contentYou can choose to use multiple columns as content:from langchain.document_loaders import RocksetLoaderfrom rockset import RocksetClient, Regions, modelsloader = RocksetLoader( RocksetClient(Regions.usw2a1, "<api key>"), models.QueryRequestSql(query="SELECT * FROM langchain_demo WHERE id=38 LIMIT 1"), ["sentence1", "sentence2"], # TWO content columns)Assuming the "sentence1" field is "This is the first sentence." and the "sentence2" field is "This is the second sentence.", the page_content of the resulting Document would be:This is the first sentence.This is the second sentence.You can define your own function to join content columns by setting the content_columns_joiner argument in the RocksetLoader constructor. content_columns_joiner is a method that takes in a List[Tuple[str, Any]] as an argument, representing a list of tuples of (column name, column value). By default, this is a method that joins each column value with a new line.For example, if you wanted to join sentence1 and sentence2 with a space instead of a new line, you could set content_columns_joiner like so:RocksetLoader( RocksetClient(Regions.usw2a1, "<api key>"), models.QueryRequestSql(query="SELECT * FROM langchain_demo WHERE id=38 LIMIT 1"), ["sentence1", "sentence2"], content_columns_joiner=lambda docs: " ".join( [doc[1] for doc in docs] ), # join with space instead of \n)The page_content of the resulting Document would be:This is the first sentence. This is the second sentence.Oftentimes you want to include the column name in the page_content. You can do that like this:RocksetLoader( RocksetClient(Regions.usw2a1, "<api key>"), models.QueryRequestSql(query="SELECT * FROM langchain_demo WHERE id=38 LIMIT 1"), ["sentence1", "sentence2"], content_columns_joiner=lambda docs: "\n".join( [f"{doc[0]}: {doc[1]}" for doc in docs] ),)This would result in the following page_content:sentence1: This is the first sentence.sentence2: This is the second sentence.PreviousRoamNextRSS FeedsSetting up the environmentUsing multiple columns as content
543
https://python.langchain.com/docs/integrations/document_loaders/rss
ComponentsDocument loadersRSS FeedsRSS FeedsThis covers how to load HTML news articles from a list of RSS feed URLs into a document format that we can use downstream.pip install feedparser newspaper3k listparserfrom langchain.document_loaders import RSSFeedLoaderurls = ["https://news.ycombinator.com/rss"]Pass in urls to load them into Documentsloader = RSSFeedLoader(urls=urls)data = loader.load()print(len(data))print(data[0].page_content) (next Rich) 04 August 2023 Rich Hickey It is with a mixture of heartache and optimism that I announce today my (long planned) retirement from commercial software development, and my employment at Nubank. It’s been thrilling to see Clojure and Datomic successfully applied at scale. I look forward to continuing to lead ongoing work maintaining and enhancing Clojure with Alex, Stu, Fogus and many others, as an independent developer once again. We have many useful things planned for 1.12 and beyond. The community remains friendly, mature and productive, and is taking Clojure into many interesting new domains. I want to highlight and thank Nubank for their ongoing sponsorship of Alex, Fogus and the core team, as well as the Clojure community at large. Stu will continue to lead the development of Datomic at Nubank, where the Datomic team grows and thrives. I’m particularly excited to see where the new free availability of Datomic will lead. My time with Cognitect remains the highlight of my career. I have learned from absolutely everyone on our team, and am forever grateful to all for our interactions. There are too many people to thank here, but I must extend my sincerest appreciation and love to Stu and Justin for (repeatedly) taking a risk on me and my ideas, and for being the best of partners and friends, at all times fully embodying the notion of integrity. And of course to Alex Miller - who possesses in abundance many skills I lack, and without whose indomitable spirit, positivity and friendship Clojure would not have become what it did. I have made many friends through Clojure and Cognitect, and I hope to nurture those friendships moving forward. Retirement returns me to the freedom and independence I had when originally developing Clojure. The journey continues!You can pass arguments to the NewsURLLoader which it uses to load articles.loader = RSSFeedLoader(urls=urls, nlp=True)data = loader.load()print(len(data)) Error fetching or processing https://twitter.com/andrewmccalip/status/1687405505604734978, exception: You must `parse()` an article first! Error processing entry https://twitter.com/andrewmccalip/status/1687405505604734978, exception: list index out of range 13data[0].metadata['keywords'] ['nubank', 'alex', 'stu', 'taking', 'team', 'remains', 'rich', 'clojure', 'thank', 'planned', 'datomic']data[0].metadata['summary'] 'It’s been thrilling to see Clojure and Datomic successfully applied at scale.\nI look forward to continuing to lead ongoing work maintaining and enhancing Clojure with Alex, Stu, Fogus and many others, as an independent developer once again.\nThe community remains friendly, mature and productive, and is taking Clojure into many interesting new domains.\nI want to highlight and thank Nubank for their ongoing sponsorship of Alex, Fogus and the core team, as well as the Clojure community at large.\nStu will continue to lead the development of Datomic at Nubank, where the Datomic team grows and thrives.'You can also use an OPML file such as a Feedly export. 
Pass in either a URL or the OPML contents.with open("example_data/sample_rss_feeds.opml", "r") as f: loader = RSSFeedLoader(opml=f.read())data = loader.load()print(len(data)) Error fetching http://www.engadget.com/rss-full.xml, exception: Error fetching http://www.engadget.com/rss-full.xml, exception: document declared as us-ascii, but parsed as utf-8 20data[0].page_content 'The electric vehicle startup Fisker made a splash in Huntington Beach last night, showing off a range of new EVs it plans to build alongside the Fisker Ocean, which is slowly beginning deliveries in Europe and the US. With shades of Lotus circa 2010, it seems there\'s something for most tastes, with a powerful four-door GT, a versatile pickup truck, and an affordable electric city car.\n\n"We want the world to know that we have big plans and intend to move into several different segments, redefining each with our unique blend of design, innovation, and sustainability," said CEO Henrik Fisker.\n\nStarting with the cheapest, the Fisker PEAR—a cutesy acronym for "Personal Electric Automotive Revolution"—is said to use 35 percent fewer parts than other small EVs. Although it\'s a smaller car, the PEAR seats six thanks to front and rear bench seats. Oh, and it has a frunk, which the company is calling the "froot," something that will satisfy some British English speakers like Ars\' friend and motoring journalist Jonny Smith.\n\nBut most exciting is the price—starting at $29,900 and scheduled for 2025. Fisker plans to contract with Foxconn to build the PEAR in Lordstown, Ohio, meaning it would be eligible for federal tax incentives.\n\nAdvertisement\n\nThe Fisker Alaska is the company\'s pickup truck, built on a modified version of the platform used by the Ocean. It has an extendable cargo bed, which can be as little as 4.5 feet (1,371 mm) or as much as 9.2 feet (2,804 mm) long. Fisker claims it will be both the lightest EV pickup on sale and the most sustainable pickup truck in the world. Range will be an estimated 230–240 miles (370–386 km).\n\nThis, too, is slated for 2025, and also at a relatively affordable price, starting at $45,400. Fisker hopes to build this car in North America as well, although it isn\'t saying where that might take place.\n\nFinally, there\'s the Ronin, a four-door GT that bears more than a passing resemblance to the Fisker Karma, Henrik Fisker\'s 2012 creation. There\'s no price for this one, but Fisker says its all-wheel drive powertrain will boast 1,000 hp (745 kW) and will hit 60 mph from a standing start in two seconds—just about as fast as modern tires will allow. Expect a massive battery in this one, as Fisker says it\'s targeting a 600-mile (956 km) range.\n\n"Innovation and sustainability, along with design, are our three brand values. By 2027, we intend to produce the world’s first climate-neutral vehicle, and as our customers reinvent their relationships with mobility, we want to be a leader in software-defined transportation," Fisker said.'PreviousRocksetNextRST
544
https://python.langchain.com/docs/integrations/document_loaders/rst
ComponentsDocument loadersRSTOn this pageRSTA reStructuredText (RST) file is a file format for textual data used primarily in the Python programming language community for technical documentation.UnstructuredRSTLoader​You can load data from RST files with UnstructuredRSTLoader using the following workflow.from langchain.document_loaders import UnstructuredRSTLoaderloader = UnstructuredRSTLoader(file_path="example_data/README.rst", mode="elements")docs = loader.load()print(docs[0]) page_content='Example Docs' metadata={'source': 'example_data/README.rst', 'filename': 'README.rst', 'file_directory': 'example_data', 'filetype': 'text/x-rst', 'page_number': 1, 'category': 'Title'}PreviousRSS FeedsNextSitemapUnstructuredRSTLoader
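The example above loads the file in "elements" mode, which yields one Document per structural element. A minimal sketch of the default behaviour, assuming the same example_data/README.rst file: omitting the mode argument returns the whole file as a single Document, which is often all you need for plain ingestion.

```python
from langchain.document_loaders import UnstructuredRSTLoader

# Default ("single") mode: all elements are concatenated into one Document.
loader = UnstructuredRSTLoader(file_path="example_data/README.rst")
docs = loader.load()

print(len(docs))                   # expected: 1
print(docs[0].page_content[:100])  # beginning of the combined text
```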
545
https://python.langchain.com/docs/integrations/document_loaders/sitemap
ComponentsDocument loadersSitemapOn this pageSitemapExtends from the WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrape and load all pages in the sitemap, returning each page as a Document.The scraping is done concurrently. There are reasonable limits to concurrent requests, defaulting to 2 per second. If you aren't concerned about being a good citizen, or you control the scrapped server, or don't care about load. Note, while this will speed up the scraping process, but it may cause the server to block you. Be careful!pip install nest_asyncio Requirement already satisfied: nest_asyncio in /Users/tasp/Code/projects/langchain/.venv/lib/python3.10/site-packages (1.5.6) [notice] A new release of pip available: 22.3.1 -> 23.0.1 [notice] To update, run: pip install --upgrade pip# fixes a bug with asyncio and jupyterimport nest_asyncionest_asyncio.apply()from langchain.document_loaders.sitemap import SitemapLoadersitemap_loader = SitemapLoader(web_path="https://langchain.readthedocs.io/sitemap.xml")docs = sitemap_loader.load()You can change the requests_per_second parameter to increase the max concurrent requests. and use requests_kwargs to pass kwargs when send requests.sitemap_loader.requests_per_second = 2# Optional: avoid `[SSL: CERTIFICATE_VERIFY_FAILED]` issuesitemap_loader.requests_kwargs = {"verify": False}docs[0] Document(page_content='\n\n\n\n\n\nWelcome to LangChain — 🦜🔗 LangChain 0.0.123\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSkip to main content\n\n\n\n\n\n\n\n\n\n\nCtrl+K\n\n\n\n\n\n\n\n\n\n\n\n\n🦜🔗 LangChain 0.0.123\n\n\n\nGetting Started\n\nQuickstart Guide\n\nModules\n\nPrompt Templates\nGetting Started\nKey Concepts\nHow-To Guides\nCreate a custom prompt template\nCreate a custom example selector\nProvide few shot examples to a prompt\nPrompt Serialization\nExample Selectors\nOutput Parsers\n\n\nReference\nPromptTemplates\nExample Selector\n\n\n\n\nLLMs\nGetting Started\nKey Concepts\nHow-To Guides\nGeneric Functionality\nCustom LLM\nFake LLM\nLLM Caching\nLLM Serialization\nToken Usage Tracking\n\n\nIntegrations\nAI21\nAleph Alpha\nAnthropic\nAzure OpenAI LLM Example\nBanana\nCerebriumAI LLM Example\nCohere\nDeepInfra LLM Example\nForefrontAI LLM Example\nGooseAI LLM Example\nHugging Face Hub\nManifest\nModal\nOpenAI\nPetals LLM Example\nPromptLayer OpenAI\nSageMakerEndpoint\nSelf-Hosted Models via Runhouse\nStochasticAI\nWriter\n\n\nAsync API for LLM\nStreaming with LLMs\n\n\nReference\n\n\nDocument Loaders\nKey Concepts\nHow To Guides\nCoNLL-U\nAirbyte JSON\nAZLyrics\nBlackboard\nCollege Confidential\nCopy Paste\nCSV Loader\nDirectory Loader\nEmail\nEverNote\nFacebook Chat\nFigma\nGCS Directory\nGCS File Storage\nGitBook\nGoogle Drive\nGutenberg\nHacker News\nHTML\niFixit\nImages\nIMSDb\nMarkdown\nNotebook\nNotion\nObsidian\nPDF\nPowerPoint\nReadTheDocs Documentation\nRoam\ns3 Directory\ns3 File\nSubtitle Files\nTelegram\nUnstructured File Loader\nURL\nWeb Base\nWord Documents\nYouTube\n\n\n\n\nUtils\nKey Concepts\nGeneric Utilities\nBash\nBing Search\nGoogle Search\nGoogle Serper API\nIFTTT WebHooks\nPython REPL\nRequests\nSearxNG Search API\nSerpAPI\nWolfram Alpha\nZapier Natural Language Actions API\n\n\nReference\nPython REPL\nSerpAPI\nSearxNG Search\nDocstore\nText Splitter\nEmbeddings\nVectorStores\n\n\n\n\nIndexes\nGetting Started\nKey Concepts\nHow To Guides\nEmbeddings\nHypothetical Document Embeddings\nText Splitter\nVectorStores\nAtlasDB\nChroma\nDeep 
Lake\nElasticSearch\nFAISS\nMilvus\nOpenSearch\nPGVector\nPinecone\nQdrant\nRedis\nWeaviate\nChatGPT Plugin Retriever\nVectorStore Retriever\nAnalyze Document\nChat Index\nGraph QA\nQuestion Answering with Sources\nQuestion Answering\nSummarization\nRetrieval Question/Answering\nRetrieval Question Answering with Sources\nVector DB Text Generation\n\n\n\n\nChains\nGetting Started\nHow-To Guides\nGeneric Chains\nLoading from LangChainHub\nLLM Chain\nSequential Chains\nSerialization\nTransformation Chain\n\n\nUtility Chains\nAPI Chains\nSelf-Critique Chain with Constitutional AI\nBashChain\nLLMCheckerChain\nLLM Math\nLLMRequestsChain\nLLMSummarizationCheckerChain\nModeration\nPAL\nSQLite example\n\n\nAsync API for Chain\n\n\nKey Concepts\nReference\n\n\nAgents\nGetting Started\nKey Concepts\nHow-To Guides\nAgents and Vectorstores\nAsync API for Agent\nConversation Agent (for Chat Models)\nChatGPT Plugins\nCustom Agent\nDefining Custom Tools\nHuman as a tool\nIntermediate Steps\nLoading from LangChainHub\nMax Iterations\nMulti Input Tools\nSearch Tools\nSerialization\nAdding SharedMemory to an Agent and its Tools\nCSV Agent\nJSON Agent\nOpenAPI Agent\nPandas Dataframe Agent\nPython Agent\nSQL Database Agent\nVectorstore Agent\nMRKL\nMRKL Chat\nReAct\nSelf Ask With Search\n\n\nReference\n\n\nMemory\nGetting Started\nKey Concepts\nHow-To Guides\nConversationBufferMemory\nConversationBufferWindowMemory\nEntity Memory\nConversation Knowledge Graph Memory\nConversationSummaryMemory\nConversationSummaryBufferMemory\nConversationTokenBufferMemory\nAdding Memory To an LLMChain\nAdding Memory to a Multi-Input Chain\nAdding Memory to an Agent\nChatGPT Clone\nConversation Agent\nConversational Memory Customization\nCustom Memory\nMultiple Memory\n\n\n\n\nChat\nGetting Started\nKey Concepts\nHow-To Guides\nAgent\nChat Vector DB\nFew Shot Examples\nMemory\nPromptLayer ChatOpenAI\nStreaming\nRetrieval Question/Answering\nRetrieval Question Answering with Sources\n\n\n\n\n\nUse Cases\n\nAgents\nChatbots\nGenerate Examples\nData Augmented Generation\nQuestion Answering\nSummarization\nQuerying Tabular Data\nExtraction\nEvaluation\nAgent Benchmarking: Search + Calculator\nAgent VectorDB Question Answering Benchmarking\nBenchmarking Template\nData Augmented Question Answering\nUsing Hugging Face Datasets\nLLM Math\nQuestion Answering Benchmarking: Paul Graham Essay\nQuestion Answering Benchmarking: State of the Union Address\nQA Generation\nQuestion Answering\nSQL Question Answering Benchmarking: Chinook\n\n\nModel Comparison\n\nReference\n\nInstallation\nIntegrations\nAPI References\nPrompts\nPromptTemplates\nExample Selector\n\n\nUtilities\nPython REPL\nSerpAPI\nSearxNG Search\nDocstore\nText Splitter\nEmbeddings\nVectorStores\n\n\nChains\nAgents\n\n\n\nEcosystem\n\nLangChain Ecosystem\nAI21 Labs\nAtlasDB\nBanana\nCerebriumAI\nChroma\nCohere\nDeepInfra\nDeep Lake\nForefrontAI\nGoogle Search Wrapper\nGoogle Serper Wrapper\nGooseAI\nGraphsignal\nHazy Research\nHelicone\nHugging Face\nMilvus\nModal\nNLPCloud\nOpenAI\nOpenSearch\nPetals\nPGVector\nPinecone\nPromptLayer\nQdrant\nRunhouse\nSearxNG Search API\nSerpAPI\nStochasticAI\nUnstructured\nWeights & Biases\nWeaviate\nWolfram Alpha Wrapper\nWriter\n\n\n\nAdditional Resources\n\nLangChainHub\nGlossary\nLangChain Gallery\nDeployments\nTracing\nDiscord\nProduction Support\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n.rst\n\n\n\n\n\n\n\n.pdf\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nWelcome to LangChain\n\n\n\n\n Contents 
\n\n\n\nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosystem\nAdditional Resources\n\n\n\n\n\n\n\n\nWelcome to LangChain#\nLarge language models (LLMs) are emerging as a transformative technology, enabling\ndevelopers to build applications that they previously could not.\nBut using these LLMs in isolation is often not enough to\ncreate a truly powerful app - the real power comes when you are able to\ncombine them with other sources of computation or knowledge.\nThis library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include:\n❓ Question Answering over specific documents\n\nDocumentation\nEnd-to-end Example: Question Answering over Notion Database\n\n💬 Chatbots\n\nDocumentation\nEnd-to-end Example: Chat-LangChain\n\n🤖 Agents\n\nDocumentation\nEnd-to-end Example: GPT+WolframAlpha\n\n\nGetting Started#\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\n\nGetting Started Documentation\n\n\n\n\n\nModules#\nThere are several main modules that LangChain provides support for.\nFor each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.\nThese modules are, in increasing order of complexity:\n\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\nLLMs: This includes a generic interface for all LLMs, and common utilities for working with LLMs.\nDocument Loaders: This includes a standard interface for loading documents, as well as specific integrations to all types of text data sources.\nUtils: Language models are often more powerful when interacting with other sources of knowledge or computation. This can include Python REPLs, embeddings, search engines, and more. LangChain provides a large collection of common utils to use in your application.\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nChat: Chat models are a variation on Language Models that expose a different API - rather than working with raw text, they work with messages. LangChain provides a standard interface for working with them and doing all the same things as above.\n\n\n\n\n\nUse Cases#\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\n\nAgents: Agents are systems that use a language model to interact with other tools. 
These can be used to do more grounded question/answering, interact with APIs, or even take actions.\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\nData Augmented Generation: Data Augmented Generation involves specific types of chains that first interact with an external datasource to fetch data to use in the generation step. Examples of this include summarization of long pieces of text and question/answering over specific data sources.\nQuestion Answering: Answering questions over specific documents, only utilizing the information in those documents to construct an answer. A type of Data Augmented Generation.\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\nGenerate similar examples: Generating similar examples to a given input. This is a common use case for many applications, and LangChain provides some prompts/chains for assisting in this.\nCompare models: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\n\n\n\n\n\nReference Docs#\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\n\nReference Documentation\n\n\n\n\n\nLangChain Ecosystem#\nGuides for how other companies/products can be used with LangChain\n\nLangChain Ecosystem\n\n\n\n\n\nAdditional Resources#\nAdditional collection of resources we think may be useful as you develop your application!\n\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nDiscord: Join us on our Discord to discuss all things LangChain!\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.\n\n\n\n\n\n\n\n\n\n\n\nnext\nQuickstart Guide\n\n\n\n\n\n\n\n\n\n Contents\n \n\n\nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosystem\nAdditional Resources\n\n\n\n\n\n\n\n\n\nBy Harrison Chase\n\n\n\n\n \n © Copyright 2023, Harrison Chase.\n \n\n\n\n\n Last updated on Mar 24, 2023.\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n', lookup_str='', metadata={'source': 'https://python.langchain.com/en/stable/', 'loc': 'https://python.langchain.com/en/stable/', 'lastmod': '2023-03-24T19:30:54.647430+00:00', 'changefreq': 'weekly', 'priority': '1'}, lookup_index=0)Filtering sitemap URLs​Sitemaps can be massive files, with thousands of URLs. 
Often you don't need every single one of them. You can filter the URLs by passing a list of strings or regex patterns to the url_filter parameter. Only URLs that match one of the patterns will be loaded.loader = SitemapLoader( "https://langchain.readthedocs.io/sitemap.xml", filter_urls=["https://python.langchain.com/en/latest/"],)documents = loader.load()documents[0] Document(page_content='\n\n\n\n\n\nWelcome to LangChain — 🦜🔗 LangChain 0.0.123\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSkip to main content\n\n\n\n\n\n\n\n\n\n\nCtrl+K\n\n\n\n\n\n\n\n\n\n\n\n\n🦜🔗 LangChain 0.0.123\n\n\n\nGetting Started\n\nQuickstart Guide\n\nModules\n\nModels\nLLMs\nGetting Started\nGeneric Functionality\nHow to use the async API for LLMs\nHow to write a custom LLM wrapper\nHow (and why) to use the fake LLM\nHow to cache LLM calls\nHow to serialize LLM classes\nHow to stream LLM responses\nHow to track token usage\n\n\nIntegrations\nAI21\nAleph Alpha\nAnthropic\nAzure OpenAI LLM Example\nBanana\nCerebriumAI LLM Example\nCohere\nDeepInfra LLM Example\nForefrontAI LLM Example\nGooseAI LLM Example\nHugging Face Hub\nManifest\nModal\nOpenAI\nPetals LLM Example\nPromptLayer OpenAI\nSageMakerEndpoint\nSelf-Hosted Models via Runhouse\nStochasticAI\nWriter\n\n\nReference\n\n\nChat Models\nGetting Started\nHow-To Guides\nHow to use few shot examples\nHow to stream responses\n\n\nIntegrations\nAzure\nOpenAI\nPromptLayer ChatOpenAI\n\n\n\n\nText Embedding Models\nAzureOpenAI\nCohere\nFake Embeddings\nHugging Face Hub\nInstructEmbeddings\nOpenAI\nSageMaker Endpoint Embeddings\nSelf Hosted Embeddings\nTensorflowHub\n\n\n\n\nPrompts\nPrompt Templates\nGetting Started\nHow-To Guides\nHow to create a custom prompt template\nHow to create a prompt template that uses few shot examples\nHow to work with partial Prompt Templates\nHow to serialize prompts\n\n\nReference\nPromptTemplates\nExample Selector\n\n\n\n\nChat Prompt Template\nExample Selectors\nHow to create a custom example selector\nLengthBased ExampleSelector\nMaximal Marginal Relevance ExampleSelector\nNGram Overlap ExampleSelector\nSimilarity ExampleSelector\n\n\nOutput Parsers\nOutput Parsers\nCommaSeparatedListOutputParser\nOutputFixingParser\nPydanticOutputParser\nRetryOutputParser\nStructured Output Parser\n\n\n\n\nIndexes\nGetting Started\nDocument Loaders\nCoNLL-U\nAirbyte JSON\nAZLyrics\nBlackboard\nCollege Confidential\nCopy Paste\nCSV Loader\nDirectory Loader\nEmail\nEverNote\nFacebook Chat\nFigma\nGCS Directory\nGCS File Storage\nGitBook\nGoogle Drive\nGutenberg\nHacker News\nHTML\niFixit\nImages\nIMSDb\nMarkdown\nNotebook\nNotion\nObsidian\nPDF\nPowerPoint\nReadTheDocs Documentation\nRoam\ns3 Directory\ns3 File\nSubtitle Files\nTelegram\nUnstructured File Loader\nURL\nWeb Base\nWord Documents\nYouTube\n\n\nText Splitters\nGetting Started\nCharacter Text Splitter\nHuggingFace Length Function\nLatex Text Splitter\nMarkdown Text Splitter\nNLTK Text Splitter\nPython Code Text Splitter\nRecursiveCharacterTextSplitter\nSpacy Text Splitter\ntiktoken (OpenAI) Length Function\nTiktokenText Splitter\n\n\nVectorstores\nGetting Started\nAtlasDB\nChroma\nDeep Lake\nElasticSearch\nFAISS\nMilvus\nOpenSearch\nPGVector\nPinecone\nQdrant\nRedis\nWeaviate\n\n\nRetrievers\nChatGPT Plugin Retriever\nVectorStore Retriever\n\n\n\n\nMemory\nGetting Started\nHow-To Guides\nConversationBufferMemory\nConversationBufferWindowMemory\nEntity Memory\nConversation Knowledge Graph 
Memory\nConversationSummaryMemory\nConversationSummaryBufferMemory\nConversationTokenBufferMemory\nHow to add Memory to an LLMChain\nHow to add memory to a Multi-Input Chain\nHow to add Memory to an Agent\nHow to customize conversational memory\nHow to create a custom Memory class\nHow to use multiple memroy classes in the same chain\n\n\n\n\nChains\nGetting Started\nHow-To Guides\nAsync API for Chain\nLoading from LangChainHub\nLLM Chain\nSequential Chains\nSerialization\nTransformation Chain\nAnalyze Document\nChat Index\nGraph QA\nHypothetical Document Embeddings\nQuestion Answering with Sources\nQuestion Answering\nSummarization\nRetrieval Question/Answering\nRetrieval Question Answering with Sources\nVector DB Text Generation\nAPI Chains\nSelf-Critique Chain with Constitutional AI\nBashChain\nLLMCheckerChain\nLLM Math\nLLMRequestsChain\nLLMSummarizationCheckerChain\nModeration\nPAL\nSQLite example\n\n\nReference\n\n\nAgents\nGetting Started\nTools\nGetting Started\nDefining Custom Tools\nMulti Input Tools\nBash\nBing Search\nChatGPT Plugins\nGoogle Search\nGoogle Serper API\nHuman as a tool\nIFTTT WebHooks\nPython REPL\nRequests\nSearch Tools\nSearxNG Search API\nSerpAPI\nWolfram Alpha\nZapier Natural Language Actions API\n\n\nAgents\nAgent Types\nCustom Agent\nConversation Agent (for Chat Models)\nConversation Agent\nMRKL\nMRKL Chat\nReAct\nSelf Ask With Search\n\n\nToolkits\nCSV Agent\nJSON Agent\nOpenAPI Agent\nPandas Dataframe Agent\nPython Agent\nSQL Database Agent\nVectorstore Agent\n\n\nAgent Executors\nHow to combine agents and vectorstores\nHow to use the async API for Agents\nHow to create ChatGPT Clone\nHow to access intermediate steps\nHow to cap the max number of iterations\nHow to add SharedMemory to an Agent and its Tools\n\n\n\n\n\nUse Cases\n\nPersonal Assistants\nQuestion Answering over Docs\nChatbots\nQuerying Tabular Data\nInteracting with APIs\nSummarization\nExtraction\nEvaluation\nAgent Benchmarking: Search + Calculator\nAgent VectorDB Question Answering Benchmarking\nBenchmarking Template\nData Augmented Question Answering\nUsing Hugging Face Datasets\nLLM Math\nQuestion Answering Benchmarking: Paul Graham Essay\nQuestion Answering Benchmarking: State of the Union Address\nQA Generation\nQuestion Answering\nSQL Question Answering Benchmarking: Chinook\n\n\n\nReference\n\nInstallation\nIntegrations\nAPI References\nPrompts\nPromptTemplates\nExample Selector\n\n\nUtilities\nPython REPL\nSerpAPI\nSearxNG Search\nDocstore\nText Splitter\nEmbeddings\nVectorStores\n\n\nChains\nAgents\n\n\n\nEcosystem\n\nLangChain Ecosystem\nAI21 Labs\nAtlasDB\nBanana\nCerebriumAI\nChroma\nCohere\nDeepInfra\nDeep Lake\nForefrontAI\nGoogle Search Wrapper\nGoogle Serper Wrapper\nGooseAI\nGraphsignal\nHazy Research\nHelicone\nHugging Face\nMilvus\nModal\nNLPCloud\nOpenAI\nOpenSearch\nPetals\nPGVector\nPinecone\nPromptLayer\nQdrant\nRunhouse\nSearxNG Search API\nSerpAPI\nStochasticAI\nUnstructured\nWeights & Biases\nWeaviate\nWolfram Alpha Wrapper\nWriter\n\n\n\nAdditional Resources\n\nLangChainHub\nGlossary\nLangChain Gallery\nDeployments\nTracing\nDiscord\nProduction Support\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n.rst\n\n\n\n\n\n\n\n.pdf\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nWelcome to LangChain\n\n\n\n\n Contents \n\n\n\nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosystem\nAdditional Resources\n\n\n\n\n\n\n\n\nWelcome to LangChain#\nLangChain is a framework for developing applications powered by language models. 
We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:\n\nBe data-aware: connect a language model to other sources of data\nBe agentic: allow a language model to interact with its environment\n\nThe LangChain framework is designed with the above principles in mind.\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.\n\nGetting Started#\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\n\nGetting Started Documentation\n\n\n\n\n\nModules#\nThere are several main modules that LangChain provides support for.\nFor each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.\nThese modules are, in increasing order of complexity:\n\nModels: The various model types and model integrations LangChain supports.\nPrompts: This includes prompt management, prompt optimization, and prompt serialization.\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\n\n\n\n\n\nUse Cases#\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\n\nPersonal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\nQuestion Answering: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\nInteracting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.\nExtraction: Extract structured information from text.\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\n\n\n\n\n\nReference Docs#\nAll of LangChain’s reference documentation, in one place. 
Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\n\nReference Documentation\n\n\n\n\n\nLangChain Ecosystem#\nGuides for how other companies/products can be used with LangChain\n\nLangChain Ecosystem\n\n\n\n\n\nAdditional Resources#\nAdditional collection of resources we think may be useful as you develop your application!\n\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\nDiscord: Join us on our Discord to discuss all things LangChain!\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.\n\n\n\n\n\n\n\n\n\n\n\nnext\nQuickstart Guide\n\n\n\n\n\n\n\n\n\n Contents\n \n\n\nGetting Started\nModules\nUse Cases\nReference Docs\nLangChain Ecosystem\nAdditional Resources\n\n\n\n\n\n\n\n\n\nBy Harrison Chase\n\n\n\n\n \n © Copyright 2023, Harrison Chase.\n \n\n\n\n\n Last updated on Mar 27, 2023.\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n', lookup_str='', metadata={'source': 'https://python.langchain.com/en/latest/', 'loc': 'https://python.langchain.com/en/latest/', 'lastmod': '2023-03-27T22:50:49.790324+00:00', 'changefreq': 'daily', 'priority': '0.9'}, lookup_index=0)Add custom scraping rules​The SitemapLoader uses beautifulsoup4 for the scraping process, and it scrapes every element on the page by default. The SitemapLoader constructor accepts a custom scraping function. This feature can be helpful to tailor the scraping process to your specific needs; for example, you might want to avoid scraping headers or navigation elements. 
The following example shows how to develop and use a custom function to avoid navigation and header elements.Import the beautifulsoup4 library and define the custom function.pip install beautifulsoup4from bs4 import BeautifulSoupdef remove_nav_and_header_elements(content: BeautifulSoup) -> str: # Find all 'nav' and 'header' elements in the BeautifulSoup object nav_elements = content.find_all("nav") header_elements = content.find_all("header") # Remove each 'nav' and 'header' element from the BeautifulSoup object for element in nav_elements + header_elements: element.decompose() return str(content.get_text())Add your custom function to the SitemapLoader object.loader = SitemapLoader( "https://langchain.readthedocs.io/sitemap.xml", filter_urls=["https://python.langchain.com/en/latest/"], parsing_function=remove_nav_and_header_elements,)Local Sitemap​The sitemap loader can also be used to load local files.sitemap_loader = SitemapLoader(web_path="example_data/sitemap.xml", is_local=True)docs = sitemap_loader.load() Fetching pages: 100%|####################################################################################################################################| 3/3 [00:00<00:00, 3.91it/s]PreviousRSTNextSlackFiltering sitemap URLsAdd custom scraping rulesLocal Sitemap
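Since filter_urls accepts regex patterns as well as plain prefixes, you can also target a subset of pages more precisely. The sketch below is illustrative rather than required: the pattern and the lower request rate are assumptions chosen for this example.

```python
from langchain.document_loaders.sitemap import SitemapLoader

# Hypothetical pattern: only load /en/latest/ pages whose path mentions "modules".
loader = SitemapLoader(
    "https://langchain.readthedocs.io/sitemap.xml",
    filter_urls=[r"https://python\.langchain\.com/en/latest/.*modules.*"],
)
loader.requests_per_second = 1  # be extra polite to the server

docs = loader.load()
print(len(docs))
```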
546
https://python.langchain.com/docs/integrations/document_loaders/slack
ComponentsDocument loadersSlackOn this pageSlackSlack is an instant messaging program.This notebook covers how to load documents from a Zipfile generated from a Slack export.In order to get this Slack export, follow these instructions:🧑 Instructions for ingesting your own dataset​Export your Slack data. You can do this by going to your Workspace Management page and clicking the Import/Export option ({your_slack_domain}.slack.com/services/export). Then, choose the right date range and click Start export. Slack will send you an email and a DM when the export is ready.The download will produce a .zip file in your Downloads folder (or wherever your downloads can be found, depending on your OS configuration).Copy the path to the .zip file, and assign it as LOCAL_ZIPFILE below.from langchain.document_loaders import SlackDirectoryLoader# Optionally set your Slack URL. This will give you proper URLs in the docs sources.SLACK_WORKSPACE_URL = "https://xxx.slack.com"LOCAL_ZIPFILE = "" # Paste the local path to your Slack zip file here.loader = SlackDirectoryLoader(LOCAL_ZIPFILE, SLACK_WORKSPACE_URL)docs = loader.load()docsPreviousSitemapNextSnowflake🧑 Instructions for ingesting your own dataset
547
https://python.langchain.com/docs/integrations/document_loaders/snowflake
ComponentsDocument loadersSnowflakeSnowflakeThis notebook goes over how to load documents from Snowflakepip install snowflake-connector-pythonimport settings as sfrom langchain.document_loaders import SnowflakeLoaderQUERY = "select text, survey_id from CLOUD_DATA_SOLUTIONS.HAPPY_OR_NOT.OPEN_FEEDBACK limit 10"snowflake_loader = SnowflakeLoader( query=QUERY, user=s.SNOWFLAKE_USER, password=s.SNOWFLAKE_PASS, account=s.SNOWFLAKE_ACCOUNT, warehouse=s.SNOWFLAKE_WAREHOUSE, role=s.SNOWFLAKE_ROLE, database=s.SNOWFLAKE_DATABASE, schema=s.SNOWFLAKE_SCHEMA,)snowflake_documents = snowflake_loader.load()print(snowflake_documents)from langchain.document_loaders import SnowflakeLoaderimport settings as sQUERY = "select text, survey_id as source from CLOUD_DATA_SOLUTIONS.HAPPY_OR_NOT.OPEN_FEEDBACK limit 10"snowflake_loader = SnowflakeLoader( query=QUERY, user=s.SNOWFLAKE_USER, password=s.SNOWFLAKE_PASS, account=s.SNOWFLAKE_ACCOUNT, warehouse=s.SNOWFLAKE_WAREHOUSE, role=s.SNOWFLAKE_ROLE, database=s.SNOWFLAKE_DATABASE, schema=s.SNOWFLAKE_SCHEMA, metadata_columns=["source"],)snowflake_documents = snowflake_loader.load()print(snowflake_documents)PreviousSlackNextSource Code
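The examples above read credentials from a local settings module. A minimal sketch of the same call, assuming you keep credentials in environment variables instead (the variable names below are placeholders, not something the loader requires):

```python
import os

from langchain.document_loaders import SnowflakeLoader

QUERY = "select text, survey_id from CLOUD_DATA_SOLUTIONS.HAPPY_OR_NOT.OPEN_FEEDBACK limit 10"

# Same loader parameters as above, but sourced from the environment.
snowflake_loader = SnowflakeLoader(
    query=QUERY,
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASS"],
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    warehouse=os.environ["SNOWFLAKE_WAREHOUSE"],
    role=os.environ["SNOWFLAKE_ROLE"],
    database=os.environ["SNOWFLAKE_DATABASE"],
    schema=os.environ["SNOWFLAKE_SCHEMA"],
)
snowflake_documents = snowflake_loader.load()
print(len(snowflake_documents))
```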
548
https://python.langchain.com/docs/integrations/document_loaders/source_code
ComponentsDocument loadersSource CodeOn this pageSource CodeThis notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document.This approach can potentially improve the accuracy of QA models over source code. Currently, the supported languages for code parsing are Python and JavaScript. The language used for parsing can be configured, along with the minimum number of lines required to activate the splitting based on syntax.pip install esprimaimport warningswarnings.filterwarnings("ignore")from pprint import pprintfrom langchain.text_splitter import Languagefrom langchain.document_loaders.generic import GenericLoaderfrom langchain.document_loaders.parsers import LanguageParserloader = GenericLoader.from_filesystem( "./example_data/source_code", glob="*", suffixes=[".py", ".js"], parser=LanguageParser(),)docs = loader.load()len(docs) 6for document in docs: pprint(document.metadata) {'content_type': 'functions_classes', 'language': <Language.PYTHON: 'python'>, 'source': 'example_data/source_code/example.py'} {'content_type': 'functions_classes', 'language': <Language.PYTHON: 'python'>, 'source': 'example_data/source_code/example.py'} {'content_type': 'simplified_code', 'language': <Language.PYTHON: 'python'>, 'source': 'example_data/source_code/example.py'} {'content_type': 'functions_classes', 'language': <Language.JS: 'js'>, 'source': 'example_data/source_code/example.js'} {'content_type': 'functions_classes', 'language': <Language.JS: 'js'>, 'source': 'example_data/source_code/example.js'} {'content_type': 'simplified_code', 'language': <Language.JS: 'js'>, 'source': 'example_data/source_code/example.js'}print("\n\n--8<--\n\n".join([document.page_content for document in docs])) class MyClass: def __init__(self, name): self.name = name def greet(self): print(f"Hello, {self.name}!") --8<-- def main(): name = input("Enter your name: ") obj = MyClass(name) obj.greet() --8<-- # Code for: class MyClass: # Code for: def main(): if __name__ == "__main__": main() --8<-- class MyClass { constructor(name) { this.name = name; } greet() { console.log(`Hello, ${this.name}!`); } } --8<-- function main() { const name = prompt("Enter your name:"); const obj = new MyClass(name); obj.greet(); } --8<-- // Code for: class MyClass: // Code for: function main() { main();The parser can be disabled for small files. 
The parameter parser_threshold indicates the minimum number of lines that the source code file must have to be segmented using the parser.loader = GenericLoader.from_filesystem( "./example_data/source_code", glob="*", suffixes=[".py"], parser=LanguageParser(language=Language.PYTHON, parser_threshold=1000),)docs = loader.load()len(docs) 1print(docs[0].page_content) class MyClass: def __init__(self, name): self.name = name def greet(self): print(f"Hello, {self.name}!") def main(): name = input("Enter your name: ") obj = MyClass(name) obj.greet() if __name__ == "__main__": main() Splitting​Additional splitting could be needed for those functions, classes, or scripts that are too big.loader = GenericLoader.from_filesystem( "./example_data/source_code", glob="*", suffixes=[".js"], parser=LanguageParser(language=Language.JS),)docs = loader.load()from langchain.text_splitter import ( RecursiveCharacterTextSplitter, Language,)js_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.JS, chunk_size=60, chunk_overlap=0)result = js_splitter.split_documents(docs)len(result) 7print("\n\n--8<--\n\n".join([document.page_content for document in result])) class MyClass { constructor(name) { this.name = name; --8<-- } --8<-- greet() { console.log(`Hello, ${this.name}!`); } } --8<-- function main() { const name = prompt("Enter your name:"); --8<-- const obj = new MyClass(name); obj.greet(); } --8<-- // Code for: class MyClass { // Code for: function main() { --8<-- main();PreviousSnowflakeNextSpreedlySplitting
549
https://python.langchain.com/docs/integrations/document_loaders/spreedly
ComponentsDocument loadersSpreedlySpreedlySpreedly is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized by Spreedly are stored at Spreedly, allowing you to independently store a card and then pass that card to different end points based on your business requirements.This notebook covers how to load data from the Spreedly REST API into a format that can be ingested into LangChain, along with example usage for vectorization.Note: this notebook assumes the following packages are installed: openai, chromadb, and tiktoken.import osfrom langchain.document_loaders import SpreedlyLoaderfrom langchain.indexes import VectorstoreIndexCreatorSpreedly API requires an access token, which can be found inside the Spreedly Admin Console.This document loader does not currently support pagination, nor access to more complex objects which require additional parameters. It also requires a resource option which defines what objects you want to load.Following resources are available:gateways_options: Documentationgateways: Documentationreceivers_options: Documentationreceivers: Documentationpayment_methods: Documentationcertificates: Documentationtransactions: Documentationenvironments: Documentationspreedly_loader = SpreedlyLoader( os.environ["SPREEDLY_ACCESS_TOKEN"], "gateways_options")# Create a vectorstore retriever from the loader# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more detailsindex = VectorstoreIndexCreator().from_loaders([spreedly_loader])spreedly_doc_retriever = index.vectorstore.as_retriever() Using embedded DuckDB without persistence: data will be transient# Test the retrieverspreedly_doc_retriever.get_relevant_documents("CRC") [Document(page_content='installment_grace_period_duration\nreference_data_code\ninvoice_number\ntax_management_indicator\noriginal_amount\ninvoice_amount\nvat_tax_rate\nmobile_remote_payment_type\ngratuity_amount\nmdd_field_1\nmdd_field_2\nmdd_field_3\nmdd_field_4\nmdd_field_5\nmdd_field_6\nmdd_field_7\nmdd_field_8\nmdd_field_9\nmdd_field_10\nmdd_field_11\nmdd_field_12\nmdd_field_13\nmdd_field_14\nmdd_field_15\nmdd_field_16\nmdd_field_17\nmdd_field_18\nmdd_field_19\nmdd_field_20\nsupported_countries: US\nAE\nBR\nCA\nCN\nDK\nFI\nFR\nDE\nIN\nJP\nMX\nNO\nSE\nGB\nSG\nLB\nPK\nsupported_cardtypes: visa\nmaster\namerican_express\ndiscover\ndiners_club\njcb\ndankort\nmaestro\nelo\nregions: asia_pacific\neurope\nlatin_america\nnorth_america\nhomepage: http://www.cybersource.com\ndisplay_api_url: https://ics2wsa.ic3.com/commerce/1.x/transactionProcessor\ncompany_name: CyberSource', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}), 
Document(page_content='BG\nBH\nBI\nBJ\nBM\nBN\nBO\nBR\nBS\nBT\nBW\nBY\nBZ\nCA\nCC\nCF\nCH\nCK\nCL\nCM\nCN\nCO\nCR\nCV\nCX\nCY\nCZ\nDE\nDJ\nDK\nDO\nDZ\nEC\nEE\nEG\nEH\nES\nET\nFI\nFJ\nFK\nFM\nFO\nFR\nGA\nGB\nGD\nGE\nGF\nGG\nGH\nGI\nGL\nGM\nGN\nGP\nGQ\nGR\nGT\nGU\nGW\nGY\nHK\nHM\nHN\nHR\nHT\nHU\nID\nIE\nIL\nIM\nIN\nIO\nIS\nIT\nJE\nJM\nJO\nJP\nKE\nKG\nKH\nKI\nKM\nKN\nKR\nKW\nKY\nKZ\nLA\nLC\nLI\nLK\nLS\nLT\nLU\nLV\nMA\nMC\nMD\nME\nMG\nMH\nMK\nML\nMN\nMO\nMP\nMQ\nMR\nMS\nMT\nMU\nMV\nMW\nMX\nMY\nMZ\nNA\nNC\nNE\nNF\nNG\nNI\nNL\nNO\nNP\nNR\nNU\nNZ\nOM\nPA\nPE\nPF\nPH\nPK\nPL\nPN\nPR\nPT\nPW\nPY\nQA\nRE\nRO\nRS\nRU\nRW\nSA\nSB\nSC\nSE\nSG\nSI\nSK\nSL\nSM\nSN\nST\nSV\nSZ\nTC\nTD\nTF\nTG\nTH\nTJ\nTK\nTM\nTO\nTR\nTT\nTV\nTW\nTZ\nUA\nUG\nUS\nUY\nUZ\nVA\nVC\nVE\nVI\nVN\nVU\nWF\nWS\nYE\nYT\nZA\nZM\nsupported_cardtypes: visa\nmaster\namerican_express\ndiscover\njcb\nmaestro\nelo\nnaranja\ncabal\nunionpay\nregions: asia_pacific\neurope\nmiddle_east\nnorth_america\nhomepage: http://worldpay.com\ndisplay_api_url: https://secure.worldpay.com/jsp/merchant/xml/paymentService.jsp\ncompany_name: WorldPay', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}), Document(page_content='gateway_specific_fields: receipt_email\nradar_session_id\nskip_radar_rules\napplication_fee\nstripe_account\nmetadata\nidempotency_key\nreason\nrefund_application_fee\nrefund_fee_amount\nreverse_transfer\naccount_id\ncustomer_id\nvalidate\nmake_default\ncancellation_reason\ncapture_method\nconfirm\nconfirmation_method\ncustomer\ndescription\nmoto\noff_session\non_behalf_of\npayment_method_types\nreturn_email\nreturn_url\nsave_payment_method\nsetup_future_usage\nstatement_descriptor\nstatement_descriptor_suffix\ntransfer_amount\ntransfer_destination\ntransfer_group\napplication_fee_amount\nrequest_three_d_secure\nerror_on_requires_action\nnetwork_transaction_id\nclaim_without_transaction_id\nfulfillment_date\nevent_type\nmodal_challenge\nidempotent_request\nmerchant_reference\ncustomer_reference\nshipping_address_zip\nshipping_from_zip\nshipping_amount\nline_items\nsupported_countries: AE\nAT\nAU\nBE\nBG\nBR\nCA\nCH\nCY\nCZ\nDE\nDK\nEE\nES\nFI\nFR\nGB\nGR\nHK\nHU\nIE\nIN\nIT\nJP\nLT\nLU\nLV\nMT\nMX\nMY\nNL\nNO\nNZ\nPL\nPT\nRO\nSE\nSG\nSI\nSK\nUS\nsupported_cardtypes: visa', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}), Document(page_content='mdd_field_57\nmdd_field_58\nmdd_field_59\nmdd_field_60\nmdd_field_61\nmdd_field_62\nmdd_field_63\nmdd_field_64\nmdd_field_65\nmdd_field_66\nmdd_field_67\nmdd_field_68\nmdd_field_69\nmdd_field_70\nmdd_field_71\nmdd_field_72\nmdd_field_73\nmdd_field_74\nmdd_field_75\nmdd_field_76\nmdd_field_77\nmdd_field_78\nmdd_field_79\nmdd_field_80\nmdd_field_81\nmdd_field_82\nmdd_field_83\nmdd_field_84\nmdd_field_85\nmdd_field_86\nmdd_field_87\nmdd_field_88\nmdd_field_89\nmdd_field_90\nmdd_field_91\nmdd_field_92\nmdd_field_93\nmdd_field_94\nmdd_field_95\nmdd_field_96\nmdd_field_97\nmdd_field_98\nmdd_field_99\nmdd_field_100\nsupported_countries: US\nAE\nBR\nCA\nCN\nDK\nFI\nFR\nDE\nIN\nJP\nMX\nNO\nSE\nGB\nSG\nLB\nPK\nsupported_cardtypes: visa\nmaster\namerican_express\ndiscover\ndiners_club\njcb\nmaestro\nelo\nunion_pay\ncartes_bancaires\nmada\nregions: asia_pacific\neurope\nlatin_america\nnorth_america\nhomepage: http://www.cybersource.com\ndisplay_api_url: https://api.cybersource.com\ncompany_name: CyberSource REST', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'})]PreviousSource CodeNextStripe
550
https://python.langchain.com/docs/integrations/document_loaders/stripe
ComponentsDocument loadersStripeStripeStripe is an Irish-American financial services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.This notebook covers how to load data from the Stripe REST API into a format that can be ingested into LangChain, along with example usage for vectorization.import osfrom langchain.document_loaders import StripeLoaderfrom langchain.indexes import VectorstoreIndexCreatorThe Stripe API requires an access token, which can be found inside of the Stripe dashboard.This document loader also requires a resource option which defines what data you want to load.Following resources are available:balance_transations Documentationcharges Documentationcustomers Documentationevents Documentationrefunds Documentationdisputes Documentationstripe_loader = StripeLoader("charges")# Create a vectorstore retriever from the loader# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more detailsindex = VectorstoreIndexCreator().from_loaders([stripe_loader])stripe_doc_retriever = index.vectorstore.as_retriever()PreviousSpreedlyNextSubtitle
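The example above does not show where the access token is supplied. A minimal sketch, assuming the loader accepts it as an optional access_token argument and that you keep the token in an environment variable (the variable name below is a placeholder):

```python
import os

from langchain.document_loaders import StripeLoader

# Pass the token explicitly alongside the resource to load.
stripe_loader = StripeLoader("charges", access_token=os.environ["STRIPE_ACCESS_TOKEN"])

documents = stripe_loader.load()
print(len(documents))
```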
551
https://python.langchain.com/docs/integrations/document_loaders/subtitle
ComponentsDocument loadersSubtitleSubtitleThe SubRip file format is described on the Matroska multimedia container format website as "perhaps the most basic of all subtitle formats." SubRip (SubRip Text) files are named with the extension .srt, and contain formatted lines of plain text in groups separated by a blank line. Subtitles are numbered sequentially, starting at 1. The timecode format used is hours:minutes:seconds,milliseconds with time units fixed to two zero-padded digits and fractions fixed to three zero-padded digits (00:00:00,000). The fractional separator used is the comma, since the program was written in France.How to load data from subtitle (.srt) filesPlease, download the example .srt file from here.pip install pysrtfrom langchain.document_loaders import SRTLoaderloader = SRTLoader( "example_data/Star_Wars_The_Clone_Wars_S06E07_Crisis_at_the_Heart.srt")docs = loader.load()docs[0].page_content[:100] '<i>Corruption discovered\nat the core of the Banking Clan!</i> <i>Reunited, Rush Clovis\nand Senator A'PreviousStripeNextTelegram
552
https://python.langchain.com/docs/integrations/document_loaders/telegram
ComponentsDocument loadersTelegramTelegramTelegram Messenger is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features.This notebook covers how to load data from Telegram into a format that can be ingested into LangChain.from langchain.document_loaders import TelegramChatFileLoader, TelegramChatApiLoaderloader = TelegramChatFileLoader("example_data/telegram.json")loader.load() [Document(page_content="Henry on 2020-01-01T00:00:02: It's 2020...\n\nHenry on 2020-01-01T00:00:04: Fireworks!\n\nGrace 🧤 🍒 on 2020-01-01T00:00:05: You're a minute late!\n\n", metadata={'source': 'example_data/telegram.json'})]TelegramChatApiLoader loads data directly from any specified chat in Telegram. In order to export the data, you will need to authenticate your Telegram account. You can get the API_HASH and API_ID from https://my.telegram.org/auth?to=apps. chat_entity – recommended to be the entity of a channel.loader = TelegramChatApiLoader( chat_entity="<CHAT_URL>", # recommended to use Entity here api_hash="<API HASH >", api_id="<API_ID>", user_name="", # needed only for caching the session.)loader.load()PreviousSubtitleNextTencent COS Directory
553
https://python.langchain.com/docs/integrations/document_loaders/tencent_cos_directory
ComponentsDocument loadersTencent COS DirectoryOn this pageTencent COS DirectoryThis covers how to load document objects from a Tencent COS Directory.#! pip install cos-python-sdk-v5from langchain.document_loaders import TencentCOSDirectoryLoaderfrom qcloud_cos import CosConfigconf = CosConfig( Region="your cos region", SecretId="your cos secret_id", SecretKey="your cos secret_key",)loader = TencentCOSDirectoryLoader(conf=conf, bucket="your_cos_bucket")loader.load()Specifying a prefix​You can also specify a prefix for more fine-grained control over what files to load.loader = TencentCOSDirectoryLoader(conf=conf, bucket="your_cos_bucket", prefix="fake")loader.load()PreviousTelegramNextTencent COS FileSpecifying a prefix
554
https://python.langchain.com/docs/integrations/document_loaders/tencent_cos_file
ComponentsDocument loadersTencent COS FileTencent COS FileThis covers how to load a document object from a Tencent COS File.#! pip install cos-python-sdk-v5from langchain.document_loaders import TencentCOSFileLoaderfrom qcloud_cos import CosConfigconf = CosConfig( Region="your cos region", SecretId="your cos secret_id", SecretKey="your cos secret_key",)loader = TencentCOSFileLoader(conf=conf, bucket="your_cos_bucket", key="fake.docx")loader.load()PreviousTencent COS DirectoryNextTensorFlow Datasets
555
https://python.langchain.com/docs/integrations/document_loaders/tensorflow_datasets
ComponentsDocument loadersTensorFlow DatasetsOn this pageTensorFlow DatasetsTensorFlow Datasets is a collection of datasets ready to use, with TensorFlow or other Python ML frameworks, such as Jax. All datasets are exposed as tf.data.Datasets, enabling easy-to-use and high-performance input pipelines. To get started see the guide and the list of datasets.This notebook shows how to load TensorFlow Datasets into a Document format that we can use downstream.Installation​You need to install tensorflow and tensorflow-datasets python packages.pip install tensorflowpip install tensorflow-datasetsExample​As an example, we use the mlqa/en dataset.MLQA (Multilingual Question Answering Dataset) is a benchmark dataset for evaluating multilingual question answering performance. The dataset consists of 7 languages: Arabic, German, Spanish, English, Hindi, Vietnamese, Chinese.Homepage: https://github.com/facebookresearch/MLQASource code: tfds.datasets.mlqa.BuilderDownload size: 72.21 MiB# Feature structure of `mlqa/en` dataset:FeaturesDict({ 'answers': Sequence({ 'answer_start': int32, 'text': Text(shape=(), dtype=string), }), 'context': Text(shape=(), dtype=string), 'id': string, 'question': Text(shape=(), dtype=string), 'title': Text(shape=(), dtype=string),})import tensorflow as tfimport tensorflow_datasets as tfds# try directly access this dataset:ds = tfds.load('mlqa/en', split='test')ds = ds.take(1) # Only take a single exampleds <_TakeDataset element_spec={'answers': {'answer_start': TensorSpec(shape=(None,), dtype=tf.int32, name=None), 'text': TensorSpec(shape=(None,), dtype=tf.string, name=None)}, 'context': TensorSpec(shape=(), dtype=tf.string, name=None), 'id': TensorSpec(shape=(), dtype=tf.string, name=None), 'question': TensorSpec(shape=(), dtype=tf.string, name=None), 'title': TensorSpec(shape=(), dtype=tf.string, name=None)}>Now we have to create a custom function to convert dataset sample into a Document.This is a requirement. There is no standard format for the TF datasets that's why we need to make a custom transformation function.Let's use context field as the Document.page_content and place other fields in the Document.metadata.def decode_to_str(item: tf.Tensor) -> str: return item.numpy().decode('utf-8')def mlqaen_example_to_document(example: dict) -> Document: return Document( page_content=decode_to_str(example["context"]), metadata={ "id": decode_to_str(example["id"]), "title": decode_to_str(example["title"]), "question": decode_to_str(example["question"]), "answer": decode_to_str(example["answers"]["text"][0]), }, ) for example in ds: doc = mlqaen_example_to_document(example) print(doc) break page_content='After completing the journey around South America, on 23 February 2006, Queen Mary 2 met her namesake, the original RMS Queen Mary, which is permanently docked at Long Beach, California. Escorted by a flotilla of smaller ships, the two Queens exchanged a "whistle salute" which was heard throughout the city of Long Beach. Queen Mary 2 met the other serving Cunard liners Queen Victoria and Queen Elizabeth 2 on 13 January 2008 near the Statue of Liberty in New York City harbour, with a celebratory fireworks display; Queen Elizabeth 2 and Queen Victoria made a tandem crossing of the Atlantic for the meeting. This marked the first time three Cunard Queens have been present in the same location. Cunard stated this would be the last time these three ships would ever meet, due to Queen Elizabeth 2\'s impending retirement from service in late 2008. 
However this would prove not to be the case, as the three Queens met in Southampton on 22 April 2008. Queen Mary 2 rendezvoused with Queen Elizabeth 2 in Dubai on Saturday 21 March 2009, after the latter ship\'s retirement, while both ships were berthed at Port Rashid. With the withdrawal of Queen Elizabeth 2 from Cunard\'s fleet and its docking in Dubai, Queen Mary 2 became the only ocean liner left in active passenger service.' metadata={'id': '5116f7cccdbf614d60bcd23498274ffd7b1e4ec7', 'title': 'RMS Queen Mary 2', 'question': 'What year did Queen Mary 2 complete her journey around South America?', 'answer': '2006'} 2023-08-03 14:27:08.482983: W tensorflow/core/kernels/data/cache_dataset_ops.cc:854] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.from langchain.schema import Documentfrom langchain.document_loaders import TensorflowDatasetLoaderloader = TensorflowDatasetLoader( dataset_name="mlqa/en", split_name="test", load_max_docs=3, sample_to_document_function=mlqaen_example_to_document, )TensorflowDatasetLoader has these parameters:dataset_name: the name of the dataset to loadsplit_name: the name of the split to load. Defaults to "train".load_max_docs: a limit to the number of loaded documents. Defaults to 100.sample_to_document_function: a function that converts a dataset sample to a Documentdocs = loader.load()len(docs) 2023-08-03 14:27:22.998964: W tensorflow/core/kernels/data/cache_dataset_ops.cc:854] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead. 3docs[0].page_content 'After completing the journey around South America, on 23 February 2006, Queen Mary 2 met her namesake, the original RMS Queen Mary, which is permanently docked at Long Beach, California. Escorted by a flotilla of smaller ships, the two Queens exchanged a "whistle salute" which was heard throughout the city of Long Beach. Queen Mary 2 met the other serving Cunard liners Queen Victoria and Queen Elizabeth 2 on 13 January 2008 near the Statue of Liberty in New York City harbour, with a celebratory fireworks display; Queen Elizabeth 2 and Queen Victoria made a tandem crossing of the Atlantic for the meeting. This marked the first time three Cunard Queens have been present in the same location. Cunard stated this would be the last time these three ships would ever meet, due to Queen Elizabeth 2\'s impending retirement from service in late 2008. However this would prove not to be the case, as the three Queens met in Southampton on 22 April 2008. Queen Mary 2 rendezvoused with Queen Elizabeth 2 in Dubai on Saturday 21 March 2009, after the latter ship\'s retirement, while both ships were berthed at Port Rashid. 
With the withdrawal of Queen Elizabeth 2 from Cunard\'s fleet and its docking in Dubai, Queen Mary 2 became the only ocean liner left in active passenger service.'docs[0].metadata {'id': '5116f7cccdbf614d60bcd23498274ffd7b1e4ec7', 'title': 'RMS Queen Mary 2', 'question': 'What year did Queen Mary 2 complete her journey around South America?', 'answer': '2006'}PreviousTencent COS FileNext2MarkdownInstallationExample
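Because each TFDS dataset exposes its own FeaturesDict, the conversion function has to be rewritten for every dataset you load. As a minimal sketch, here is what a converter for a hypothetical dataset with plain text and label fields might look like (the field names text and label are assumptions, not part of mlqa/en):

```python
import tensorflow as tf
from langchain.schema import Document


def decode_to_str(item: tf.Tensor) -> str:
    # Decode a scalar string tensor into a Python str.
    return item.numpy().decode("utf-8")


def text_label_example_to_document(example: dict) -> Document:
    # Hypothetical mapping: adjust the keys to match the FeaturesDict
    # of the dataset you actually load.
    return Document(
        page_content=decode_to_str(example["text"]),
        metadata={"label": int(example["label"].numpy())},
    )
```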
556
https://python.langchain.com/docs/integrations/document_loaders/tomarkdown
ComponentsDocument loaders2Markdown2Markdown2markdown service transforms website content into structured markdown files.# You will need to get your own API key. See https://2markdown.com/loginapi_key = ""from langchain.document_loaders import ToMarkdownLoaderloader = ToMarkdownLoader.from_api_key( url="https://python.langchain.com/en/latest/", api_key=api_key)docs = loader.load()print(docs[0].page_content) ## Contents - [Getting Started](#getting-started) - [Modules](#modules) - [Use Cases](#use-cases) - [Reference Docs](#reference-docs) - [LangChain Ecosystem](#langchain-ecosystem) - [Additional Resources](#additional-resources) ## Welcome to LangChain [\#](\#welcome-to-langchain "Permalink to this headline") **LangChain** is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model, but will also be: 1. _Data-aware_: connect a language model to other sources of data 2. _Agentic_: allow a language model to interact with its environment The LangChain framework is designed around these principles. This is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see [here](https://docs.langchain.com/docs/). For the JavaScript documentation, see [here](https://js.langchain.com/docs/). ## Getting Started [\#](\#getting-started "Permalink to this headline") How to get started using LangChain to create an Language Model application. - [Quickstart Guide](https://python.langchain.com/en/latest/getting_started/getting_started.html) Concepts and terminology. - [Concepts and terminology](https://python.langchain.com/en/latest/getting_started/concepts.html) Tutorials created by community experts and presented on YouTube. - [Tutorials](https://python.langchain.com/en/latest/getting_started/tutorials.html) ## Modules [\#](\#modules "Permalink to this headline") These modules are the core abstractions which we view as the building blocks of any LLM-powered application. For each module LangChain provides standard, extendable interfaces. LanghChain also provides external integrations and even end-to-end implementations for off-the-shelf use. The docs for each module contain quickstart examples, how-to guides, reference docs, and conceptual guides. The modules are (from least to most complex): - [Models](https://python.langchain.com/docs/modules/model_io/models/): Supported model types and integrations. - [Prompts](https://python.langchain.com/en/latest/modules/prompts.html): Prompt management, optimization, and serialization. - [Memory](https://python.langchain.com/en/latest/modules/memory.html): Memory refers to state that is persisted between calls of a chain/agent. - [Indexes](https://python.langchain.com/en/latest/modules/data_connection.html): Language models become much more powerful when combined with application-specific data - this module contains interfaces and integrations for loading, querying and updating external data. - [Chains](https://python.langchain.com/en/latest/modules/chains.html): Chains are structured sequences of calls (to an LLM or to a different utility). - [Agents](https://python.langchain.com/en/latest/modules/agents.html): An agent is a Chain in which an LLM, given a high-level directive and a set of tools, repeatedly decides an action, executes the action and observes the outcome until the high-level directive is complete. 
- [Callbacks](https://python.langchain.com/en/latest/modules/callbacks/getting_started.html): Callbacks let you log and stream the intermediate steps of any chain, making it easy to observe, debug, and evaluate the internals of an application. ## Use Cases [\#](\#use-cases "Permalink to this headline") Best practices and built-in implementations for common LangChain use cases: - [Autonomous Agents](https://python.langchain.com/en/latest/use_cases/autonomous_agents.html): Autonomous agents are long-running agents that take many steps in an attempt to accomplish an objective. Examples include AutoGPT and BabyAGI. - [Agent Simulations](https://python.langchain.com/en/latest/use_cases/agent_simulations.html): Putting agents in a sandbox and observing how they interact with each other and react to events can be an effective way to evaluate their long-range reasoning and planning abilities. - [Personal Assistants](https://python.langchain.com/en/latest/use_cases/personal_assistants.html): One of the primary LangChain use cases. Personal assistants need to take actions, remember interactions, and have knowledge about your data. - [Question Answering](https://python.langchain.com/en/latest/use_cases/question_answering.html): Another common LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer. - [Chatbots](https://python.langchain.com/en/latest/use_cases/chatbots.html): Language models love to chat, making this a very natural use of them. - [Querying Tabular Data](https://python.langchain.com/en/latest/use_cases/tabular.html): Recommended reading if you want to use language models to query structured data (CSVs, SQL, dataframes, etc). - [Code Understanding](https://python.langchain.com/en/latest/use_cases/code.html): Recommended reading if you want to use language models to analyze code. - [Interacting with APIs](https://python.langchain.com/en/latest/use_cases/apis.html): Enabling language models to interact with APIs is extremely powerful. It gives them access to up-to-date information and allows them to take actions. - [Extraction](https://python.langchain.com/en/latest/use_cases/extraction.html): Extract structured information from text. - [Summarization](https://python.langchain.com/en/latest/use_cases/summarization.html): Compressing longer documents. A type of Data-Augmented Generation. - [Evaluation](https://python.langchain.com/en/latest/use_cases/evaluation.html): Generative models are hard to evaluate with traditional metrics. One promising approach is to use language models themselves to do the evaluation. ## Reference Docs [\#](\#reference-docs "Permalink to this headline") Full documentation on all methods, classes, installation methods, and integration setups for LangChain. - [Reference Documentation](https://python.langchain.com/en/latest/reference.html) ## LangChain Ecosystem [\#](\#langchain-ecosystem "Permalink to this headline") Guides for how other companies/products can be used with LangChain. - [LangChain Ecosystem](https://python.langchain.com/en/latest/ecosystem.html) ## Additional Resources [\#](\#additional-resources "Permalink to this headline") Additional resources we think may be useful as you develop your application! - [LangChainHub](https://github.com/hwchase17/langchain-hub): The LangChainHub is a place to share and explore other prompts, chains, and agents. 
- [Gallery](https://python.langchain.com/en/latest/additional_resources/gallery.html): A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications. - [Deployments](https://python.langchain.com/en/latest/additional_resources/deployments.html): A collection of instructions, code snippets, and template repositories for deploying LangChain apps. - [Tracing](https://python.langchain.com/en/latest/additional_resources/tracing.html): A guide on using tracing in LangChain to visualize the execution of chains and agents. - [Model Laboratory](https://python.langchain.com/en/latest/additional_resources/model_laboratory.html): Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so. - [Discord](https://discord.gg/6adMQxSpJS): Join us on our Discord to discuss all things LangChain! - [YouTube](https://python.langchain.com/en/latest/additional_resources/youtube.html): A collection of the LangChain tutorials and videos. - [Production Support](https://forms.gle/57d8AmXBYp8PP8tZA): As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.PreviousTensorFlow DatasetsNextTOML
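ToMarkdownLoader takes a single url, so converting several pages means constructing one loader per page. A minimal sketch, reusing the from_api_key constructor shown above (the URL list is illustrative):

```python
from langchain.document_loaders import ToMarkdownLoader

api_key = ""  # your 2markdown API key

urls = [
    "https://python.langchain.com/en/latest/",
    "https://python.langchain.com/en/latest/getting_started/getting_started.html",
]

docs = []
for url in urls:
    loader = ToMarkdownLoader.from_api_key(url=url, api_key=api_key)
    docs.extend(loader.load())  # one markdown Document per page
```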
557
https://python.langchain.com/docs/integrations/document_loaders/toml
ComponentsDocument loadersTOMLTOMLTOML is a file format for configuration files. It is intended to be easy to read and write, and is designed to map unambiguously to a dictionary. Its specification is open-source. TOML is implemented in many programming languages. The name TOML is an acronym for "Tom's Obvious, Minimal Language", referring to its creator, Tom Preston-Werner.If you need to load TOML files, use the TomlLoader.from langchain.document_loaders import TomlLoaderloader = TomlLoader("example_data/fake_rule.toml")rule = loader.load()rule [Document(page_content='{"internal": {"creation_date": "2023-05-01", "updated_date": "2022-05-01", "release": ["release_type"], "min_endpoint_version": "some_semantic_version", "os_list": ["operating_system_list"]}, "rule": {"uuid": "some_uuid", "name": "Fake Rule Name", "description": "Fake description of rule", "query": "process where process.name : \\"somequery\\"\\n", "threat": [{"framework": "MITRE ATT&CK", "tactic": {"name": "Execution", "id": "TA0002", "reference": "https://attack.mitre.org/tactics/TA0002/"}}]}}', metadata={'source': 'example_data/fake_rule.toml'})]Previous2MarkdownNextTrello
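The example output above suggests that TomlLoader serializes the parsed TOML into page_content as a JSON-like string. Assuming that holds, a short sketch for recovering a dictionary from the loaded document:

```python
import json

from langchain.document_loaders import TomlLoader

loader = TomlLoader("example_data/fake_rule.toml")
docs = loader.load()

# Assumes the page_content is valid JSON, as the output above suggests.
rule = json.loads(docs[0].page_content)
print(rule["rule"]["name"])  # "Fake Rule Name" in the example file
```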
558
https://python.langchain.com/docs/integrations/document_loaders/trello
ComponentsDocument loadersTrelloOn this pageTrelloTrello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a "board" where users can create lists and cards to represent their tasks and activities.The TrelloLoader allows you to load cards from a Trello board and is implemented on top of py-trello. It currently supports api_key/token authentication only.Credentials generation: https://trello.com/power-ups/admin/ Click on the manual token generation link to get the token.To specify the API key and token, you can either set the environment variables TRELLO_API_KEY and TRELLO_TOKEN or pass api_key and token directly into the from_credentials convenience constructor method.This loader lets you provide the board name and pulls the corresponding cards into Document objects.Notice that the board "name" is also called "title" in the official documentation:https://support.atlassian.com/trello/docs/changing-a-boards-title-and-description/You can also specify several load parameters to include or exclude different fields, both in the document page_content and in the metadata.Features​Load cards from a Trello board.Filter cards based on their status (open or closed).Include card names, comments, and checklists in the loaded documents.Customize the additional metadata fields to include in the document.By default, all card fields are included in the full-text page_content and in the metadata accordingly.#!pip install py-trello beautifulsoup4 lxml# If you have already set the API key and token using environment variables,# you can skip this cell and comment out the `api_key` and `token` named arguments# in the initialization steps below.from getpass import getpassAPI_KEY = getpass()TOKEN = getpass() ········ ········from langchain.document_loaders import TrelloLoader# Get the open cards from "Awesome Board"loader = TrelloLoader.from_credentials( "Awesome Board", api_key=API_KEY, token=TOKEN, card_filter="open",)documents = loader.load()print(documents[0].page_content)print(documents[0].metadata) Review Tech partner pages Comments: {'title': 'Review Tech partner pages', 'id': '6475357890dc8d17f73f2dcc', 'url': 'https://trello.com/c/b0OTZwkZ/1-review-tech-partner-pages', 'labels': ['Demand Marketing'], 'list': 'Done', 'closed': False, 'due_date': ''}# Get all the cards from "Awesome Board" but only include the# card list(column) as extra metadata.loader = TrelloLoader.from_credentials( "Awesome Board", api_key=API_KEY, token=TOKEN, extra_metadata=("list"),)documents = loader.load()print(documents[0].page_content)print(documents[0].metadata) Review Tech partner pages Comments: {'title': 'Review Tech partner pages', 'id': '6475357890dc8d17f73f2dcc', 'url': 'https://trello.com/c/b0OTZwkZ/1-review-tech-partner-pages', 'list': 'Done'}# Get the cards from "Another Board" and exclude the card name,# checklist and comments from the Document page_content text.loader = TrelloLoader.from_credentials( "test", api_key=API_KEY, token=TOKEN, include_card_name=False, include_checklist=False, include_comments=False,)documents = loader.load()print("Document: " + documents[0].page_content)print(documents[0].metadata)PreviousTOMLNextTSVFeatures
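As noted above, the credentials can also come from the TRELLO_API_KEY and TRELLO_TOKEN environment variables instead of explicit arguments. A minimal sketch of that variant (the values are placeholders):

```python
import os

from langchain.document_loaders import TrelloLoader

# Placeholder values; in practice set these in your shell or secrets manager.
os.environ["TRELLO_API_KEY"] = "your-api-key"
os.environ["TRELLO_TOKEN"] = "your-token"

# With the environment variables set, api_key/token can be omitted.
loader = TrelloLoader.from_credentials("Awesome Board", card_filter="open")
documents = loader.load()
```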
559
https://python.langchain.com/docs/integrations/document_loaders/tsv
ComponentsDocument loadersTSVOn this pageTSVA tab-separated values (TSV) file is a simple, text-based file format for storing tabular data.[3] Records are separated by newlines, and values within a record are separated by tab characters.UnstructuredTSVLoader​You can also load the table using the UnstructuredTSVLoader. One advantage of using UnstructuredTSVLoader is that if you use it in "elements" mode, an HTML representation of the table will be available in the metadata.from langchain.document_loaders.tsv import UnstructuredTSVLoaderloader = UnstructuredTSVLoader( file_path="example_data/mlb_teams_2012.csv", mode="elements")docs = loader.load()print(docs[0].metadata["text_as_html"]) <table border="1" class="dataframe"> <tbody> <tr> <td>Nationals, 81.34, 98</td> </tr> <tr> <td>Reds, 82.20, 97</td> </tr> <tr> <td>Yankees, 197.96, 95</td> </tr> <tr> <td>Giants, 117.62, 94</td> </tr> <tr> <td>Braves, 83.31, 94</td> </tr> <tr> <td>Athletics, 55.37, 94</td> </tr> <tr> <td>Rangers, 120.51, 93</td> </tr> <tr> <td>Orioles, 81.43, 93</td> </tr> <tr> <td>Rays, 64.17, 90</td> </tr> <tr> <td>Angels, 154.49, 89</td> </tr> <tr> <td>Tigers, 132.30, 88</td> </tr> <tr> <td>Cardinals, 110.30, 88</td> </tr> <tr> <td>Dodgers, 95.14, 86</td> </tr> <tr> <td>White Sox, 96.92, 85</td> </tr> <tr> <td>Brewers, 97.65, 83</td> </tr> <tr> <td>Phillies, 174.54, 81</td> </tr> <tr> <td>Diamondbacks, 74.28, 81</td> </tr> <tr> <td>Pirates, 63.43, 79</td> </tr> <tr> <td>Padres, 55.24, 76</td> </tr> <tr> <td>Mariners, 81.97, 75</td> </tr> <tr> <td>Mets, 93.35, 74</td> </tr> <tr> <td>Blue Jays, 75.48, 73</td> </tr> <tr> <td>Royals, 60.91, 72</td> </tr> <tr> <td>Marlins, 118.07, 69</td> </tr> <tr> <td>Red Sox, 173.18, 69</td> </tr> <tr> <td>Indians, 78.43, 68</td> </tr> <tr> <td>Twins, 94.08, 66</td> </tr> <tr> <td>Rockies, 78.06, 64</td> </tr> <tr> <td>Cubs, 88.19, 61</td> </tr> <tr> <td>Astros, 60.65, 55</td> </tr> </tbody> </table>PreviousTrelloNextTwitterUnstructuredTSVLoader
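For comparison, omitting mode="elements" should return the table flattened into plain text without the text_as_html metadata. A minimal sketch, assuming the loader defaults to a single combined document as other Unstructured loaders do:

```python
from langchain.document_loaders.tsv import UnstructuredTSVLoader

# Default mode: the whole table comes back as one plain-text Document,
# without the "text_as_html" metadata shown above.
loader = UnstructuredTSVLoader(file_path="example_data/mlb_teams_2012.csv")
docs = loader.load()
print(docs[0].page_content[:200])
```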
560
https://python.langchain.com/docs/integrations/document_loaders/twitter
ComponentsDocument loadersTwitterTwitterTwitter is an online social media and social networking service.This loader fetches the text from the Tweets of a list of Twitter users, using the tweepy Python package. You must initialize the loader with your Twitter API token, and you need to pass in the Twitter username you want to extract.from langchain.document_loaders import TwitterTweetLoader#!pip install tweepyloader = TwitterTweetLoader.from_bearer_token( oauth2_bearer_token="YOUR BEARER TOKEN", twitter_users=["elonmusk"], number_tweets=50, # Default value is 100)# Or load from access token and consumer keys# loader = TwitterTweetLoader.from_secrets(# access_token='YOUR ACCESS TOKEN',# access_token_secret='YOUR ACCESS TOKEN SECRET',# consumer_key='YOUR CONSUMER KEY',# consumer_secret='YOUR CONSUMER SECRET',# twitter_users=['elonmusk'],# number_tweets=50,# )documents = loader.load()documents[:5] [Document(page_content='@MrAndyNgo @REI One store after another shutting down', metadata={'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': 
None, 'translator_type': 'none', 'withheld_in_countries': []}}), Document(page_content='@KanekoaTheGreat @joshrogin @glennbeck Large ships are fundamentally vulnerable to ballistic (hypersonic) missiles', metadata={'created_at': 'Tue Apr 18 03:43:25 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}), Document(page_content='@KanekoaTheGreat The Golden Rule', metadata={'created_at': 'Tue Apr 18 03:37:17 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 
1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}), Document(page_content='@KanekoaTheGreat 🧐', metadata={'created_at': 'Tue Apr 18 03:35:48 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 
'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}), Document(page_content='@TRHLofficial What’s he talking about and why is it sponsored by Erik’s son?', metadata={'created_at': 'Tue Apr 18 03:32:17 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 
'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}})]PreviousTSVNextUnstructured File
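Each loaded Document keeps the tweet text in page_content and the raw tweet payload in metadata (including created_at, as the output above shows). A small sketch that projects the documents down to timestamp and text:

```python
# `documents` is the list returned by loader.load() above.
for doc in documents[:5]:
    print(doc.metadata["created_at"], "-", doc.page_content)
```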
561
https://python.langchain.com/docs/integrations/document_loaders/unstructured_file
ComponentsDocument loadersUnstructured FileOn this pageUnstructured FileThis notebook covers how to use the Unstructured package to load files of many types. Unstructured currently supports loading text files, PowerPoint files, HTML, PDFs, images, and more.# # Install packagepip install "unstructured[all-docs]"# # Install other dependencies# # https://github.com/Unstructured-IO/unstructured/blob/main/docs/source/installing.rst# !brew install libmagic# !brew install poppler# !brew install tesseract# # If parsing xml / html documents:# !brew install libxml2# !brew install libxslt# import nltk# nltk.download('punkt')from langchain.document_loaders import UnstructuredFileLoaderloader = UnstructuredFileLoader("./example_data/state_of_the_union.txt")docs = loader.load()docs[0].page_content[:400] 'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.\n\nLast year COVID-19 kept us apart. This year we are finally together again.\n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.\n\nWith a duty to one another to the American people to the Constit'Retain Elements​Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".loader = UnstructuredFileLoader( "./example_data/state_of_the_union.txt", mode="elements")docs = loader.load()docs[:5] [Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='Last year COVID-19 kept us apart. This year we are finally together again.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='With a duty to one another to the American people to the Constitution.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='And with an unwavering resolve that freedom will always triumph over tyranny.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)]Define a Partitioning Strategy​Unstructured document loaders allow users to pass in a strategy parameter that lets Unstructured know how to partition the document. Currently supported strategies are "hi_res" (the default) and "fast". Hi-res partitioning strategies are more accurate but take longer to process. Fast strategies partition the document more quickly but trade off accuracy. Not all document types have separate hi-res and fast partitioning strategies. For those document types, the strategy kwarg is ignored. In some cases, the hi-res strategy will fall back to fast if a dependency is missing (i.e., a model for document partitioning).
You can see how to apply a strategy to an UnstructuredFileLoader below.from langchain.document_loaders import UnstructuredFileLoaderloader = UnstructuredFileLoader( "layout-parser-paper-fast.pdf", strategy="fast", mode="elements")docs = loader.load()docs[:5] [Document(page_content='1', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='0', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'Title'}, lookup_index=0)]PDF Example​Processing PDF documents works exactly the same way. Unstructured detects the file type and extracts the same types of elements. The modes of operation are: single, where the text from all elements is combined into one document (the default); elements, which keeps the individual elements separate; and paged, where only the texts from each page are combined.wget https://raw.githubusercontent.com/Unstructured-IO/unstructured/main/example-docs/layout-parser-paper.pdf -P "../../"loader = UnstructuredFileLoader( "./example_data/layout-parser-paper.pdf", mode="elements")docs = loader.load()docs[:5] [Document(page_content='LayoutParser : A Unified Toolkit for Deep Learning Based Document Image Analysis', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Zejiang Shen 1 ( (ea)\n ), Ruochen Zhang 2 , Melissa Dell 3 , Benjamin Charles Germain Lee 4 , Jacob Carlson 3 , and Weining Li 5', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Allen Institute for AI shannons@allenai.org', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Brown University ruochen zhang@brown.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Harvard University { melissadell,jacob carlson } @fas.harvard.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0)]If you need to post-process the unstructured elements after extraction, you can pass in a list of str -> str functions to the post_processors kwarg when you instantiate the UnstructuredFileLoader. This applies to other Unstructured loaders as well.
Below is an example.from langchain.document_loaders import UnstructuredFileLoaderfrom unstructured.cleaners.core import clean_extra_whitespaceloader = UnstructuredFileLoader( "./example_data/layout-parser-paper.pdf", mode="elements", post_processors=[clean_extra_whitespace],)docs = loader.load()docs[:5] [Document(page_content='LayoutParser: A Unified Toolkit for Deep Learning Based Document Image Analysis', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((157.62199999999999, 114.23496279999995), (157.62199999999999, 146.5141628), (457.7358962799999, 146.5141628), (457.7358962799999, 114.23496279999995)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'Title'}), Document(page_content='Zejiang Shen1 ((cid:0)), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain Lee4, Jacob Carlson3, and Weining Li5', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((134.809, 168.64029940800003), (134.809, 192.2517444), (480.5464199080001, 192.2517444), (480.5464199080001, 168.64029940800003)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'UncategorizedText'}), Document(page_content='1 Allen Institute for AI shannons@allenai.org 2 Brown University ruochen zhang@brown.edu 3 Harvard University {melissadell,jacob carlson}@fas.harvard.edu 4 University of Washington bcgl@cs.washington.edu 5 University of Waterloo w422li@uwaterloo.ca', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((207.23000000000002, 202.57205439999996), (207.23000000000002, 311.8195408), (408.12676, 311.8195408), (408.12676, 202.57205439999996)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'UncategorizedText'}), Document(page_content='1 2 0 2', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 213.36), (16.34, 253.36), (36.34, 253.36), (36.34, 213.36)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'UncategorizedText'}), Document(page_content='n u J', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 258.36), (16.34, 286.14), (36.34, 286.14), (36.34, 258.36)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'Title'})]Unstructured API​If you want to get up and running with less set up, you can simply run pip install unstructured and use UnstructuredAPIFileLoader or UnstructuredAPIFileIOLoader. That will process your document using the hosted Unstructured API. You can generate a free Unstructured API key here. The Unstructured documentation page will have instructions on how to generate an API key once they’re available. 
Check out the instructions here if you’d like to self-host the Unstructured API or run it locally.from langchain.document_loaders import UnstructuredAPIFileLoaderfilenames = ["example_data/fake.docx", "example_data/fake-email.eml"]loader = UnstructuredAPIFileLoader( file_path=filenames[0], api_key="FAKE_API_KEY",)docs = loader.load()docs[0] Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.docx'})You can also batch multiple files through the Unstructured API in a single API call using UnstructuredAPIFileLoader.loader = UnstructuredAPIFileLoader( file_path=filenames, api_key="FAKE_API_KEY",)docs = loader.load()docs[0] Document(page_content='Lorem ipsum dolor sit amet.\n\nThis is a test email to use for unit tests.\n\nImportant points:\n\nRoses are red\n\nViolets are blue', metadata={'source': ['example_data/fake.docx', 'example_data/fake-email.eml']})PreviousTwitterNextURLRetain ElementsDefine a Partitioning StrategyPDF ExampleUnstructured API
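A post processor is just a str -> str callable, so you can supply your own alongside (or instead of) the helpers shipped with unstructured. A minimal sketch with a hypothetical whitespace-collapsing function:

```python
from langchain.document_loaders import UnstructuredFileLoader


def collapse_whitespace(text: str) -> str:
    # Hypothetical post processor: collapse runs of whitespace into single spaces.
    return " ".join(text.split())


loader = UnstructuredFileLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="elements",
    post_processors=[collapse_whitespace],
)
docs = loader.load()
```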
562
https://python.langchain.com/docs/integrations/document_loaders/url
ComponentsDocument loadersURLOn this pageURLThis covers how to load HTML documents from a list of URLs into a document format that we can use downstream.from langchain.document_loaders import UnstructuredURLLoaderurls = [ "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023", "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-9-2023",]Pass in ssl_verify=False together with headers=headers to get past SSL verification errors.loader = UnstructuredURLLoader(urls=urls)data = loader.load()Selenium URL LoaderThis covers how to load HTML documents from a list of URLs using the SeleniumURLLoader.Using Selenium allows us to load pages that require JavaScript to render.Setup​To use the SeleniumURLLoader, you will need to install selenium and unstructured.from langchain.document_loaders import SeleniumURLLoaderurls = [ "https://www.youtube.com/watch?v=dQw4w9WgXcQ", "https://goo.gl/maps/NDSHwePEyaHMFGwh8",]loader = SeleniumURLLoader(urls=urls)data = loader.load()Playwright URL LoaderThis covers how to load HTML documents from a list of URLs using the PlaywrightURLLoader.As in the Selenium case, Playwright allows us to load pages that need JavaScript to render.Setup​To use the PlaywrightURLLoader, you will need to install playwright and unstructured. Additionally, you will need to install the Playwright Chromium browser:# Install playwrightpip install "playwright"pip install "unstructured"playwright installfrom langchain.document_loaders import PlaywrightURLLoaderurls = [ "https://www.youtube.com/watch?v=dQw4w9WgXcQ", "https://goo.gl/maps/NDSHwePEyaHMFGwh8",]loader = PlaywrightURLLoader(urls=urls, remove_selectors=["header", "footer"])data = loader.load()PreviousUnstructured FileNextWeatherSetupSetup
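The note above mentions ssl_verify=False and headers=headers for working around SSL verification errors without showing the call. A minimal sketch, assuming UnstructuredURLLoader accepts both as keyword arguments (the header value is a placeholder):

```python
from langchain.document_loaders import UnstructuredURLLoader

urls = [
    "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023",
]

# Placeholder headers; adjust for the site you are fetching.
headers = {"User-Agent": "Mozilla/5.0"}

# Disabling verification is a workaround for SSL errors; use with care.
loader = UnstructuredURLLoader(urls=urls, ssl_verify=False, headers=headers)
data = loader.load()
```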
563
https://python.langchain.com/docs/integrations/document_loaders/weather
ComponentsDocument loadersWeatherWeatherOpenWeatherMap is an open-source weather service provider.This loader fetches weather data from OpenWeatherMap's OneCall API, using the pyowm Python package. You must initialize the loader with your OpenWeatherMap API token and the names of the cities you want the weather data for.from langchain.document_loaders import WeatherDataLoader#!pip install pyowm# Set the API key either by passing it in to the constructor directly# or by setting the environment variable "OPENWEATHERMAP_API_KEY".from getpass import getpassOPENWEATHERMAP_API_KEY = getpass()loader = WeatherDataLoader.from_params( ["chennai", "vellore"], openweathermap_api_key=OPENWEATHERMAP_API_KEY)documents = loader.load()documentsPreviousURLNextWebBaseLoader
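Per the comment in the snippet above, the key can also be supplied through the OPENWEATHERMAP_API_KEY environment variable, in which case it does not need to be passed to from_params. A minimal sketch (the value is a placeholder):

```python
import os

from langchain.document_loaders import WeatherDataLoader

os.environ["OPENWEATHERMAP_API_KEY"] = "your-openweathermap-api-key"  # placeholder

# With the environment variable set, the key argument can be omitted.
loader = WeatherDataLoader.from_params(["chennai", "vellore"])
documents = loader.load()
```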
564
https://python.langchain.com/docs/integrations/document_loaders/web_base
ComponentsDocument loadersWebBaseLoaderOn this pageWebBaseLoaderThis covers how to use WebBaseLoader to load all text from HTML webpages into a document format that we can use downstream. For more custom logic for loading webpages look at some child class examples such as IMSDbLoader, AZLyricsLoader, and CollegeConfidentialLoaderfrom langchain.document_loaders import WebBaseLoaderloader = WebBaseLoader("https://www.espn.com/")To bypass SSL verification errors during fetching, you can set the "verify" option:loader.requests_kwargs = {'verify':False}data = loader.load()data [Document(page_content="\n\n\n\n\n\n\n\n\nESPN - Serving Sports Fans. Anytime. Anywhere.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Skip to main content\n \n\n Skip to navigation\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n<\n\n>\n\n\n\n\n\n\n\n\n\nMenuESPN\n\n\nSearch\n\n\n\nscores\n\n\n\nNFLNBANCAAMNCAAWNHLSoccer…MLBNCAAFGolfTennisSports BettingBoxingCFLNCAACricketF1HorseLLWSMMANASCARNBA G LeagueOlympic SportsRacingRN BBRN FBRugbyWNBAWorld Baseball ClassicWWEX GamesXFLMore ESPNFantasyListenWatchESPN+\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\nSUBSCRIBE NOW\n\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\n\n\n\n\nFavorites\n\n\n\n\n\n\n Manage Favorites\n \n\n\n\nCustomize ESPNSign UpLog InESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nAre you ready for Opening Day? Here's your guide to MLB's offseason chaosWait, Jacob deGrom is on the Rangers now? Xander Bogaerts and Trea Turner signed where? And what about Carlos Correa? Yeah, you're going to need to read up before Opening Day.12hESPNIllustration by ESPNEverything you missed in the MLB offseason3h2:33World Series odds, win totals, props for every teamPlay fantasy baseball for free!TOP HEADLINESQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersLAMAR WANTS OUT OF BALTIMOREMarcus Spears identifies the two teams that need Lamar Jackson the most8h2:00Would Lamar sit out? Will Ravens draft a QB? Jackson trade request insightsLamar Jackson has asked Baltimore to trade him, but Ravens coach John Harbaugh hopes the QB will be back.3hJamison HensleyBallard, Colts will consider trading for QB JacksonJackson to Indy? Washington? 
Barnwell ranks the QB's trade fitsSNYDER'S TUMULTUOUS 24-YEAR RUNHow Washington’s NFL franchise sank on and off the field under owner Dan SnyderSnyder purchased one of the NFL's marquee franchises in 1999. Twenty-four years later, and with the team up for sale, he leaves a legacy of on-field futility and off-field scandal.13hJohn KeimESPNIOWA STAR STEPS UP AGAINJ-Will: Caitlin Clark is the biggest brand in college sports right now8h0:47'The better the opponent, the better she plays': Clark draws comparisons to TaurasiCaitlin Clark's performance on Sunday had longtime observers going back decades to find comparisons.16hKevin PeltonWOMEN'S ELITE EIGHT SCOREBOARDMONDAY'S GAMESCheck your bracket!NBA DRAFTHow top prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four teams, and ESPN's Jonathan Givony recaps the players who saw their NBA draft stock change.11hJonathan GivonyAndy Lyons/Getty ImagesTALKING BASKETBALLWhy AD needs to be more assertive with LeBron on the court10h1:33Why Perk won't blame Kyrie for Mavs' woes8h1:48WHERE EVERY TEAM STANDSNew NFL Power Rankings: Post-free-agency 1-32 poll, plus underrated offseason movesThe free agent frenzy has come and gone. Which teams have improved their 2023 outlook, and which teams have taken a hit?12hNFL Nation reportersIllustration by ESPNTHE BUCK STOPS WITH BELICHICKBruschi: Fair to criticize Bill Belichick for Patriots' struggles10h1:27 Top HeadlinesQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersFavorites FantasyManage FavoritesFantasy HomeCustomize ESPNSign UpLog InMarch Madness LiveESPNMarch Madness LiveWatch every men's NCAA tournament game live! ICYMI1:42Austin Peay's coach, pitcher and catcher all ejected after retaliation pitchAustin Peay's pitcher, catcher and coach were all ejected after a pitch was thrown at Liberty's Nathan Keeter, who earlier in the game hit a home run and celebrated while running down the third-base line. Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the 2023 Men's Tournament Challenge, which you can follow throughout the Big Dance. Women's Tournament ChallengeIllustration by ESPNWomen's Tournament ChallengeCheck your bracket(s) in the 2023 Women's Tournament Challenge, which you can follow throughout the Big Dance. Best of ESPN+AP Photo/Lynne SladkyFantasy Baseball ESPN+ Cheat Sheet: Sleepers, busts, rookies and closersYou've read their names all preseason long, it'd be a shame to forget them on draft day. The ESPN+ Cheat Sheet is one way to make sure that doesn't happen.Steph Chambers/Getty ImagesPassan's 2023 MLB season preview: Bold predictions and moreOpening Day is just over a week away -- and Jeff Passan has everything you need to know covered from every possible angle.Photo by Bob Kupbens/Icon Sportswire2023 NFL free agency: Best team fits for unsigned playersWhere could Ezekiel Elliott land? Let's match remaining free agents to teams and find fits for two trade candidates.Illustration by ESPN2023 NFL mock draft: Mel Kiper's first-round pick predictionsMel Kiper Jr. makes his predictions for Round 1 of the NFL draft, including projecting a trade in the top five. 
Trending NowAnne-Marie Sorvin-USA TODAY SBoston Bruins record tracker: Wins, points, milestonesThe B's are on pace for NHL records in wins and points, along with some individual superlatives as well. Follow along here with our updated tracker.Mandatory Credit: William Purnell-USA TODAY Sports2023 NFL full draft order: AFC, NFC team picks for all roundsStarting with the Carolina Panthers at No. 1 overall, here's the entire 2023 NFL draft broken down round by round. How to Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results, bracket, how to watchThe matchups in Tampa promise to be thrillers, featuring plenty of star power, high-octane offense and stellar defense.(AP Photo/Koji Sasahara, File)How to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN, ESPN+Here's everything you need to know about how to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN and ESPN+.Hailie Lynch/XFLHow to watch the XFL: 2023 schedule, teams, players, news, moreEvery XFL game will be streamed on ESPN+. Find out when and where else you can watch the eight teams compete. Sign up to play the #1 Fantasy Baseball GameReactivate A LeagueCreate A LeagueJoin a Public LeaguePractice With a Mock DraftSports BettingAP Photo/Mike KropfMarch Madness betting 2023: Bracket odds, lines, tips, moreThe 2023 NCAA tournament brackets have finally been released, and we have everything you need to know to make a bet on all of the March Madness games. Sign up to play the #1 Fantasy game!Create A LeagueJoin Public LeagueReactivateMock Draft Now\n\nESPN+\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\nESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nTerms of UsePrivacy PolicyYour US State Privacy RightsChildren's Online Privacy PolicyInterest-Based AdsAbout Nielsen MeasurementDo Not Sell or Share My Personal InformationContact UsDisney Ad Sales SiteWork for ESPNCopyright: © ESPN Enterprises, Inc. All rights reserved.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", lookup_str='', metadata={'source': 'https://www.espn.com/'}, lookup_index=0)]"""# Use this piece of code for testing new custom BeautifulSoup parsersimport requestsfrom bs4 import BeautifulSouphtml_doc = requests.get("{INSERT_NEW_URL_HERE}")soup = BeautifulSoup(html_doc.text, 'html.parser')# Beautiful soup logic to be exported to langchain.document_loaders.webpage.py# Example: transcript = soup.select_one("td[class='scrtext']").text# BS4 documentation can be found here: https://www.crummy.com/software/BeautifulSoup/bs4/doc/""";Loading multiple webpages​You can also load multiple webpages at once by passing in a list of urls to the loader. 
This will return a list of documents in the same order as the urls passed in.loader = WebBaseLoader(["https://www.espn.com/", "https://google.com"])docs = loader.load()docs [Document(page_content="\n\n\n\n\n\n\n\n\nESPN - Serving Sports Fans. Anytime. Anywhere.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Skip to main content\n \n\n Skip to navigation\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n<\n\n>\n\n\n\n\n\n\n\n\n\nMenuESPN\n\n\nSearch\n\n\n\nscores\n\n\n\nNFLNBANCAAMNCAAWNHLSoccer…MLBNCAAFGolfTennisSports BettingBoxingCFLNCAACricketF1HorseLLWSMMANASCARNBA G LeagueOlympic SportsRacingRN BBRN FBRugbyWNBAWorld Baseball ClassicWWEX GamesXFLMore ESPNFantasyListenWatchESPN+\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\nSUBSCRIBE NOW\n\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\n\n\n\n\nFavorites\n\n\n\n\n\n\n Manage Favorites\n \n\n\n\nCustomize ESPNSign UpLog InESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nAre you ready for Opening Day? Here's your guide to MLB's offseason chaosWait, Jacob deGrom is on the Rangers now? Xander Bogaerts and Trea Turner signed where? And what about Carlos Correa? Yeah, you're going to need to read up before Opening Day.12hESPNIllustration by ESPNEverything you missed in the MLB offseason3h2:33World Series odds, win totals, props for every teamPlay fantasy baseball for free!TOP HEADLINESQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersLAMAR WANTS OUT OF BALTIMOREMarcus Spears identifies the two teams that need Lamar Jackson the most7h2:00Would Lamar sit out? Will Ravens draft a QB? Jackson trade request insightsLamar Jackson has asked Baltimore to trade him, but Ravens coach John Harbaugh hopes the QB will be back.3hJamison HensleyBallard, Colts will consider trading for QB JacksonJackson to Indy? Washington? Barnwell ranks the QB's trade fitsSNYDER'S TUMULTUOUS 24-YEAR RUNHow Washington’s NFL franchise sank on and off the field under owner Dan SnyderSnyder purchased one of the NFL's marquee franchises in 1999. 
Twenty-four years later, and with the team up for sale, he leaves a legacy of on-field futility and off-field scandal.13hJohn KeimESPNIOWA STAR STEPS UP AGAINJ-Will: Caitlin Clark is the biggest brand in college sports right now8h0:47'The better the opponent, the better she plays': Clark draws comparisons to TaurasiCaitlin Clark's performance on Sunday had longtime observers going back decades to find comparisons.16hKevin PeltonWOMEN'S ELITE EIGHT SCOREBOARDMONDAY'S GAMESCheck your bracket!NBA DRAFTHow top prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four teams, and ESPN's Jonathan Givony recaps the players who saw their NBA draft stock change.11hJonathan GivonyAndy Lyons/Getty ImagesTALKING BASKETBALLWhy AD needs to be more assertive with LeBron on the court9h1:33Why Perk won't blame Kyrie for Mavs' woes8h1:48WHERE EVERY TEAM STANDSNew NFL Power Rankings: Post-free-agency 1-32 poll, plus underrated offseason movesThe free agent frenzy has come and gone. Which teams have improved their 2023 outlook, and which teams have taken a hit?12hNFL Nation reportersIllustration by ESPNTHE BUCK STOPS WITH BELICHICKBruschi: Fair to criticize Bill Belichick for Patriots' struggles10h1:27 Top HeadlinesQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersFavorites FantasyManage FavoritesFantasy HomeCustomize ESPNSign UpLog InMarch Madness LiveESPNMarch Madness LiveWatch every men's NCAA tournament game live! ICYMI1:42Austin Peay's coach, pitcher and catcher all ejected after retaliation pitchAustin Peay's pitcher, catcher and coach were all ejected after a pitch was thrown at Liberty's Nathan Keeter, who earlier in the game hit a home run and celebrated while running down the third-base line. Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the 2023 Men's Tournament Challenge, which you can follow throughout the Big Dance. Women's Tournament ChallengeIllustration by ESPNWomen's Tournament ChallengeCheck your bracket(s) in the 2023 Women's Tournament Challenge, which you can follow throughout the Big Dance. Best of ESPN+AP Photo/Lynne SladkyFantasy Baseball ESPN+ Cheat Sheet: Sleepers, busts, rookies and closersYou've read their names all preseason long, it'd be a shame to forget them on draft day. The ESPN+ Cheat Sheet is one way to make sure that doesn't happen.Steph Chambers/Getty ImagesPassan's 2023 MLB season preview: Bold predictions and moreOpening Day is just over a week away -- and Jeff Passan has everything you need to know covered from every possible angle.Photo by Bob Kupbens/Icon Sportswire2023 NFL free agency: Best team fits for unsigned playersWhere could Ezekiel Elliott land? Let's match remaining free agents to teams and find fits for two trade candidates.Illustration by ESPN2023 NFL mock draft: Mel Kiper's first-round pick predictionsMel Kiper Jr. makes his predictions for Round 1 of the NFL draft, including projecting a trade in the top five. Trending NowAnne-Marie Sorvin-USA TODAY SBoston Bruins record tracker: Wins, points, milestonesThe B's are on pace for NHL records in wins and points, along with some individual superlatives as well. 
Follow along here with our updated tracker.Mandatory Credit: William Purnell-USA TODAY Sports2023 NFL full draft order: AFC, NFC team picks for all roundsStarting with the Carolina Panthers at No. 1 overall, here's the entire 2023 NFL draft broken down round by round. How to Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results, bracket, how to watchThe matchups in Tampa promise to be thrillers, featuring plenty of star power, high-octane offense and stellar defense.(AP Photo/Koji Sasahara, File)How to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN, ESPN+Here's everything you need to know about how to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN and ESPN+.Hailie Lynch/XFLHow to watch the XFL: 2023 schedule, teams, players, news, moreEvery XFL game will be streamed on ESPN+. Find out when and where else you can watch the eight teams compete. Sign up to play the #1 Fantasy Baseball GameReactivate A LeagueCreate A LeagueJoin a Public LeaguePractice With a Mock DraftSports BettingAP Photo/Mike KropfMarch Madness betting 2023: Bracket odds, lines, tips, moreThe 2023 NCAA tournament brackets have finally been released, and we have everything you need to know to make a bet on all of the March Madness games. Sign up to play the #1 Fantasy game!Create A LeagueJoin Public LeagueReactivateMock Draft Now\n\nESPN+\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\nESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nTerms of UsePrivacy PolicyYour US State Privacy RightsChildren's Online Privacy PolicyInterest-Based AdsAbout Nielsen MeasurementDo Not Sell or Share My Personal InformationContact UsDisney Ad Sales SiteWork for ESPNCopyright: © ESPN Enterprises, Inc. All rights reserved.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", lookup_str='', metadata={'source': 'https://www.espn.com/'}, lookup_index=0), Document(page_content='GoogleSearch Images Maps Play YouTube News Gmail Drive More »Web History | Settings | Sign in\xa0Advanced searchAdvertisingBusiness SolutionsAbout Google© 2023 - Privacy - Terms ', lookup_str='', metadata={'source': 'https://google.com'}, lookup_index=0)]Load multiple urls concurrently​You can speed up the scraping process by scraping and parsing multiple urls concurrently.There are reasonable limits to concurrent requests, defaulting to 2 per second. If you aren't concerned about being a good citizen, or you control the server you are scraping and don't care about load, you can change the requests_per_second parameter to increase the max concurrent requests. Note, while this will speed up the scraping process, but may cause the server to block you. 
Be careful!pip install nest_asyncio# fixes a bug with asyncio and jupyterimport nest_asyncionest_asyncio.apply() Requirement already satisfied: nest_asyncio in /Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages (1.5.6)loader = WebBaseLoader(["https://www.espn.com/", "https://google.com"])loader.requests_per_second = 1docs = loader.aload()docs [Document(page_content="\n\n\n\n\n\n\n\n\nESPN - Serving Sports Fans. Anytime. Anywhere.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Skip to main content\n \n\n Skip to navigation\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n<\n\n>\n\n\n\n\n\n\n\n\n\nMenuESPN\n\n\nSearch\n\n\n\nscores\n\n\n\nNFLNBANCAAMNCAAWNHLSoccer…MLBNCAAFGolfTennisSports BettingBoxingCFLNCAACricketF1HorseLLWSMMANASCARNBA G LeagueOlympic SportsRacingRN BBRN FBRugbyWNBAWorld Baseball ClassicWWEX GamesXFLMore ESPNFantasyListenWatchESPN+\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\nSUBSCRIBE NOW\n\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\n\n\n\n\nFavorites\n\n\n\n\n\n\n Manage Favorites\n \n\n\n\nCustomize ESPNSign UpLog InESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nAre you ready for Opening Day? Here's your guide to MLB's offseason chaosWait, Jacob deGrom is on the Rangers now? Xander Bogaerts and Trea Turner signed where? And what about Carlos Correa? Yeah, you're going to need to read up before Opening Day.12hESPNIllustration by ESPNEverything you missed in the MLB offseason3h2:33World Series odds, win totals, props for every teamPlay fantasy baseball for free!TOP HEADLINESQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersLAMAR WANTS OUT OF BALTIMOREMarcus Spears identifies the two teams that need Lamar Jackson the most7h2:00Would Lamar sit out? Will Ravens draft a QB? Jackson trade request insightsLamar Jackson has asked Baltimore to trade him, but Ravens coach John Harbaugh hopes the QB will be back.3hJamison HensleyBallard, Colts will consider trading for QB JacksonJackson to Indy? Washington? Barnwell ranks the QB's trade fitsSNYDER'S TUMULTUOUS 24-YEAR RUNHow Washington’s NFL franchise sank on and off the field under owner Dan SnyderSnyder purchased one of the NFL's marquee franchises in 1999. 
Twenty-four years later, and with the team up for sale, he leaves a legacy of on-field futility and off-field scandal.13hJohn KeimESPNIOWA STAR STEPS UP AGAINJ-Will: Caitlin Clark is the biggest brand in college sports right now8h0:47'The better the opponent, the better she plays': Clark draws comparisons to TaurasiCaitlin Clark's performance on Sunday had longtime observers going back decades to find comparisons.16hKevin PeltonWOMEN'S ELITE EIGHT SCOREBOARDMONDAY'S GAMESCheck your bracket!NBA DRAFTHow top prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four teams, and ESPN's Jonathan Givony recaps the players who saw their NBA draft stock change.11hJonathan GivonyAndy Lyons/Getty ImagesTALKING BASKETBALLWhy AD needs to be more assertive with LeBron on the court9h1:33Why Perk won't blame Kyrie for Mavs' woes8h1:48WHERE EVERY TEAM STANDSNew NFL Power Rankings: Post-free-agency 1-32 poll, plus underrated offseason movesThe free agent frenzy has come and gone. Which teams have improved their 2023 outlook, and which teams have taken a hit?12hNFL Nation reportersIllustration by ESPNTHE BUCK STOPS WITH BELICHICKBruschi: Fair to criticize Bill Belichick for Patriots' struggles10h1:27 Top HeadlinesQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersFavorites FantasyManage FavoritesFantasy HomeCustomize ESPNSign UpLog InMarch Madness LiveESPNMarch Madness LiveWatch every men's NCAA tournament game live! ICYMI1:42Austin Peay's coach, pitcher and catcher all ejected after retaliation pitchAustin Peay's pitcher, catcher and coach were all ejected after a pitch was thrown at Liberty's Nathan Keeter, who earlier in the game hit a home run and celebrated while running down the third-base line. Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the 2023 Men's Tournament Challenge, which you can follow throughout the Big Dance. Women's Tournament ChallengeIllustration by ESPNWomen's Tournament ChallengeCheck your bracket(s) in the 2023 Women's Tournament Challenge, which you can follow throughout the Big Dance. Best of ESPN+AP Photo/Lynne SladkyFantasy Baseball ESPN+ Cheat Sheet: Sleepers, busts, rookies and closersYou've read their names all preseason long, it'd be a shame to forget them on draft day. The ESPN+ Cheat Sheet is one way to make sure that doesn't happen.Steph Chambers/Getty ImagesPassan's 2023 MLB season preview: Bold predictions and moreOpening Day is just over a week away -- and Jeff Passan has everything you need to know covered from every possible angle.Photo by Bob Kupbens/Icon Sportswire2023 NFL free agency: Best team fits for unsigned playersWhere could Ezekiel Elliott land? Let's match remaining free agents to teams and find fits for two trade candidates.Illustration by ESPN2023 NFL mock draft: Mel Kiper's first-round pick predictionsMel Kiper Jr. makes his predictions for Round 1 of the NFL draft, including projecting a trade in the top five. Trending NowAnne-Marie Sorvin-USA TODAY SBoston Bruins record tracker: Wins, points, milestonesThe B's are on pace for NHL records in wins and points, along with some individual superlatives as well. 
Follow along here with our updated tracker.Mandatory Credit: William Purnell-USA TODAY Sports2023 NFL full draft order: AFC, NFC team picks for all roundsStarting with the Carolina Panthers at No. 1 overall, here's the entire 2023 NFL draft broken down round by round. How to Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results, bracket, how to watchThe matchups in Tampa promise to be thrillers, featuring plenty of star power, high-octane offense and stellar defense.(AP Photo/Koji Sasahara, File)How to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN, ESPN+Here's everything you need to know about how to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN and ESPN+.Hailie Lynch/XFLHow to watch the XFL: 2023 schedule, teams, players, news, moreEvery XFL game will be streamed on ESPN+. Find out when and where else you can watch the eight teams compete. Sign up to play the #1 Fantasy Baseball GameReactivate A LeagueCreate A LeagueJoin a Public LeaguePractice With a Mock DraftSports BettingAP Photo/Mike KropfMarch Madness betting 2023: Bracket odds, lines, tips, moreThe 2023 NCAA tournament brackets have finally been released, and we have everything you need to know to make a bet on all of the March Madness games. Sign up to play the #1 Fantasy game!Create A LeagueJoin Public LeagueReactivateMock Draft Now\n\nESPN+\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\nESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nTerms of UsePrivacy PolicyYour US State Privacy RightsChildren's Online Privacy PolicyInterest-Based AdsAbout Nielsen MeasurementDo Not Sell or Share My Personal InformationContact UsDisney Ad Sales SiteWork for ESPNCopyright: © ESPN Enterprises, Inc. 
All rights reserved.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", lookup_str='', metadata={'source': 'https://www.espn.com/'}, lookup_index=0), Document(page_content='GoogleSearch Images Maps Play YouTube News Gmail Drive More »Web History | Settings | Sign in\xa0Advanced searchAdvertisingBusiness SolutionsAbout Google© 2023 - Privacy - Terms ', lookup_str='', metadata={'source': 'https://google.com'}, lookup_index=0)]Loading a xml file, or using a different BeautifulSoup parser​You can also look at SitemapLoader for an example of how to load a sitemap file, which is an example of using this feature.loader = WebBaseLoader( "https://www.govinfo.gov/content/pkg/CFR-2018-title10-vol3/xml/CFR-2018-title10-vol3-sec431-86.xml")loader.default_parser = "xml"docs = loader.load()docs [Document(page_content='\n\n10\nEnergy\n3\n2018-01-01\n2018-01-01\nfalse\nUniform test method for the measurement of energy efficiency of commercial packaged boilers.\n§ 431.86\nSection § 431.86\n\nEnergy\nDEPARTMENT OF ENERGY\nENERGY CONSERVATION\nENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT\nCommercial Packaged Boilers\nTest Procedures\n\n\n\n\n§\u2009431.86\nUniform test method for the measurement of energy efficiency of commercial packaged boilers.\n(a) Scope. This section provides test procedures, pursuant to the Energy Policy and Conservation Act (EPCA), as amended, which must be followed for measuring the combustion efficiency and/or thermal efficiency of a gas- or oil-fired commercial packaged boiler.\n(b) Testing and Calculations. Determine the thermal efficiency or combustion efficiency of commercial packaged boilers by conducting the appropriate test procedure(s) indicated in Table 1 of this section.\n\nTable 1—Test Requirements for Commercial Packaged Boiler Equipment Classes\n\nEquipment category\nSubcategory\nCertified rated inputBtu/h\n\nStandards efficiency metric(§\u2009431.87)\n\nTest procedure(corresponding to\nstandards efficiency\nmetric required\nby §\u2009431.87)\n\n\n\nHot Water\nGas-fired\n≥300,000 and ≤2,500,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\nHot Water\nGas-fired\n>2,500,000\nCombustion Efficiency\nAppendix A, Section 3.\n\n\nHot Water\nOil-fired\n≥300,000 and ≤2,500,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\nHot Water\nOil-fired\n>2,500,000\nCombustion Efficiency\nAppendix A, Section 3.\n\n\nSteam\nGas-fired (all*)\n≥300,000 and ≤
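To make the concurrency option described earlier on this page concrete, here is a minimal sketch (the URLs are placeholders for pages you control or have permission to scrape more aggressively) that raises requests_per_second above the default of 2 before calling aload():

import nest_asyncio
from langchain.document_loaders import WebBaseLoader

nest_asyncio.apply()  # needed when running inside Jupyter

loader = WebBaseLoader([
    "https://example.com/page-1",  # placeholder URLs
    "https://example.com/page-2",
    "https://example.com/page-3",
])
loader.requests_per_second = 4  # default is 2; higher values risk getting blocked
docs = loader.aload()           # fetches the pages concurrently, then parses them
print(len(docs))

Keep in mind that this trades politeness for speed; the earlier caveat about servers blocking you still applies.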
565
https://python.langchain.com/docs/integrations/document_loaders/whatsapp_chat
ComponentsDocument loadersWhatsApp ChatWhatsApp ChatWhatsApp (also called WhatsApp Messenger) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content.This notebook covers how to load data from WhatsApp chat exports into a format that can be ingested into LangChain.from langchain.document_loaders import WhatsAppChatLoaderloader = WhatsAppChatLoader("example_data/whatsapp_chat.txt")loader.load()PreviousWebBaseLoaderNextWikipedia
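As a quick illustration of what comes back, here is a short sketch; the path is the same example file name used above and is assumed to point at a chat exported from WhatsApp as a .txt file:

from langchain.document_loaders import WhatsAppChatLoader

loader = WhatsAppChatLoader("example_data/whatsapp_chat.txt")  # exported chat file
docs = loader.load()
print(len(docs))                    # typically one Document per exported chat file
print(docs[0].page_content[:200])   # the parsed messages as plain text
print(docs[0].metadata)             # typically just the source file path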
566
https://python.langchain.com/docs/integrations/document_loaders/wikipedia
ComponentsDocument loadersWikipediaOn this pageWikipediaWikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.This notebook shows how to load wiki pages from wikipedia.org into the Document format that we use downstream.Installation First, you need to install the wikipedia python package.#!pip install wikipediaExamples WikipediaLoader has these arguments:query: free text which is used to find documents in Wikipediaoptional lang: default="en". Use it to search in a specific language part of Wikipediaoptional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.optional load_all_available_meta: default=False. By default, only the most important fields are downloaded: Published (the date the document was published/last updated), title, and Summary. If True, the other fields are also downloaded.from langchain.document_loaders import WikipediaLoaderdocs = WikipediaLoader(query="HUNTER X HUNTER", load_max_docs=2).load()len(docs)docs[0].metadata # meta-information of the Documentdocs[0].page_content[:400] # the content of the DocumentPreviousWhatsApp ChatNextXMLInstallationExamples
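Building on the argument list above, here is a small sketch that combines the optional arguments (the query string is only an example):

from langchain.document_loaders import WikipediaLoader

docs = WikipediaLoader(
    query="LangChain",             # example query
    lang="en",                     # search the English Wikipedia
    load_max_docs=5,               # keep downloads small for experiments
    load_all_available_meta=True,  # include the extra metadata fields
).load()
print(len(docs))
print(docs[0].metadata.keys())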
567
https://python.langchain.com/docs/integrations/document_loaders/xml
ComponentsDocument loadersXMLXMLThe UnstructuredXMLLoader is used to load XML files. The loader works with .xml files. The page content will be the text extracted from the XML tags.from langchain.document_loaders import UnstructuredXMLLoaderloader = UnstructuredXMLLoader( "example_data/factbook.xml",)docs = loader.load()docs[0] Document(page_content='United States\n\nWashington, DC\n\nJoe Biden\n\nBaseball\n\nCanada\n\nOttawa\n\nJustin Trudeau\n\nHockey\n\nFrance\n\nParis\n\nEmmanuel Macron\n\nSoccer\n\nTrinidad & Tobado\n\nPort of Spain\n\nKeith Rowley\n\nTrack & Field', metadata={'source': 'example_data/factbook.xml'})PreviousWikipediaNextXorbits Pandas DataFrame
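If you have several .xml files, you can reuse the loader in a loop and collect all the documents; the second file name below is hypothetical:

from langchain.document_loaders import UnstructuredXMLLoader

all_docs = []
for path in ["example_data/factbook.xml", "example_data/other.xml"]:  # second path is hypothetical
    all_docs.extend(UnstructuredXMLLoader(path).load())
print(len(all_docs))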
568
https://python.langchain.com/docs/integrations/document_loaders/xorbits
ComponentsDocument loadersXorbits Pandas DataFrameXorbits Pandas DataFrameThis notebook goes over how to load data from a xorbits.pandas DataFrame.#!pip install xorbitsimport xorbits.pandas as pddf = pd.read_csv("example_data/mlb_teams_2012.csv")df.head() 0%| | 0.00/100 [00:00<?, ?it/s]
        Team  "Payroll (millions)"  "Wins"
0  Nationals                 81.34      98
1       Reds                 82.20      97
2    Yankees                197.96      95
3     Giants                117.62      94
4     Braves                 83.31      94
from langchain.document_loaders import XorbitsLoaderloader = XorbitsLoader(df, page_content_column="Team")loader.load() 0%| | 0.00/100 [00:00<?, ?it/s] [Document(page_content='Nationals', metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}), Document(page_content='Reds', metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}), Document(page_content='Yankees', metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}), Document(page_content='Giants', metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}), Document(page_content='Braves', metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}), Document(page_content='Athletics', metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94}), Document(page_content='Rangers', metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93}), Document(page_content='Orioles', metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93}), Document(page_content='Rays', metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}), Document(page_content='Angels', metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89}), Document(page_content='Tigers', metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88}), Document(page_content='Cardinals', metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88}), Document(page_content='Dodgers', metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86}), Document(page_content='White Sox', metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85}), Document(page_content='Brewers', metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83}), Document(page_content='Phillies', metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81}), Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}), Document(page_content='Pirates', metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}), Document(page_content='Padres', metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}), Document(page_content='Mariners', metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}), Document(page_content='Mets', metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}), Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}), Document(page_content='Royals', metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72}), Document(page_content='Marlins', metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69}), Document(page_content='Red Sox', metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69}), Document(page_content='Indians', metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68}),
Document(page_content='Twins', metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66}), Document(page_content='Rockies', metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64}), Document(page_content='Cubs', metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61}), Document(page_content='Astros', metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55})]# Use lazy load for larger table, which won't read the full table into memoryfor i in loader.lazy_load(): print(i) 0%| | 0.00/100 [00:00<?, ?it/s] page_content='Nationals' metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98} page_content='Reds' metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97} page_content='Yankees' metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95} page_content='Giants' metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94} page_content='Braves' metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94} page_content='Athletics' metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94} page_content='Rangers' metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93} page_content='Orioles' metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93} page_content='Rays' metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90} page_content='Angels' metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89} page_content='Tigers' metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88} page_content='Cardinals' metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88} page_content='Dodgers' metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86} page_content='White Sox' metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85} page_content='Brewers' metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83} page_content='Phillies' metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81} page_content='Diamondbacks' metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81} page_content='Pirates' metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79} page_content='Padres' metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76} page_content='Mariners' metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75} page_content='Mets' metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74} page_content='Blue Jays' metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73} page_content='Royals' metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72} page_content='Marlins' metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69} page_content='Red Sox' metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69} page_content='Indians' metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68} page_content='Twins' metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66} page_content='Rockies' metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64} page_content='Cubs' metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61} page_content='Astros' metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55}PreviousXMLNextYouTube audio
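The loader is not tied to CSV input; any xorbits.pandas DataFrame works. A minimal sketch that builds a small DataFrame in memory and streams it with lazy_load:

import xorbits.pandas as pd
from langchain.document_loaders import XorbitsLoader

df = pd.DataFrame({
    "Team": ["Nationals", "Reds"],
    "Payroll (millions)": [81.34, 82.20],
    "Wins": [98, 97],
})
loader = XorbitsLoader(df, page_content_column="Team")
for doc in loader.lazy_load():  # avoids materializing every row at once
    print(doc.page_content, doc.metadata)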
569
https://python.langchain.com/docs/integrations/document_loaders/youtube_audio
ComponentsDocument loadersYouTube audioOn this pageYouTube audioBuilding chat or QA applications on YouTube videos is a topic of high interest.Below we show how to easily go from a YouTube url to audio of the video to text to chat!We wil use the OpenAIWhisperParser, which will use the OpenAI Whisper API to transcribe audio to text, and the OpenAIWhisperParserLocal for local support and running on private clouds or on premise.Note: You will need to have an OPENAI_API_KEY supplied.from langchain.document_loaders.generic import GenericLoaderfrom langchain.document_loaders.parsers import OpenAIWhisperParser, OpenAIWhisperParserLocalfrom langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoaderWe will use yt_dlp to download audio for YouTube urls.We will use pydub to split downloaded audio files (such that we adhere to Whisper API's 25MB file size limit).pip install yt_dlp pip install pydub pip install librosaYouTube url to text​Use YoutubeAudioLoader to fetch / download the audio files.Then, ues OpenAIWhisperParser() to transcribe them to text.Let's take the first lecture of Andrej Karpathy's YouTube course as an example! # set a flag to switch between local and remote parsing# change this to True if you want to use local parsinglocal = False# Two Karpathy lecture videosurls = ["https://youtu.be/kCc8FmEb1nY", "https://youtu.be/VMj-3S1tku0"]# Directory to save audio filessave_dir = "~/Downloads/YouTube"# Transcribe the videos to textif local: loader = GenericLoader(YoutubeAudioLoader(urls, save_dir), OpenAIWhisperParserLocal())else: loader = GenericLoader(YoutubeAudioLoader(urls, save_dir), OpenAIWhisperParser())docs = loader.load() [youtube] Extracting URL: https://youtu.be/kCc8FmEb1nY [youtube] kCc8FmEb1nY: Downloading webpage [youtube] kCc8FmEb1nY: Downloading android player API JSON [info] kCc8FmEb1nY: Downloading 1 format(s): 140 [dashsegments] Total fragments: 11 [download] Destination: /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/Let's build GPT: from scratch, in code, spelled out..m4a [download] 100% of 107.73MiB in 00:00:18 at 5.92MiB/s [FixupM4a] Correcting container of "/Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/Let's build GPT: from scratch, in code, spelled out..m4a" [ExtractAudio] Not converting audio /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/Let's build GPT: from scratch, in code, spelled out..m4a; file is already in target format m4a [youtube] Extracting URL: https://youtu.be/VMj-3S1tku0 [youtube] VMj-3S1tku0: Downloading webpage [youtube] VMj-3S1tku0: Downloading android player API JSON [info] VMj-3S1tku0: Downloading 1 format(s): 140 [download] /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/The spelled-out intro to neural networks and backpropagation: building micrograd.m4a has already been downloaded [download] 100% of 134.98MiB [ExtractAudio] Not converting audio /Users/31treehaus/Desktop/AI/langchain-fork/docs/modules/indexes/document_loaders/examples/The spelled-out intro to neural networks and backpropagation: building micrograd.m4a; file is already in target format m4a# Returns a list of Documents, which can be easily viewed or parseddocs[0].page_content[0:500] "Hello, my name is Andrej and I've been training deep neural networks for a bit more than a decade. And in this lecture I'd like to show you what neural network training looks like under the hood. 
So in particular we are going to start with a blank Jupyter notebook and by the end of this lecture we will define and train a neural net and you'll get to see everything that goes on under the hood and exactly sort of how that works on an intuitive level. Now specifically what I would like to do is I w"Building a chat app from YouTube video​Given Documents, we can easily enable chat / question+answering.from langchain.chains import RetrievalQAfrom langchain.vectorstores import FAISSfrom langchain.chat_models import ChatOpenAIfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.text_splitter import RecursiveCharacterTextSplitter# Combine doccombined_docs = [doc.page_content for doc in docs]text = " ".join(combined_docs)# Split themtext_splitter = RecursiveCharacterTextSplitter(chunk_size=1500, chunk_overlap=150)splits = text_splitter.split_text(text)# Build an indexembeddings = OpenAIEmbeddings()vectordb = FAISS.from_texts(splits, embeddings)# Build a QA chainqa_chain = RetrievalQA.from_chain_type( llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0), chain_type="stuff", retriever=vectordb.as_retriever(),)# Ask a question!query = "Why do we need to zero out the gradient before backprop at each step?"qa_chain.run(query) "We need to zero out the gradient before backprop at each step because the backward pass accumulates gradients in the grad attribute of each parameter. If we don't reset the grad to zero before each backward pass, the gradients will accumulate and add up, leading to incorrect updates and slower convergence. By resetting the grad to zero before each backward pass, we ensure that the gradients are calculated correctly and that the optimization process works as intended."query = "What is the difference between an encoder and decoder?"qa_chain.run(query) 'In the context of transformers, an encoder is a component that reads in a sequence of input tokens and generates a sequence of hidden representations. On the other hand, a decoder is a component that takes in a sequence of hidden representations and generates a sequence of output tokens. The main difference between the two is that the encoder is used to encode the input sequence into a fixed-length representation, while the decoder is used to decode the fixed-length representation into an output sequence. In machine translation, for example, the encoder reads in the source language sentence and generates a fixed-length representation, which is then used by the decoder to generate the target language sentence.'query = "For any token, what are x, k, v, and q?"qa_chain.run(query) 'For any token, x is the input vector that contains the private information of that token, k and q are the key and query vectors respectively, which are produced by forwarding linear modules on x, and v is the vector that is calculated by propagating the same linear module on x again. The key vector represents what the token contains, and the query vector represents what the token is looking for. The vector v is the information that the token will communicate to other tokens if it finds them interesting, and it gets aggregated for the purposes of the self-attention mechanism.'PreviousXorbits Pandas DataFrameNextYouTube transcriptsYouTube url to textBuilding a chat app from YouTube video
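Transcription is the slow (and paid) step, so it can be worth persisting the index built above instead of rebuilding it each session. A minimal sketch, assuming the vectordb object from the code above; the folder name is arbitrary:

# Save the FAISS index built from the transcripts
vectordb.save_local("youtube_faiss_index")  # hypothetical folder name

# Later, reload it with the same embeddings class and query it again
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings

vectordb = FAISS.load_local("youtube_faiss_index", OpenAIEmbeddings())
print(vectordb.similarity_search("What is self-attention?", k=2))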
570
https://python.langchain.com/docs/integrations/document_loaders/youtube_transcript
ComponentsDocument loadersYouTube transcriptsOn this pageYouTube transcriptsYouTube is an online video sharing and social media platform created by Google.This notebook covers how to load documents from YouTube transcripts.from langchain.document_loaders import YoutubeLoader# !pip install youtube-transcript-apiloader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True)loader.load()Add video info # ! pip install pytubeloader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True)loader.load()Add language preferences language param: a list of language codes in descending priority; en by default.translation param: the translation preference used when YouTube doesn't have a transcript in your selected language; en by default.loader = YoutubeLoader.from_youtube_url( "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True, language=["en", "id"], translation="en",)loader.load()YouTube loader from Google Cloud Prerequisites Create a Google Cloud project or use an existing projectEnable the YouTube APIAuthorize credentials for desktop apppip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib youtube-transcript-api🧑 Instructions for ingesting your YouTube data By default, the GoogleApiClient expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the credentials_path keyword argument. Same thing with token.json. Note that token.json will be created automatically the first time you use the loader.GoogleApiYoutubeLoader can load transcripts from a channel name or from a list of video ids, as shown in the code below. Note that depending on your setup, the service_account_path may also need to be set. See here for more details.from langchain.document_loaders import GoogleApiClient, GoogleApiYoutubeLoader# Init the GoogleApiClientfrom pathlib import Pathgoogle_api_client = GoogleApiClient(credentials_path=Path("your_path_creds.json"))# Use a Channelyoutube_loader_channel = GoogleApiYoutubeLoader( google_api_client=google_api_client, channel_name="Reducible", captions_language="en",)# Use Youtube Idsyoutube_loader_ids = GoogleApiYoutubeLoader( google_api_client=google_api_client, video_ids=["TrdevFK_am4"], add_video_info=True)# returns a list of Documentsyoutube_loader_channel.load()PreviousYouTube audioNextDocument transformersAdd video infoAdd language preferencesYouTube loader from Google CloudPrerequisites🧑 Instructions for ingesting your YouTube data
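As a small follow-up to the examples above (same video URL), it can help to inspect what the loader returns before indexing it:

from langchain.document_loaders import YoutubeLoader

loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True
)
docs = loader.load()
print(len(docs))                   # usually a single Document holding the transcript
print(docs[0].page_content[:200])  # the transcript text
print(docs[0].metadata)            # video info (e.g. title) when add_video_info=True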
571
https://python.langchain.com/docs/integrations/document_transformers
ComponentsDocument transformersDocument transformers📄️ Beautiful SoupBeautiful Soup is a Python package for parsing📄️ Google Cloud Document AIDocument AI is a document understanding platform from Google Cloud to transform unstructured data from documents into structured data, making it easier to understand, analyze, and consume.📄️ Doctran: extract propertiesWe can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata.📄️ Doctran: interrogate documentsDocuments used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents.📄️ Doctran: language translationComparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically.📄️ HTML to texthtml2text is a Python package that converts a page of HTML into clean, easy-to-read plain ASCII text.📄️ NucliaNuclia automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing.📄️ OpenAI metadata taggerIt can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for a more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious.PreviousYouTube transcriptsNextBeautiful Soup
572
https://python.langchain.com/docs/integrations/document_transformers/beautiful_soup
ComponentsDocument transformersBeautiful SoupBeautiful SoupBeautiful Soup is a Python package for parsing HTML and XML documents (including those with malformed markup, i.e. non-closed tags, so named after 'tag soup'). It creates a parse tree for parsed pages that can be used to extract data from HTML, which is useful for web scraping.Beautiful Soup offers fine-grained control over HTML content, enabling specific tag extraction, removal, and content cleaning. It's suited for cases where you want to extract specific information and clean up the HTML content according to your needs.For example, we can scrape text content within <p>, <li>, <div>, and <a> tags from the HTML content:<p>: The paragraph tag. It defines a paragraph in HTML and is used to group together related sentences and/or phrases.<li>: The list item tag. It is used within ordered (<ol>) and unordered (<ul>) lists to define individual items within the list.<div>: The division tag. It is a block-level element used to group other inline or block-level elements.<a>: The anchor tag. It is used to define hyperlinks.from langchain.document_loaders import AsyncChromiumLoaderfrom langchain.document_transformers import BeautifulSoupTransformer# Load HTMLloader = AsyncChromiumLoader(["https://www.wsj.com"])html = loader.load()# Transformbs_transformer = BeautifulSoupTransformer()docs_transformed = bs_transformer.transform_documents(html, tags_to_extract=["p", "li", "div", "a"])docs_transformed[0].page_content[0:500] 'Conservative legal activists are challenging Amazon, Comcast and others using many of the same tools that helped kill affirmative-action programs in colleges.1,2099 min read U.S. stock indexes fell and government-bond prices climbed, after Moody’s lowered credit ratings for 10 smaller U.S. banks and said it was reviewing ratings for six larger ones. The Dow industrials dropped more than 150 points.3 min read Penn Entertainment’s Barstool Sportsbook app will be rebranded as ESPN Bet this fall as 'PreviousDocument transformersNextGoogle Cloud Document AI
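The same transformer call can be narrowed to a single tag. For example, a sketch that keeps only the anchor (<a>) text from the page loaded above:

from langchain.document_loaders import AsyncChromiumLoader
from langchain.document_transformers import BeautifulSoupTransformer

html = AsyncChromiumLoader(["https://www.wsj.com"]).load()
bs_transformer = BeautifulSoupTransformer()
# Keep only the text inside <a> tags (link text)
links_only = bs_transformer.transform_documents(html, tags_to_extract=["a"])
print(links_only[0].page_content[:300])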
573
https://python.langchain.com/docs/integrations/document_transformers/docai
ComponentsDocument transformersGoogle Cloud Document AIGoogle Cloud Document AIDocument AI is a document understanding platform from Google Cloud to transform unstructured data from documents into structured data, making it easier to understand, analyze, and consume.Learn more:Document AI overviewDocument AI videos and labsTry it!The module contains a PDF parser based on DocAI from Google Cloud.You need to install two libraries to use this parser:First, you need to set up a Google Cloud Storage (GCS) bucket and create your own Optical Character Recognition (OCR) processor as described here: https://cloud.google.com/document-ai/docs/create-processorThe GCS_OUTPUT_PATH should be a path to a folder on GCS (starting with gs://) and the PROCESSOR_NAME should look like projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID or projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID/processorVersions/PROCESSOR_VERSION_ID. You can get it either programmatically or copy it from the Prediction endpoint section of the Processor details tab in the Google Cloud Console.GCS_OUTPUT_PATH = "gs://BUCKET_NAME/FOLDER_PATH"PROCESSOR_NAME = "projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID"from langchain.document_loaders.blob_loaders import Blobfrom langchain.document_loaders.parsers import DocAIParserNow, create a DocAIParser.parser = DocAIParser( location="us", processor_name=PROCESSOR_NAME, gcs_output_path=GCS_OUTPUT_PATH)For this example, you can use an Alphabet earnings report that's uploaded to a public GCS bucket.2022Q1_alphabet_earnings_release.pdfPass the document to the lazy_parse() method to parse it page by page:blob = Blob(path="gs://cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs/2022Q1_alphabet_earnings_release.pdf")We'll get one document per page, 11 in total:docs = list(parser.lazy_parse(blob))print(len(docs)) 11You can run end-to-end parsing of blobs one by one. If you have many documents, it can be a better approach to batch them together and perhaps even decouple parsing from handling the results of parsing.operations = parser.docai_parse([blob])print([op.operation.name for op in operations]) ['projects/543079149601/locations/us/operations/16447136779727347991']You can check whether operations are finished:parser.is_running(operations) TrueAnd when they're finished, you can parse the results:parser.is_running(operations) Falseresults = parser.get_results(operations)print(results[0]) DocAIParsingResults(source_path='gs://vertex-pgt/examples/goog-exhibit-99-1-q1-2023-19.pdf', parsed_path='gs://vertex-pgt/test/run1/16447136779727347991/0')And now we can finally generate Documents from parsed results:docs = list(parser.parse_from_results(results))print(len(docs)) 11PreviousBeautiful SoupNextDoctran: extract properties
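Putting the batch pieces together, a minimal polling sketch that uses only the methods shown above (the sleep interval is arbitrary):

import time

operations = parser.docai_parse([blob])  # start Document AI batch processing
while parser.is_running(operations):     # poll until the operations finish
    time.sleep(10)
results = parser.get_results(operations)
docs = list(parser.parse_from_results(results))
print(len(docs))  # one Document per page, as with lazy_parse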
574
https://python.langchain.com/docs/integrations/document_transformers/doctran_extract_properties
ComponentsDocument transformersDoctran: extract propertiesOn this pageDoctran: extract propertiesWe can extract useful features of documents using the Doctran library, which uses OpenAI's function calling feature to extract specific metadata.Extracting metadata from documents is helpful for a variety of tasks, including:Classification: classifying documents into different categoriesData mining: Extract structured data that can be used for data analysisStyle transfer: Change the way text is written to more closely match expected user input, improving vector search resultspip install doctranimport jsonfrom langchain.schema import Documentfrom langchain.document_transformers import DoctranPropertyExtractorfrom dotenv import load_dotenvload_dotenv() TrueInput​This is the document we'll extract properties from.sample_text = """[Generated with ChatGPT]Confidential Document - For Internal Use OnlyDate: July 1, 2023Subject: Updates and Discussions on Various TopicsDear Team,I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.Security and Privacy MeasuresAs part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: john.doe@example.com) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at security@example.com.HR Updates and Employee BenefitsRecently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: michael.johnson@example.com).Marketing Initiatives and CampaignsOur marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.Research and Development ProjectsIn our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: david.rodriguez@example.com) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. 
Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.Thank you for your attention, and let's continue to work together to achieve our goals.Best regards,Jason FanCofounder & CEOPsychicjason@psychic.dev"""print(sample_text) [Generated with ChatGPT] Confidential Document - For Internal Use Only Date: July 1, 2023 Subject: Updates and Discussions on Various Topics Dear Team, I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential. Security and Privacy Measures As part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: john.doe@example.com) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at security@example.com. HR Updates and Employee Benefits Recently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: michael.johnson@example.com). Marketing Initiatives and Campaigns Our marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company. Research and Development Projects In our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: david.rodriguez@example.com) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th. Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. 
If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly. Thank you for your attention, and let's continue to work together to achieve our goals. Best regards, Jason Fan Cofounder & CEO Psychic jason@psychic.dev documents = [Document(page_content=sample_text)]properties = [ { "name": "category", "description": "What type of email this is.", "type": "string", "enum": ["update", "action_item", "customer_feedback", "announcement", "other"], "required": True, }, { "name": "mentions", "description": "A list of all people mentioned in this email.", "type": "array", "items": { "name": "full_name", "description": "The full name of the person mentioned.", "type": "string", }, "required": True, }, { "name": "eli5", "description": "Explain this email to me like I'm 5 years old.", "type": "string", "required": True, },]property_extractor = DoctranPropertyExtractor(properties=properties)Output​After extracting properties from a document, the result will be returned as a new document with properties provided in the metadataextracted_document = await property_extractor.atransform_documents( documents, properties=properties)print(json.dumps(extracted_document[0].metadata, indent=2)) { "extracted_properties": { "category": "update", "mentions": [ "John Doe", "Jane Smith", "Michael Johnson", "Sarah Thompson", "David Rodriguez", "Jason Fan" ], "eli5": "This is an email from the CEO, Jason Fan, giving updates about different areas in the company. He talks about new security measures and praises John Doe for his work. He also mentions new hires and praises Jane Smith for her work in customer service. The CEO reminds everyone about the upcoming benefits enrollment and says to contact Michael Johnson with any questions. He talks about the marketing team's work and praises Sarah Thompson for increasing their social media followers. There's also a product launch event on July 15th. Lastly, he talks about the research and development projects and praises David Rodriguez for his work. There's a brainstorming session on July 10th." } }PreviousGoogle Cloud Document AINextDoctran: interrogate documentsInputOutput
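Once extraction has run, the properties are ordinary Python data in the metadata; a short sketch reading them back from the result above:

props = extracted_document[0].metadata["extracted_properties"]
print(props["category"])   # e.g. "update"
print(props["mentions"])   # list of people mentioned in the email
print(props["eli5"])       # the simplified summary

# Example: keep only documents of a given category, e.g. for routing or filtering
updates = [
    d for d in extracted_document
    if d.metadata["extracted_properties"]["category"] == "update"
]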
575
https://python.langchain.com/docs/integrations/document_transformers/doctran_interrogate_document
ComponentsDocument transformersDoctran: interrogate documentsOn this pageDoctran: interrogate documentsDocuments used in a vector store knowledge base are typically stored in a narrative or conversational format. However, most user queries are in question format. If we convert documents into Q&A format before vectorizing them, we can increase the likelihood of retrieving relevant documents, and decrease the likelihood of retrieving irrelevant documents.We can accomplish this using the Doctran library, which uses OpenAI's function calling feature to "interrogate" documents.See this notebook for benchmarks on vector similarity scores for various queries based on raw documents versus interrogated documents.pip install doctranimport jsonfrom langchain.schema import Documentfrom langchain.document_transformers import DoctranQATransformerfrom dotenv import load_dotenvload_dotenv() TrueInput​This is the document we'll interrogatesample_text = """[Generated with ChatGPT]Confidential Document - For Internal Use OnlyDate: July 1, 2023Subject: Updates and Discussions on Various TopicsDear Team,I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.Security and Privacy MeasuresAs part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: john.doe@example.com) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at security@example.com.HR Updates and Employee BenefitsRecently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: michael.johnson@example.com).Marketing Initiatives and CampaignsOur marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.Research and Development ProjectsIn our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: david.rodriguez@example.com) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. 
Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.Thank you for your attention, and let's continue to work together to achieve our goals.Best regards,Jason FanCofounder & CEOPsychicjason@psychic.dev"""print(sample_text) [Generated with ChatGPT] Confidential Document - For Internal Use Only Date: July 1, 2023 Subject: Updates and Discussions on Various Topics Dear Team, I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential. Security and Privacy Measures As part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: john.doe@example.com) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at security@example.com. HR Updates and Employee Benefits Recently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: michael.johnson@example.com). Marketing Initiatives and Campaigns Our marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company. Research and Development Projects In our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: david.rodriguez@example.com) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th. Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. 
If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly. Thank you for your attention, and let's continue to work together to achieve our goals. Best regards, Jason Fan Cofounder & CEO Psychic jason@psychic.dev documents = [Document(page_content=sample_text)]qa_transformer = DoctranQATransformer()transformed_document = await qa_transformer.atransform_documents(documents)Output​After interrogating a document, the result will be returned as a new document with questions and answers provided in the metadata.transformed_document = await qa_transformer.atransform_documents(documents)print(json.dumps(transformed_document[0].metadata, indent=2)) { "questions_and_answers": [ { "question": "What is the purpose of this document?", "answer": "The purpose of this document is to provide important updates and discuss various topics that require the team's attention." }, { "question": "Who is responsible for enhancing the network security?", "answer": "John Doe from the IT department is responsible for enhancing the network security." }, { "question": "Where should potential security risks or incidents be reported?", "answer": "Potential security risks or incidents should be reported to the dedicated team at security@example.com." }, { "question": "Who has been recognized for outstanding performance in customer service?", "answer": "Jane Smith has been recognized for her outstanding performance in customer service." }, { "question": "When is the open enrollment period for the employee benefits program?", "answer": "The document does not specify the exact dates for the open enrollment period for the employee benefits program, but it mentions that it is fast approaching." }, { "question": "Who should be contacted for questions or assistance regarding the employee benefits program?", "answer": "For questions or assistance regarding the employee benefits program, the HR representative, Michael Johnson, should be contacted." }, { "question": "Who has been acknowledged for managing the company's social media platforms?", "answer": "Sarah Thompson has been acknowledged for managing the company's social media platforms." }, { "question": "When is the upcoming product launch event?", "answer": "The upcoming product launch event is on July 15th." }, { "question": "Who has been recognized for their contributions to the development of the company's technology?", "answer": "David Rodriguez has been recognized for his contributions to the development of the company's technology." }, { "question": "When is the monthly R&D brainstorming session?", "answer": "The monthly R&D brainstorming session is scheduled for July 10th." }, { "question": "Who should be contacted for questions or concerns regarding the topics discussed in the document?", "answer": "For questions or concerns regarding the topics discussed in the document, Jason Fan, the Cofounder & CEO, should be contacted." } ] }PreviousDoctran: extract propertiesNextDoctran: language translationInputOutput
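The motivation above is to embed Q&A pairs rather than the original narrative text. As a minimal sketch of that next step, the snippet below expands each interrogated document's questions_and_answers metadata into one Document per pair and indexes them; the FAISS vector store and OpenAIEmbeddings are illustrative choices (FAISS additionally requires pip install faiss-cpu), not something the Doctran transformer mandates.

```python
from langchain.schema import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Expand the interrogated documents into one Document per Q&A pair,
# so the question text itself is what gets embedded and searched.
qa_docs = []
for doc in transformed_document:
    for qa in doc.metadata.get("questions_and_answers", []):
        qa_docs.append(
            Document(
                page_content=f"Q: {qa['question']}\nA: {qa['answer']}",
                metadata={"answer": qa["answer"]},
            )
        )

vectorstore = FAISS.from_documents(qa_docs, OpenAIEmbeddings())
print(vectorstore.similarity_search("Who manages the social media platforms?", k=1))
```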
576
https://python.langchain.com/docs/integrations/document_transformers/doctran_translate_document
ComponentsDocument transformersDoctran: language translationOn this pageDoctran: language translationComparing documents through embeddings has the benefit of working across multiple languages. "Harrison says hello" and "Harrison dice hola" will occupy similar positions in the vector space because they have the same meaning semantically.However, it can still be useful to use an LLM to translate documents into other languages before vectorizing them. This is especially helpful when users are expected to query the knowledge base in different languages, or when state-of-the-art embedding models are not available for a given language.We can accomplish this using the Doctran library, which uses OpenAI's function calling feature to translate documents between languages.pip install doctranfrom langchain.schema import Documentfrom langchain.document_transformers import DoctranTextTranslatorfrom dotenv import load_dotenvload_dotenv() TrueInput​This is the document we'll translatesample_text = """[Generated with ChatGPT]Confidential Document - For Internal Use OnlyDate: July 1, 2023Subject: Updates and Discussions on Various TopicsDear Team,I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.Security and Privacy MeasuresAs part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: john.doe@example.com) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at security@example.com.HR Updates and Employee BenefitsRecently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: michael.johnson@example.com).Marketing Initiatives and CampaignsOur marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.Research and Development ProjectsIn our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: david.rodriguez@example.com) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. 
Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.Thank you for your attention, and let's continue to work together to achieve our goals.Best regards,Jason FanCofounder & CEOPsychicjason@psychic.dev"""documents = [Document(page_content=sample_text)]qa_translator = DoctranTextTranslator(language="spanish")Output​After translating a document, the result will be returned as a new document with the page_content translated into the target languagetranslated_document = await qa_translator.atransform_documents(documents)print(translated_document[0].page_content) [Generado con ChatGPT] Documento confidencial - Solo para uso interno Fecha: 1 de julio de 2023 Asunto: Actualizaciones y discusiones sobre varios temas Estimado equipo, Espero que este correo electrónico les encuentre bien. En este documento, me gustaría proporcionarles algunas actualizaciones importantes y discutir varios temas que requieren nuestra atención. Por favor, traten la información contenida aquí como altamente confidencial. Medidas de seguridad y privacidad Como parte de nuestro compromiso continuo para garantizar la seguridad y privacidad de los datos de nuestros clientes, hemos implementado medidas robustas en todos nuestros sistemas. Nos gustaría elogiar a John Doe (correo electrónico: john.doe@example.com) del departamento de TI por su diligente trabajo en mejorar nuestra seguridad de red. En adelante, recordamos amablemente a todos que se adhieran estrictamente a nuestras políticas y directrices de protección de datos. Además, si se encuentran con cualquier riesgo de seguridad o incidente potencial, por favor repórtelo inmediatamente a nuestro equipo dedicado en security@example.com. Actualizaciones de RRHH y beneficios para empleados Recientemente, dimos la bienvenida a varios nuevos miembros del equipo que han hecho contribuciones significativas a sus respectivos departamentos. Me gustaría reconocer a Jane Smith (SSN: 049-45-5928) por su sobresaliente rendimiento en el servicio al cliente. Jane ha recibido constantemente comentarios positivos de nuestros clientes. Además, recuerden que el período de inscripción abierta para nuestro programa de beneficios para empleados se acerca rápidamente. Si tienen alguna pregunta o necesitan asistencia, por favor contacten a nuestro representante de RRHH, Michael Johnson (teléfono: 418-492-3850, correo electrónico: michael.johnson@example.com). Iniciativas y campañas de marketing Nuestro equipo de marketing ha estado trabajando activamente en el desarrollo de nuevas estrategias para aumentar la conciencia de marca y fomentar la participación del cliente. Nos gustaría agradecer a Sarah Thompson (teléfono: 415-555-1234) por sus excepcionales esfuerzos en la gestión de nuestras plataformas de redes sociales. Sarah ha aumentado con éxito nuestra base de seguidores en un 20% solo en el último mes. Además, por favor marquen sus calendarios para el próximo evento de lanzamiento de producto el 15 de julio. Animamos a todos los miembros del equipo a asistir y apoyar este emocionante hito para nuestra empresa. 
Proyectos de investigación y desarrollo En nuestra búsqueda de la innovación, nuestro departamento de investigación y desarrollo ha estado trabajando incansablemente en varios proyectos. Me gustaría reconocer el excepcional trabajo de David Rodríguez (correo electrónico: david.rodriguez@example.com) en su papel de líder de proyecto. Las contribuciones de David al desarrollo de nuestra tecnología de vanguardia han sido fundamentales. Además, nos gustaría recordar a todos que compartan sus ideas y sugerencias para posibles nuevos proyectos durante nuestra sesión de lluvia de ideas de I+D mensual, programada para el 10 de julio. Por favor, traten la información de este documento con la máxima confidencialidad y asegúrense de que no se comparte con personas no autorizadas. Si tienen alguna pregunta o inquietud sobre los temas discutidos, no duden en ponerse en contacto conmigo directamente. Gracias por su atención, y sigamos trabajando juntos para alcanzar nuestros objetivos. Saludos cordiales, Jason Fan Cofundador y CEO Psychic jason@psychic.devPreviousDoctran: interrogate documentsNextHTML to textInputOutput
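If the scenario described above applies in reverse, i.e. documents arrive in several languages but the knowledge base should be embedded in just one, the same transformer can normalize a mixed corpus before indexing. This is only a sketch under that assumption; the English target language and the sample documents are illustrative.

```python
from langchain.schema import Document
from langchain.document_transformers import DoctranTextTranslator

# Normalize a mixed-language corpus into a single language before embedding.
translator = DoctranTextTranslator(language="english")

mixed_docs = [
    Document(page_content="Harrison dice hola"),
    Document(page_content="Harrison says hello"),
]

normalized_docs = await translator.atransform_documents(mixed_docs)
for doc in normalized_docs:
    print(doc.page_content)
```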
577
https://python.langchain.com/docs/integrations/document_transformers/html2text
ComponentsDocument transformersHTML to textHTML to texthtml2text is a Python package that converts a page of HTML into clean, easy-to-read plain ASCII text. The ASCII also happens to be a valid Markdown (a text-to-HTML format).pip install html2textfrom langchain.document_loaders import AsyncHtmlLoaderurls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"]loader = AsyncHtmlLoader(urls)docs = loader.load() Fetching pages: 100%|############| 2/2 [00:00<00:00, 10.75it/s]from langchain.document_transformers import Html2TextTransformerurls = ["https://www.espn.com", "https://lilianweng.github.io/posts/2023-06-23-agent/"]html2text = Html2TextTransformer()docs_transformed = html2text.transform_documents(docs)docs_transformed[0].page_content[1000:2000] " * ESPNFC\n\n * X Games\n\n * SEC Network\n\n## ESPN Apps\n\n * ESPN\n\n * ESPN Fantasy\n\n## Follow ESPN\n\n * Facebook\n\n * Twitter\n\n * Instagram\n\n * Snapchat\n\n * YouTube\n\n * The ESPN Daily Podcast\n\n2023 FIFA Women's World Cup\n\n## Follow live: Canada takes on Nigeria in group stage of Women's World Cup\n\n2m\n\nEPA/Morgan Hancock\n\n## TOP HEADLINES\n\n * Snyder fined $60M over findings in investigation\n * NFL owners approve $6.05B sale of Commanders\n * Jags assistant comes out as gay in NFL milestone\n * O's alone atop East after topping slumping Rays\n * ACC's Phillips: Never condoned hazing at NU\n\n * Vikings WR Addison cited for driving 140 mph\n * 'Taking his time': Patient QB Rodgers wows Jets\n * Reyna got U.S. assurances after Berhalter rehire\n * NFL Future Power Rankings\n\n## USWNT AT THE WORLD CUP\n\n### USA VS. VIETNAM: 9 P.M. ET FRIDAY\n\n## How do you defend against Alex Morgan? Former opponents sound off\n\nThe U.S. forward is unstoppable at this level, scoring 121 goals and adding 49"docs_transformed[1].page_content[1000:2000] "t's brain,\ncomplemented by several key components:\n\n * **Planning**\n * Subgoal and decomposition: The agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks.\n * Reflection and refinement: The agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results.\n * **Memory**\n * Short-term memory: I would consider all the in-context learning (See Prompt Engineering) as utilizing short-term memory of the model to learn.\n * Long-term memory: This provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval.\n * **Tool use**\n * The agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution c"PreviousDoctran: language translationNextNuclia
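The cleaned plain-text documents are usually chunked before being embedded. As a small illustrative sketch (the splitter and chunk sizes are assumptions, not part of html2text), the transformed documents from above can be passed straight into a text splitter:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Chunk the plain-text documents produced by Html2TextTransformer before indexing.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs_transformed)
print(f"{len(docs_transformed)} documents -> {len(chunks)} chunks")
```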
578
https://python.langchain.com/docs/integrations/document_transformers/nuclia_transformer
ComponentsDocument transformersNucliaNucliaNuclia automatically indexes your unstructured data from any internal and external source, providing optimized search results and generative answers. It can handle video and audio transcription, image content extraction, and document parsing.Nuclia Understanding API document transformer splits text into paragraphs and sentences, identifies entities, provides a summary of the text and generates embeddings for all the sentences.To use the Nuclia Understanding API, you need to have a Nuclia account. You can create one for free at https://nuclia.cloud, and then create a NUA key.from langchain.document_transformers.nuclia_text_transform import NucliaTextTransformer#!pip install --upgrade protobuf#!pip install nucliadb-protosimport osos.environ["NUCLIA_ZONE"] = "<YOUR_ZONE>" # e.g. europe-1os.environ["NUCLIA_NUA_KEY"] = "<YOUR_API_KEY>"To use the Nuclia document transformer, you need to instantiate a NucliaUnderstandingAPI tool with enable_ml set to True:from langchain.tools.nuclia import NucliaUnderstandingAPInua = NucliaUnderstandingAPI(enable_ml=True)The Nuclia document transformer must be called in async mode, so you need to use the atransform_documents method:import asynciofrom langchain.document_transformers.nuclia_text_transform import NucliaTextTransformerfrom langchain.schema.document import Documentasync def process(): documents = [ Document(page_content="<TEXT 1>", metadata={}), Document(page_content="<TEXT 2>", metadata={}), Document(page_content="<TEXT 3>", metadata={}), ] nuclia_transformer = NucliaTextTransformer(nua) transformed_documents = await nuclia_transformer.atransform_documents(documents) print(transformed_documents)asyncio.run(process())PreviousHTML to textNextOpenAI metadata tagger
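One practical caveat: asyncio.run() raises an error when called from an environment that already has a running event loop, such as a Jupyter notebook. In that case the coroutine can be awaited directly; the sketch below reuses the nua tool and NucliaTextTransformer from above and otherwise only assumes a notebook-style calling environment.

```python
# Inside a notebook (which already runs an event loop), await the transformer
# directly instead of wrapping it in asyncio.run().
documents = [Document(page_content="<TEXT 1>", metadata={})]
nuclia_transformer = NucliaTextTransformer(nua)
transformed_documents = await nuclia_transformer.atransform_documents(documents)
print(transformed_documents)
```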
579
https://python.langchain.com/docs/integrations/document_transformers/openai_metadata_tagger
ComponentsDocument transformersOpenAI metadata taggerOn this pageOpenAI metadata taggerIt can often be useful to tag ingested documents with structured metadata, such as the title, tone, or length of a document, to allow for a more targeted similarity search later. However, for large numbers of documents, performing this labelling process manually can be tedious.The OpenAIMetadataTagger document transformer automates this process by extracting metadata from each provided document according to a provided schema. It uses a configurable OpenAI Functions-powered chain under the hood, so if you pass a custom LLM instance, it must be an OpenAI model with functions support. Note: This document transformer works best with complete documents, so it's best to run it first with whole documents before doing any other splitting or processing!For example, let's say you wanted to index a set of movie reviews. You could initialize the document transformer with a valid JSON Schema object as follows:from langchain.schema import Documentfrom langchain.chat_models import ChatOpenAIfrom langchain.document_transformers.openai_functions import create_metadata_taggerschema = { "properties": { "movie_title": {"type": "string"}, "critic": {"type": "string"}, "tone": {"type": "string", "enum": ["positive", "negative"]}, "rating": { "type": "integer", "description": "The number of stars the critic rated the movie", }, }, "required": ["movie_title", "critic", "tone"],}# Must be an OpenAI model that supports functionsllm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")document_transformer = create_metadata_tagger(metadata_schema=schema, llm=llm)You can then simply pass the document transformer a list of documents, and it will extract metadata from the contents:original_documents = [ Document( page_content="Review of The Bee Movie\nBy Roger Ebert\n\nThis is the greatest movie ever made. 4 out of 5 stars." ), Document( page_content="Review of The Godfather\nBy Anonymous\n\nThis movie was super boring. 1 out of 5 stars.", metadata={"reliable": False}, ),]enhanced_documents = document_transformer.transform_documents(original_documents)import jsonprint( *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n") Review of The Bee Movie By Roger Ebert This is the greatest movie ever made. 4 out of 5 stars. {"movie_title": "The Bee Movie", "critic": "Roger Ebert", "tone": "positive", "rating": 4} --------------- Review of The Godfather By Anonymous This movie was super boring. 1 out of 5 stars. {"movie_title": "The Godfather", "critic": "Anonymous", "tone": "negative", "rating": 1, "reliable": false}The new documents can then be further processed by a text splitter before being loaded into a vector store. Extracted fields will not overwrite existing metadata.You can also initialize the document transformer with a Pydantic schema:from typing import Literalfrom pydantic import BaseModel, Fieldclass Properties(BaseModel): movie_title: str critic: str tone: Literal["positive", "negative"] rating: int = Field(description="Rating out of 5 stars")document_transformer = create_metadata_tagger(Properties, llm)enhanced_documents = document_transformer.transform_documents(original_documents)print( *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n") Review of The Bee Movie By Roger Ebert This is the greatest movie ever made. 4 out of 5 stars. 
{"movie_title": "The Bee Movie", "critic": "Roger Ebert", "tone": "positive", "rating": 4} --------------- Review of The Godfather By Anonymous This movie was super boring. 1 out of 5 stars. {"movie_title": "The Godfather", "critic": "Anonymous", "tone": "negative", "rating": 1, "reliable": false}Customization​You can pass the underlying tagging chain the standard LLMChain arguments in the document transformer constructor. For example, if you wanted to ask the LLM to focus specific details in the input documents, or extract metadata in a certain style, you could pass in a custom prompt:from langchain.prompts import ChatPromptTemplateprompt = ChatPromptTemplate.from_template( """Extract relevant information from the following text.Anonymous critics are actually Roger Ebert.{input}""")document_transformer = create_metadata_tagger(schema, llm, prompt=prompt)enhanced_documents = document_transformer.transform_documents(original_documents)print( *[d.page_content + "\n\n" + json.dumps(d.metadata) for d in enhanced_documents], sep="\n\n---------------\n\n") Review of The Bee Movie By Roger Ebert This is the greatest movie ever made. 4 out of 5 stars. {"movie_title": "The Bee Movie", "critic": "Roger Ebert", "tone": "positive", "rating": 4} --------------- Review of The Godfather By Anonymous This movie was super boring. 1 out of 5 stars. {"movie_title": "The Godfather", "critic": "Roger Ebert", "tone": "negative", "rating": 1, "reliable": false}PreviousNucliaNextText embedding modelsCustomization
580
https://python.langchain.com/docs/integrations/text_embedding
ComponentsText embedding modelsText embedding models📄️ Aleph AlphaThere are two possible ways to use Aleph Alpha's semantic embeddings. If you have texts with a dissimilar structure (e.g. a Document and a Query) you would want to use asymmetric embeddings. Conversely, for texts with comparable structures, symmetric embeddings are the suggested approach.📄️ AwaDBAwaDB is an AI Native database for the search and storage of embedding vectors used by LLM Applications.📄️ AzureOpenAILet's load the OpenAI Embedding class with environment variables set to indicate to use Azure endpoints.📄️ Baidu QianfanBaidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides including the model of Wenxin Yiyan (ERNIE-Bot) and the third-party open source models, but also provides various AI development tools and the whole set of development environment, which facilitates customers to use and develop large model applications easily.📄️ BedrockAmazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.📄️ BGE on Hugging FaceBGE models on the HuggingFace are the best open-source embedding models.📄️ ClarifaiClarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.📄️ CohereLet's load the Cohere Embedding class.📄️ DashScopeLet's load the DashScope Embedding class.📄️ DeepInfraDeepInfra is a serverless inference as a service that provides access to a variety of LLMs and embeddings models. This notebook goes over how to use LangChain with DeepInfra for text embeddings.📄️ EDEN AIEden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website//edenai.co/)📄️ ElasticsearchWalkthrough of how to generate embeddings using a hosted embedding model in Elasticsearch📄️ Embaasembaas is a fully managed NLP API service that offers features like embedding generation, document text extraction, document to embeddings and more. You can choose a variety of pre-trained models.📄️ ERNIE Embedding-V1ERNIE Embedding-V1 is a text representation model based on Baidu Wenxin's large-scale model technology,📄️ Fake EmbeddingsLangChain also provides a fake embedding class. You can use this to test your pipelines.📄️ Google Vertex AI PaLMVertex AI PaLM API is a service on Google Cloud exposing the embedding models.📄️ GPT4AllGPT4All is a free-to-use, locally running, privacy-aware chatbot. There is no GPU or internet required. It features popular models and its own models such as GPT4All Falcon, Wizard, etc.📄️ GradientGradient allows to create Embeddings as well fine tune and get completions on LLMs with a simple web API.📄️ Hugging FaceLet's load the Hugging Face Embedding class.📄️ InstructEmbeddingsLet's load the HuggingFace instruct Embeddings class.📄️ JinaLet's load the Jina Embedding class.📄️ Llama-cppThis notebook goes over how to use Llama-cpp embeddings within LangChain📄️ LLMRailsLet's load the LLMRails Embeddings class.📄️ LocalAILet's load the LocalAI Embedding class. 
In order to use the LocalAI Embedding class, you need to have the LocalAI service hosted somewhere and configure the embedding models. See the documentation at https//localai.io/features/embeddings/index.html.📄️ MiniMaxMiniMax offers an embeddings service.📄️ ModelScopeModelScope is big repository of the models and datasets.📄️ MosaicMLMosaicML offers a managed inference service. You can either use a variety of open source models, or deploy your own.📄️ NLP CloudNLP Cloud is an artificial intelligence platform that allows you to use the most advanced AI engines, and even train your own engines with your own data.📄️ OllamaLet's load the Ollama Embeddings class.📄️ OpenAILet's load the OpenAI Embedding class.📄️ SageMakerLet's load the SageMaker Endpoints Embeddings class. The class can be used if you host, e.g. your own Hugging Face model on SageMaker.📄️ Self HostedLet's load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes.📄️ Sentence TransformersSentenceTransformers embeddings are called using the HuggingFaceEmbeddings integration. We have also added an alias for SentenceTransformerEmbeddings for users who are more familiar with directly using that package.📄️ SpaCyspaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython.📄️ TensorflowHubLet's load the TensorflowHub Embedding class.📄️ Xorbits inference (Xinference)This notebook goes over how to use Xinference embeddings within LangChainPreviousOpenAI metadata taggerNextAleph Alpha
581
https://python.langchain.com/docs/integrations/text_embedding/aleph_alpha
ComponentsText embedding modelsAleph AlphaOn this pageAleph AlphaThere are two possible ways to use Aleph Alpha's semantic embeddings. If you have texts with a dissimilar structure (e.g. a Document and a Query) you would want to use asymmetric embeddings. Conversely, for texts with comparable structures, symmetric embeddings are the suggested approach.Asymmetric​from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbeddingdocument = "This is a content of the document"query = "What is the content of the document?"embeddings = AlephAlphaAsymmetricSemanticEmbedding(normalize=True, compress_to_size=128)doc_result = embeddings.embed_documents([document])query_result = embeddings.embed_query(query)Symmetric​from langchain.embeddings import AlephAlphaSymmetricSemanticEmbeddingtext = "This is a test text"embeddings = AlephAlphaSymmetricSemanticEmbedding(normalize=True, compress_to_size=128)doc_result = embeddings.embed_documents([text])query_result = embeddings.embed_query(text)PreviousText embedding modelsNextAwaDBAsymmetricSymmetric
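To see how the asymmetric document/query pairing is typically used, the two embeddings can be compared directly. The snippet below is a generic cosine-similarity sketch over the doc_result and query_result from the asymmetric example; the numeric score will depend on the model and is not taken from Aleph Alpha's documentation.

```python
import numpy as np

# Compare the asymmetric document and query embeddings with cosine similarity.
doc_vec = np.array(doc_result[0])
query_vec = np.array(query_result)
similarity = np.dot(doc_vec, query_vec) / (
    np.linalg.norm(doc_vec) * np.linalg.norm(query_vec)
)
print(f"Cosine similarity between document and query: {similarity:.4f}")
```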
582
https://python.langchain.com/docs/integrations/text_embedding/awadb
ComponentsText embedding modelsAwaDBOn this pageAwaDBAwaDB is an AI Native database for the search and storage of embedding vectors used by LLM Applications.This notebook explains how to use AwaEmbeddings in LangChain.# pip install awadbimport the library​from langchain.embeddings import AwaEmbeddingsEmbedding = AwaEmbeddings()Set embedding modelUsers can use Embedding.set_model() to specify the embedding model. The input of this function is a string which represents the model's name. The list of currently supported models can be obtained here. The default model is all-mpnet-base-v2, which can be used without calling set_model().text = "our embedding test"Embedding.set_model("all-mpnet-base-v2")res_query = Embedding.embed_query("The test information")res_document = Embedding.embed_documents(["test1", "another test"])PreviousAleph AlphaNextAzureOpenAIimport the library
583
https://python.langchain.com/docs/integrations/text_embedding/azureopenai
ComponentsText embedding modelsAzureOpenAIAzureOpenAILet's load the OpenAI Embedding class with environment variables set to use Azure endpoints.# set the environment variables needed for openai package to know to reach out to azureimport osos.environ["OPENAI_API_TYPE"] = "azure"os.environ["OPENAI_API_BASE"] = "https://<your-endpoint>.openai.azure.com/"os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"os.environ["OPENAI_API_VERSION"] = "2023-05-15"from langchain.embeddings import OpenAIEmbeddingsembeddings = OpenAIEmbeddings(deployment="your-embeddings-deployment-name")text = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text])PreviousAwaDBNextBaidu Qianfan
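Some Azure OpenAI API versions have limited how many inputs can be embedded in a single request. If your deployment rejects batched calls from embed_documents, the chunk_size parameter of OpenAIEmbeddings controls the batch size; the sketch below (with the same hypothetical deployment name as above) simply sends one text per request.

```python
from langchain.embeddings import OpenAIEmbeddings

# chunk_size=1 sends a single text per request: slower, but compatible with
# Azure deployments that do not accept batched embedding inputs.
embeddings = OpenAIEmbeddings(
    deployment="your-embeddings-deployment-name",
    chunk_size=1,
)
doc_result = embeddings.embed_documents(["first test document", "second test document"])
```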
584
https://python.langchain.com/docs/integrations/text_embedding/baidu_qianfan_endpoint
ComponentsText embedding modelsBaidu QianfanOn this pageBaidu QianfanBaidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides including the model of Wenxin Yiyan (ERNIE-Bot) and the third-party open source models, but also provides various AI development tools and the whole set of development environment, which facilitates customers to use and develop large model applications easily.Basically, those model are split into the following type:EmbeddingChatCompletionIn this notebook, we will introduce how to use langchain with Qianfan mainly in Embedding corresponding to the package langchain/embeddings in langchain:API Initialization​To use the LLM services based on Baidu Qianfan, you have to initialize these parameters:You could either choose to init the AK,SK in enviroment variables or init params:export QIANFAN_AK=XXXexport QIANFAN_SK=XXX"""For basic init and call"""from langchain.embeddings import QianfanEmbeddingsEndpoint import osos.environ["QIANFAN_AK"] = "your_ak"os.environ["QIANFAN_SK"] = "your_sk"embed = QianfanEmbeddingsEndpoint( # qianfan_ak='xxx', # qianfan_sk='xxx')res = embed.embed_documents(["hi", "world"])async def aioEmbed(): res = await embed.aembed_query("qianfan") print(res[:8])await aioEmbed()import asyncioasync def aioEmbedDocs(): res = await embed.aembed_documents(["hi", "world"]) for r in res: print("", r[:8])await aioEmbedDocs() [INFO] [09-15 20:01:35] logging.py:55 [t:140292313159488]: trying to refresh access_token [INFO] [09-15 20:01:35] logging.py:55 [t:140292313159488]: sucessfully refresh access_token [INFO] [09-15 20:01:35] logging.py:55 [t:140292313159488]: requesting llm api endpoint: /embeddings/embedding-v1 [INFO] [09-15 20:01:35] logging.py:55 [t:140292313159488]: async requesting llm api endpoint: /embeddings/embedding-v1 [INFO] [09-15 20:01:35] logging.py:55 [t:140292313159488]: async requesting llm api endpoint: /embeddings/embedding-v1 [-0.03313107788562775, 0.052325375378131866, 0.04951248690485954, 0.0077608139254152775, -0.05907672271132469, -0.010798933915793896, 0.03741293027997017, 0.013969100080430508] [0.0427522286772728, -0.030367236584424973, -0.14847028255462646, 0.055074431002140045, -0.04177454113960266, -0.059512972831726074, -0.043774791061878204, 0.0028191760648041964] [0.03803155943751335, -0.013231384567916393, 0.0032379645854234695, 0.015074018388986588, -0.006529552862048149, -0.13813287019729614, 0.03297128155827522, 0.044519297778606415]Use different models in Qianfan​In the case you want to deploy your own model based on Ernie Bot or third-party open sources model, you could follow these steps:(Optional, if the model are included in the default models, skip it)Deploy your model in Qianfan Console, get your own customized deploy endpoint.Set up the field called endpoint in the initlization:embed = QianfanEmbeddingsEndpoint( model="bge_large_zh", endpoint="bge_large_zh" )res = embed.embed_documents(["hi", "world"])for r in res : print(r[:8]) [INFO] [09-15 20:01:40] logging.py:55 [t:140292313159488]: requesting llm api endpoint: /embeddings/bge_large_zh [-0.0001582596160005778, -0.025089964270591736, -0.03997539356350899, 0.013156415894627571, 0.000135212714667432, 0.012428865768015385, 0.016216561198234558, -0.04126659780740738] [0.0019113451708108187, -0.008625439368188381, -0.0531032420694828, -0.0018436014652252197, -0.01818147301673889, 0.010310115292668343, -0.008867680095136166, 
-0.021067561581730843]PreviousAzureOpenAINextBedrockAPI InitializationUse different models in Qianfan
585
https://python.langchain.com/docs/integrations/text_embedding/bedrock
ComponentsText embedding modelsBedrockBedrockAmazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.%pip install boto3from langchain.embeddings import BedrockEmbeddingsembeddings = BedrockEmbeddings( credentials_profile_name="bedrock-admin", region_name="us-east-1")embeddings.embed_query("This is a content of the document")embeddings.embed_documents(["This is a content of the document", "This is another document"])# async embed queryawait embeddings.aembed_query("This is a content of the document")# async embed documentsawait embeddings.aembed_documents(["This is a content of the document", "This is another document"])PreviousBaidu QianfanNextBGE on Hugging Face
586
https://python.langchain.com/docs/integrations/text_embedding/bge_huggingface
ComponentsText embedding modelsBGE on Hugging FaceBGE on Hugging FaceBGE models on Hugging Face are among the best open-source embedding models. BGE models are created by the Beijing Academy of Artificial Intelligence (BAAI). BAAI is a private non-profit organization engaged in AI research and development.This notebook shows how to use BGE Embeddings through Hugging Face.#!pip install sentence_transformersfrom langchain.embeddings import HuggingFaceBgeEmbeddingsmodel_name = "BAAI/bge-small-en"model_kwargs = {'device': 'cpu'}encode_kwargs = {'normalize_embeddings': False}hf = HuggingFaceBgeEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs)embedding = hf.embed_query("hi this is harrison")len(embedding) 384PreviousBedrockNextClarifai
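The example above leaves normalize_embeddings set to False. If you plan to compare vectors with a plain dot product, enabling normalization makes that dot product equal to cosine similarity; the following is a small sketch under that assumption, reusing the same model settings.

```python
from langchain.embeddings import HuggingFaceBgeEmbeddings

# With normalized embeddings, the dot product of two vectors is their cosine similarity.
hf_normalized = HuggingFaceBgeEmbeddings(
    model_name="BAAI/bge-small-en",
    model_kwargs={"device": "cpu"},
    encode_kwargs={"normalize_embeddings": True},
)
query_vec = hf_normalized.embed_query("hi this is harrison")
doc_vec = hf_normalized.embed_documents(["hello, my name is harrison"])[0]
print(sum(q * d for q, d in zip(query_vec, doc_vec)))
```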
587
https://python.langchain.com/docs/integrations/text_embedding/clarifai
ComponentsText embedding modelsClarifaiClarifaiClarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.This example goes over how to use LangChain to interact with Clarifai models. Text embedding models in particular can be found here.To use Clarifai, you must have an account and a Personal Access Token (PAT) key. Check here to get or create a PAT.Dependencies# Install required dependenciespip install clarifaiImportsHere we will be setting the personal access token. You can find your PAT under settings/security in your Clarifai account.# Please login and get your API key from https://clarifai.com/settings/securityfrom getpass import getpassCLARIFAI_PAT = getpass() ········# Import the required modulesfrom langchain.embeddings import ClarifaiEmbeddingsfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainInputCreate a prompt template to be used with the LLM Chain:template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])SetupSet the user id and app id to the application in which the model resides. You can find a list of public models on https://clarifai.com/explore/modelsYou will have to also initialize the model id and if needed, the model version id. Some models have many versions, you can choose the one appropriate for your task.USER_ID = "salesforce"APP_ID = "blip"MODEL_ID = "multimodal-embedder-blip-2"# You can provide a specific model version as the model_version_id arg.# MODEL_VERSION_ID = "MODEL_VERSION_ID"# Initialize a Clarifai embedding modelembeddings = ClarifaiEmbeddings( pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)text = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text])PreviousBGE on Hugging FaceNextCohere
588
https://python.langchain.com/docs/integrations/text_embedding/cohere
ComponentsText embedding modelsCohereCohereLet's load the Cohere Embedding class.from langchain.embeddings import CohereEmbeddingscohere_api_key = "YOUR_COHERE_API_KEY"embeddings = CohereEmbeddings(cohere_api_key=cohere_api_key)text = "This is a test document."query_result = embeddings.embed_query(text)doc_result = embeddings.embed_documents([text])PreviousClarifaiNextDashScope
589
https://python.langchain.com/docs/integrations/text_embedding/dashscope
ComponentsText embedding modelsDashScopeDashScopeLet's load the DashScope Embedding class.from langchain.embeddings import DashScopeEmbeddingsembeddings = DashScopeEmbeddings( model="text-embedding-v1", dashscope_api_key="your-dashscope-api-key")text = "This is a test document."query_result = embeddings.embed_query(text)print(query_result)doc_results = embeddings.embed_documents(["foo"])print(doc_results)PreviousCohereNextDeepInfra
590
https://python.langchain.com/docs/integrations/text_embedding/deepinfra
ComponentsText embedding modelsDeepInfraDeepInfraDeepInfra is a serverless inference as a service that provides access to a variety of LLMs and embeddings models. This notebook goes over how to use LangChain with DeepInfra for text embeddings.# sign up for an account: https://deepinfra.com/login?utm_source=langchainfrom getpass import getpassDEEPINFRA_API_TOKEN = getpass() ········import osos.environ["DEEPINFRA_API_TOKEN"] = DEEPINFRA_API_TOKENfrom langchain.embeddings import DeepInfraEmbeddingsembeddings = DeepInfraEmbeddings( model_id="sentence-transformers/clip-ViT-B-32", query_instruction="", embed_instruction="",)docs = ["Dog is not a cat", "Beta is the second letter of Greek alphabet"]document_result = embeddings.embed_documents(docs)query = "What is the first letter of Greek alphabet"query_result = embeddings.embed_query(query)import numpy as npquery_numpy = np.array(query_result)for doc_res, doc in zip(document_result, docs): document_numpy = np.array(doc_res) similarity = np.dot(query_numpy, document_numpy) / ( np.linalg.norm(query_numpy) * np.linalg.norm(document_numpy) ) print(f'Cosine similarity between "{doc}" and query: {similarity}') Cosine similarity between "Dog is not a cat" and query: 0.7489097144129355 Cosine similarity between "Beta is the second letter of Greek alphabet" and query: 0.9519380640702013PreviousDashScopeNextEDEN AI
591
https://python.langchain.com/docs/integrations/text_embedding/edenai
ComponentsText embedding modelsEDEN AIOn this pageEDEN AIEden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)This example goes over how to use LangChain to interact with Eden AI embedding modelsAccessing the EDENAI's API requires an API key, which you can get by creating an account https://app.edenai.run/user/register and heading here https://app.edenai.run/admin/account/settingsOnce we have a key we'll want to set it as an environment variable by running:export EDENAI_API_KEY="..."If you'd prefer not to set an environment variable you can pass the key in directly via the edenai_api_key named parameter when initiating the EdenAI embedding class:from langchain.embeddings.edenai import EdenAiEmbeddingsembeddings = EdenAiEmbeddings(edenai_api_key="...",provider="...")Calling a model​The EdenAI API brings together various providers.To access a specific model, you can simply use the "provider" when calling.embeddings = EdenAiEmbeddings(provider="openai")docs = ["It's raining right now", "cats are cute"]document_result = embeddings.embed_documents(docs)query = "my umbrella is broken"query_result = embeddings.embed_query(query)import numpy as npquery_numpy = np.array(query_result)for doc_res, doc in zip(document_result, docs): document_numpy = np.array(doc_res) similarity = np.dot(query_numpy, document_numpy) / ( np.linalg.norm(query_numpy) * np.linalg.norm(document_numpy) ) print(f'Cosine similarity between "{doc}" and query: {similarity}') Cosine similarity between "It's raining right now" and query: 0.849261496107252 Cosine similarity between "cats are cute" and query: 0.7525900655705218PreviousDeepInfraNextElasticsearchCalling a model
592
https://python.langchain.com/docs/integrations/text_embedding/elasticsearch
ComponentsText embedding modelsElasticsearchOn this pageElasticsearchWalkthrough of how to generate embeddings using a hosted embedding model in Elasticsearch.The easiest way to instantiate the ElasticsearchEmbeddings class is either using the from_credentials constructor if you are using Elastic Cloud, or using the from_es_connection constructor with any Elasticsearch cluster.pip -q install elasticsearch langchainfrom elasticsearch import Elasticsearchfrom langchain.embeddings.elasticsearch import ElasticsearchEmbeddings# Define the model IDmodel_id = "your_model_id"Testing with from_credentials​This requires an Elastic Cloud cloud_id.# Instantiate ElasticsearchEmbeddings using credentialsembeddings = ElasticsearchEmbeddings.from_credentials( model_id, es_cloud_id="your_cloud_id", es_user="your_user", es_password="your_password",)# Create embeddings for multiple documentsdocuments = [ "This is an example document.", "Another example document to generate embeddings for.",]document_embeddings = embeddings.embed_documents(documents)# Print document embeddingsfor i, embedding in enumerate(document_embeddings): print(f"Embedding for document {i+1}: {embedding}")# Create an embedding for a single queryquery = "This is a single query."query_embedding = embeddings.embed_query(query)# Print query embeddingprint(f"Embedding for query: {query_embedding}")Testing with Existing Elasticsearch client connection​This can be used with any Elasticsearch deployment.# Create Elasticsearch connectiones_connection = Elasticsearch( hosts=["https://es_cluster_url:port"], basic_auth=("user", "password"))# Instantiate ElasticsearchEmbeddings using es_connectionembeddings = ElasticsearchEmbeddings.from_es_connection( model_id, es_connection,)# Create embeddings for multiple documentsdocuments = [ "This is an example document.", "Another example document to generate embeddings for.",]document_embeddings = embeddings.embed_documents(documents)# Print document embeddingsfor i, embedding in enumerate(document_embeddings): print(f"Embedding for document {i+1}: {embedding}")# Create an embedding for a single queryquery = "This is a single query."query_embedding = embeddings.embed_query(query)# Print query embeddingprint(f"Embedding for query: {query_embedding}")PreviousEDEN AINextEmbaasTesting with from_credentialsTesting with Existing Elasticsearch client connection
593
https://python.langchain.com/docs/integrations/text_embedding/embaas
ComponentsText embedding modelsEmbaasOn this pageEmbaasembaas is a fully managed NLP API service that offers features like embedding generation, document text extraction, document to embeddings and more. You can choose a variety of pre-trained models.In this tutorial, we will show you how to use the embaas Embeddings API to generate embeddings for a given text.Prerequisites​Create your free embaas account at https://embaas.io/register and generate an API key.# Set API keyembaas_api_key = "YOUR_API_KEY"# or set environment variableimport osos.environ["EMBAAS_API_KEY"] = "YOUR_API_KEY"from langchain.embeddings import EmbaasEmbeddingsembeddings = EmbaasEmbeddings()# Create embeddings for a single documentdoc_text = "This is a test document."doc_text_embedding = embeddings.embed_query(doc_text)# Print created embeddingprint(doc_text_embedding)# Create embeddings for multiple documentsdoc_texts = ["This is a test document.", "This is another test document."]doc_texts_embeddings = embeddings.embed_documents(doc_texts)# Print created embeddingsfor i, doc_text_embedding in enumerate(doc_texts_embeddings): print(f"Embedding for document {i + 1}: {doc_text_embedding}")# Using a different model and/or custom instructionembeddings = EmbaasEmbeddings( model="instructor-large", instruction="Represent the Wikipedia document for retrieval",)For more detailed information about the embaas Embeddings API, please refer to the official embaas API documentation.PreviousElasticsearchNextERNIE Embedding-V1Prerequisites
594
https://python.langchain.com/docs/integrations/text_embedding/ernie
ComponentsText embedding modelsERNIE Embedding-V1ERNIE Embedding-V1ERNIE Embedding-V1 is a text representation model based on Baidu Wenxin's large-scale model technology, which converts text into a vector form represented by numerical values, and is used in text retrieval, information recommendation, knowledge mining and other scenarios.from langchain.embeddings import ErnieEmbeddingsembeddings = ErnieEmbeddings()query_result = embeddings.embed_query("foo")doc_results = embeddings.embed_documents(["foo"])PreviousEmbaasNextFake Embeddings