id (string, lengths 14-15) | text (string, lengths 23-2.21k) | source (string, lengths 52-97) |
---|---|---|
05a517102e8d-0 | Mastodon | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/mastodon |
05a517102e8d-1 | Mastodon is a federated social media and social networking service.This loader fetches the text from the "toots" of a list of | https://python.langchain.com/docs/integrations/document_loaders/mastodon |
05a517102e8d-2 | and social networking service.This loader fetches the text from the "toots" of a list of Mastodon accounts, using the Mastodon.py Python package.Public accounts can be queried by default without any authentication. If non-public accounts or instances are queried, you have to register an application for your account which gets you an access token, and set that token and your account's API base URL.Then you need to pass in the Mastodon account names you want to extract, in the @account@instance format.from langchain.document_loaders import MastodonTootsLoader#!pip install Mastodon.pyloader = MastodonTootsLoader( mastodon_accounts=["@Gargron@mastodon.social"], number_toots=50, # Default value is 100)# Or set up access information to use a Mastodon app.# Note that the access token can either be passed into# constructor or you can set the environment variable "MASTODON_ACCESS_TOKEN".# loader = MastodonTootsLoader(# access_token="<ACCESS TOKEN OF MASTODON APP>",# api_base_url="<API BASE URL OF MASTODON APP INSTANCE>",# mastodon_accounts=["@Gargron@mastodon.social"],# number_toots=50, # Default value is 100# )documents = loader.load()for doc in documents[:3]: print(doc.page_content) print("=" * 80) <p>It is tough to leave this behind and go back to reality. And some people live here! I’m sure there are downsides but it sounds pretty good to me right now.</p> ================================================================================ <p>I wish we could stay here a little longer, but it is time to go home 🥲</p> | https://python.langchain.com/docs/integrations/document_loaders/mastodon |
05a517102e8d-3 | but it is time to go home 🥲</p> ================================================================================ <p>Last day of the honeymoon. And it’s <a href="https://mastodon.social/tags/caturday" class="mention hashtag" rel="tag">#<span>caturday</span></a>! This cute tabby came to the restaurant to beg for food and got some chicken.</p> ================================================================================The toot texts (the documents' page_content) are by default HTML as returned by the Mastodon API. | https://python.langchain.com/docs/integrations/document_loaders/mastodon |
7f00aaca2a14-0 | Geopandas | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/geopandas |
7f00aaca2a14-1 | Geopandas is an open source project to make working with geospatial data in python easier. GeoPandas extends the datatypes used by | https://python.langchain.com/docs/integrations/document_loaders/geopandas |
7f00aaca2a14-2 | make working with geospatial data in python easier. GeoPandas extends the datatypes used by pandas to allow spatial operations on geometric types. Geometric operations are performed by shapely. Geopandas further depends on fiona for file access and matplotlib for plotting.LLM applications (chat, QA) that utilize geospatial data are an interesting area for exploration.pip install sodapy pip install pandas pip install geopandasimport astimport pandas as pdimport geopandas as gpdfrom langchain.document_loaders import OpenCityDataLoaderCreate a GeoPandas dataframe from Open City Data as an example input.# Load Open City Datadataset = "tmnf-yvry" # San Francisco crime dataloader = OpenCityDataLoader(city_id="data.sfgov.org", dataset_id=dataset, limit=5000)docs = loader.load()# Convert list of dictionaries to DataFramedf = pd.DataFrame([ast.literal_eval(d.page_content) for d in docs])# Extract latitude and longitudedf["Latitude"] = df["location"].apply(lambda loc: loc["coordinates"][1])df["Longitude"] = df["location"].apply(lambda loc: loc["coordinates"][0])# Create geopandas DFgdf = gpd.GeoDataFrame( df, geometry=gpd.points_from_xy(df.Longitude, df.Latitude), crs="EPSG:4326")# Only keep valid longitudes and latitudes for San Franciscogdf = gdf[ (gdf["Longitude"] >= -123.173825) & (gdf["Longitude"] <= -122.281780) & (gdf["Latitude"] >= 37.623983) & (gdf["Latitude"] <= 37.929824)]Visualization of the sample of SF crime data. import matplotlib.pyplot as plt# Load San Francisco map | https://python.langchain.com/docs/integrations/document_loaders/geopandas |
7f00aaca2a14-3 | of the sample of SF crime data. import matplotlib.pyplot as plt# Load San Francisco map datasf = gpd.read_file("https://data.sfgov.org/resource/3psu-pn9h.geojson")# Plot the San Francisco map and the pointsfig, ax = plt.subplots(figsize=(10, 10))sf.plot(ax=ax, color="white", edgecolor="black")gdf.plot(ax=ax, color="red", markersize=5)plt.show() Load GeoPandas dataframe as a Document for downstream processing (embedding, chat, etc). The geometry will be the default page_content column, and all other columns are placed in metadata.But we can specify the page_content_column.from langchain.document_loaders import GeoDataFrameLoaderloader = GeoDataFrameLoader(data_frame=gdf, page_content_column="geometry")docs = loader.load()docs[0] Document(page_content='POINT (-122.420084075249 37.7083109744362)', metadata={'pdid': '4133422003074', 'incidntnum': '041334220', 'incident_code': '03074', 'category': 'ROBBERY', 'descript': 'ROBBERY, BODILY FORCE', 'dayofweek': 'Monday', 'date': '2004-11-22T00:00:00.000', 'time': '17:50', 'pddistrict': 'INGLESIDE', 'resolution': 'NONE', 'address': 'GENEVA AV / SANTOS ST', 'x': '-122.420084075249', 'y': '37.7083109744362', 'location': {'type': 'Point', 'coordinates': [-122.420084075249, 37.7083109744362]}, | https://python.langchain.com/docs/integrations/document_loaders/geopandas |
7f00aaca2a14-4 | 'coordinates': [-122.420084075249, 37.7083109744362]}, ':@computed_region_26cr_cadq': '9', ':@computed_region_rxqg_mtj9': '8', ':@computed_region_bh8s_q3mv': '309', ':@computed_region_6qbp_sg9q': nan, ':@computed_region_qgnn_b9vv': nan, ':@computed_region_ajp5_b2md': nan, ':@computed_region_yftq_j783': nan, ':@computed_region_p5aj_wyqh': nan, ':@computed_region_fyvs_ahh9': nan, ':@computed_region_6pnf_4xz7': nan, ':@computed_region_jwn9_ihcz': nan, ':@computed_region_9dfj_4gjx': nan, ':@computed_region_4isq_27mq': nan, ':@computed_region_pigm_ib2e': nan, ':@computed_region_9jxd_iqea': nan, ':@computed_region_6ezc_tdp2': nan, ':@computed_region_h4ep_8xdi': nan, ':@computed_region_n4xg_c4py': nan, ':@computed_region_fcz8_est8': nan, ':@computed_region_nqbw_i6c3': nan, ':@computed_region_2dwj_jsy4': nan, 'Latitude': 37.7083109744362, 'Longitude': -122.420084075249}) | https://python.langchain.com/docs/integrations/document_loaders/geopandas |
f7b89c5e44d3-0 | Source Code | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/source_code |
f7b89c5e44d3-1 | This notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded | https://python.langchain.com/docs/integrations/document_loaders/source_code |
f7b89c5e44d3-2 | files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document.This approach can potentially improve the accuracy of QA models over source code. Currently, the supported languages for code parsing are Python and JavaScript. The language used for parsing can be configured, along with the minimum number of lines required to activate the splitting based on syntax.pip install esprimaimport warningswarnings.filterwarnings("ignore")from pprint import pprintfrom langchain.text_splitter import Languagefrom langchain.document_loaders.generic import GenericLoaderfrom langchain.document_loaders.parsers import LanguageParserloader = GenericLoader.from_filesystem( "./example_data/source_code", glob="*", suffixes=[".py", ".js"], parser=LanguageParser(),)docs = loader.load()len(docs) 6for document in docs: pprint(document.metadata) {'content_type': 'functions_classes', 'language': <Language.PYTHON: 'python'>, 'source': 'example_data/source_code/example.py'} {'content_type': 'functions_classes', 'language': <Language.PYTHON: 'python'>, 'source': 'example_data/source_code/example.py'} {'content_type': 'simplified_code', 'language': <Language.PYTHON: 'python'>, 'source': 'example_data/source_code/example.py'} {'content_type': 'functions_classes', 'language': <Language.JS: 'js'>, 'source': 'example_data/source_code/example.js'} {'content_type': 'functions_classes', 'language': | https://python.langchain.com/docs/integrations/document_loaders/source_code |
f7b89c5e44d3-3 | {'content_type': 'functions_classes', 'language': <Language.JS: 'js'>, 'source': 'example_data/source_code/example.js'} {'content_type': 'simplified_code', 'language': <Language.JS: 'js'>, 'source': 'example_data/source_code/example.js'}print("\n\n--8<--\n\n".join([document.page_content for document in docs])) class MyClass: def __init__(self, name): self.name = name def greet(self): print(f"Hello, {self.name}!") --8<-- def main(): name = input("Enter your name: ") obj = MyClass(name) obj.greet() --8<-- # Code for: class MyClass: # Code for: def main(): if __name__ == "__main__": main() --8<-- class MyClass { constructor(name) { this.name = name; } greet() { console.log(`Hello, ${this.name}!`); } } | https://python.langchain.com/docs/integrations/document_loaders/source_code |
f7b89c5e44d3-4 | ${this.name}!`); } } --8<-- function main() { const name = prompt("Enter your name:"); const obj = new MyClass(name); obj.greet(); } --8<-- // Code for: class MyClass { // Code for: function main() { main();The parser can be disabled for small files. The parameter parser_threshold indicates the minimum number of lines that the source code file must have to be segmented using the parser.loader = GenericLoader.from_filesystem( "./example_data/source_code", glob="*", suffixes=[".py"], parser=LanguageParser(language=Language.PYTHON, parser_threshold=1000),)docs = loader.load()len(docs) 1print(docs[0].page_content) class MyClass: def __init__(self, name): self.name = name def greet(self): print(f"Hello, {self.name}!") def main(): name = input("Enter your name: ") obj = MyClass(name) obj.greet() if __name__ == "__main__": main() Splitting​Additional splitting could | https://python.langchain.com/docs/integrations/document_loaders/source_code |
f7b89c5e44d3-5 | main() Splitting​Additional splitting could be needed for those functions, classes, or scripts that are too big.loader = GenericLoader.from_filesystem( "./example_data/source_code", glob="*", suffixes=[".js"], parser=LanguageParser(language=Language.JS),)docs = loader.load()from langchain.text_splitter import ( RecursiveCharacterTextSplitter, Language,)js_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.JS, chunk_size=60, chunk_overlap=0)result = js_splitter.split_documents(docs)len(result) 7print("\n\n--8<--\n\n".join([document.page_content for document in result])) class MyClass { constructor(name) { this.name = name; --8<-- } --8<-- greet() { console.log(`Hello, ${this.name}!`); } } --8<-- function main() { const name = prompt("Enter your name:"); --8<-- const obj = new MyClass(name); obj.greet(); } --8<-- // Code for: class MyClass { // Code for: function main() { --8<-- | https://python.langchain.com/docs/integrations/document_loaders/source_code |
f7b89c5e44d3-6 | // Code for: function main() { --8<-- main(); | https://python.langchain.com/docs/integrations/document_loaders/source_code |
20fa83bf70b0-0 | Notion DB 2/2 | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/notiondb |
20fa83bf70b0-1 | Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, | https://python.langchain.com/docs/integrations/document_loaders/notiondb |
20fa83bf70b0-2 | is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.NotionDBLoader is a Python class for loading content from a Notion database. It retrieves pages from the database, reads their content, and returns a list of Document objects.Requirements A Notion DatabaseNotion Integration TokenSetup 1. Create a Notion Table Database Create a new table database in Notion. You can add any columns to the database, and they will be treated as metadata. For example, you can add the following columns:Title: set Title as the default property.Categories: A Multi-select property to store categories associated with the page.Keywords: A Multi-select property to store keywords associated with the page.Add your content to the body of each page in the database. The NotionDBLoader will extract the content and metadata from these pages.2. Create a Notion Integration To create a Notion Integration, follow these steps:Visit the Notion Developers page and log in with your Notion account.Click on the "+ New integration" button.Give your integration a name and choose the workspace where your database is located.Select the required capabilities; this integration only needs the Read content capability.Click the "Submit" button to create the integration. | https://python.langchain.com/docs/integrations/document_loaders/notiondb |
20fa83bf70b0-3 | Once the integration is created, you'll be provided with an Integration Token (API key). Copy this token and keep it safe, as you'll need it to use the NotionDBLoader.3. Connect the Integration to the Database​To connect your integration to the database, follow these steps:Open your database in Notion.Click on the three-dot menu icon in the top right corner of the database view.Click on the "+ New integration" button.Find your integration, you may need to start typing its name in the search box.Click on the "Connect" button to connect the integration to the database.4. Get the Database ID​To get the database ID, follow these steps:Open your database in Notion.Click on the three-dot menu icon in the top right corner of the database view.Select "Copy link" from the menu to copy the database URL to your clipboard.The database ID is the long string of alphanumeric characters found in the URL. It typically looks like this: https://www.notion.so/username/8935f9d140a04f95a872520c4f123456?v=.... In this example, the database ID is 8935f9d140a04f95a872520c4f123456.With the database properly set up and the integration token and database ID in hand, you can now use the NotionDBLoader code to load content and metadata from your Notion database.Usage​NotionDBLoader is part of the langchain package's document loaders. You can use it as follows:from getpass import getpassNOTION_TOKEN = getpass()DATABASE_ID = getpass() ········ ········from langchain.document_loaders import NotionDBLoaderloader = NotionDBLoader( | https://python.langchain.com/docs/integrations/document_loaders/notiondb |
20fa83bf70b0-4 | langchain.document_loaders import NotionDBLoaderloader = NotionDBLoader( integration_token=NOTION_TOKEN, database_id=DATABASE_ID, request_timeout_sec=30, # optional, defaults to 10)docs = loader.load()print(docs) | https://python.langchain.com/docs/integrations/document_loaders/notiondb |
57a1900c876c-0 | Unstructured File | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/unstructured_file |
57a1900c876c-1 | This notebook covers how to use Unstructured package to load files of many types. Unstructured currently supports loading of text files, | https://python.langchain.com/docs/integrations/document_loaders/unstructured_file |
57a1900c876c-2 | use Unstructured package to load files of many types. Unstructured currently supports loading of text files, powerpoints, html, pdfs, images, and more.# # Install packagepip install "unstructured[local-inference]"pip install layoutparser[layoutmodels,tesseract]# # Install other dependencies# # https://github.com/Unstructured-IO/unstructured/blob/main/docs/source/installing.rst# !brew install libmagic# !brew install poppler# !brew install tesseract# # If parsing xml / html documents:# !brew install libxml2# !brew install libxslt# import nltk# nltk.download('punkt')from langchain.document_loaders import UnstructuredFileLoaderloader = UnstructuredFileLoader("./example_data/state_of_the_union.txt")docs = loader.load()docs[0].page_content[:400] 'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.\n\nLast year COVID-19 kept us apart. This year we are finally together again.\n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.\n\nWith a duty to one another to the American people to the Constit'Retain Elements​Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".loader = UnstructuredFileLoader( "./example_data/state_of_the_union.txt", mode="elements")docs = loader.load()docs[:5] [Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, | https://python.langchain.com/docs/integrations/document_loaders/unstructured_file |
57a1900c876c-3 | Court. My fellow Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='Last year COVID-19 kept us apart. This year we are finally together again.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='With a duty to one another to the American people to the Constitution.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='And with an unwavering resolve that freedom will always triumph over tyranny.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)]Define a Partitioning Strategy The Unstructured document loader allows users to pass in a strategy parameter that lets unstructured know how to partition the document. Currently supported strategies are "hi_res" (the default) and "fast". Hi res partitioning strategies are more accurate, but take longer to process. Fast strategies partition the document more quickly, but trade-off accuracy. Not all document types have separate hi res and fast partitioning strategies. For those document types, the strategy kwarg is ignored. In some cases, the high res strategy will fall back to fast if there is a dependency missing (i.e. a model for document partitioning). You can see how to apply a strategy to an UnstructuredFileLoader below.from langchain.document_loaders import UnstructuredFileLoaderloader = UnstructuredFileLoader( "layout-parser-paper-fast.pdf", strategy="fast", mode="elements")docs = loader.load()docs[:5] [Document(page_content='1', lookup_str='', | https://python.langchain.com/docs/integrations/document_loaders/unstructured_file |
57a1900c876c-4 | loader.load()docs[:5] [Document(page_content='1', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='0', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'Title'}, lookup_index=0)]PDF Example Processing PDF documents works exactly the same way. Unstructured detects the file type and extracts the same types of elements. Modes of operation are: single (all the text from all elements is combined into one; the default), elements (maintain individual elements), and paged (texts from each page are only combined).wget https://raw.githubusercontent.com/Unstructured-IO/unstructured/main/example-docs/layout-parser-paper.pdf -P "../../"loader = UnstructuredFileLoader( "./example_data/layout-parser-paper.pdf", mode="elements")docs = loader.load()docs[:5] | https://python.langchain.com/docs/integrations/document_loaders/unstructured_file |
57a1900c876c-5 | mode="elements")docs = loader.load()docs[:5] [Document(page_content='LayoutParser : A Uni�ed Toolkit for Deep Learning Based Document Image Analysis', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Zejiang Shen 1 ( (ea)\n ), Ruochen Zhang 2 , Melissa Dell 3 , Benjamin Charles Germain Lee 4 , Jacob Carlson 3 , and Weining Li 5', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Allen Institute for AI shannons@allenai.org', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Brown University ruochen zhang@brown.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Harvard University { melissadell,jacob carlson } @fas.harvard.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0)]If you need to post process the unstructured elements after extraction, you can pass in a list of Element -> Element functions to the post_processors kwarg when you instantiate the UnstructuredFileLoader. This applies to other Unstructured loaders as well. Below is an example. Post processors are only applied if you run the loader in "elements" mode.from langchain.document_loaders import UnstructuredFileLoaderfrom unstructured.cleaners.core import clean_extra_whitespaceloader = UnstructuredFileLoader( "./example_data/layout-parser-paper.pdf", mode="elements", post_processors=[clean_extra_whitespace],)docs = loader.load()docs[:5] [Document(page_content='LayoutParser: A Uni�ed | https://python.langchain.com/docs/integrations/document_loaders/unstructured_file |
57a1900c876c-6 | [Document(page_content='LayoutParser: A Uni�ed Toolkit for Deep Learning Based Document Image Analysis', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((157.62199999999999, 114.23496279999995), (157.62199999999999, 146.5141628), (457.7358962799999, 146.5141628), (457.7358962799999, 114.23496279999995)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'Title'}), Document(page_content='Zejiang Shen1 ((cid:0)), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain Lee4, Jacob Carlson3, and Weining Li5', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((134.809, 168.64029940800003), (134.809, 192.2517444), (480.5464199080001, 192.2517444), (480.5464199080001, 168.64029940800003)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'UncategorizedText'}), Document(page_content='1 Allen Institute for AI shannons@allenai.org 2 Brown University ruochen zhang@brown.edu 3 Harvard University {melissadell,jacob | https://python.langchain.com/docs/integrations/document_loaders/unstructured_file |
57a1900c876c-7 | Brown University ruochen zhang@brown.edu 3 Harvard University {melissadell,jacob carlson}@fas.harvard.edu 4 University of Washington bcgl@cs.washington.edu 5 University of Waterloo w422li@uwaterloo.ca', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((207.23000000000002, 202.57205439999996), (207.23000000000002, 311.8195408), (408.12676, 311.8195408), (408.12676, 202.57205439999996)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'UncategorizedText'}), Document(page_content='1 2 0 2', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 213.36), (16.34, 253.36), (36.34, 253.36), (36.34, 213.36)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'UncategorizedText'}), Document(page_content='n u J', metadata={'source': './example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 258.36), (16.34, 286.14), (36.34, 286.14), (36.34, | https://python.langchain.com/docs/integrations/document_loaders/unstructured_file |
57a1900c876c-8 | 286.14), (36.34, 286.14), (36.34, 258.36)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'filename': 'layout-parser-paper.pdf', 'file_directory': './example_data', 'filetype': 'application/pdf', 'page_number': 1, 'category': 'Title'})]Unstructured API If you want to get up and running with less setup, you can simply run pip install unstructured and use UnstructuredAPIFileLoader or UnstructuredAPIFileIOLoader. That will process your document using the hosted Unstructured API. You can generate a free Unstructured API key here. The Unstructured documentation page will have instructions on how to generate an API key once they’re available. Check out the instructions here if you’d like to self-host the Unstructured API or run it locally.from langchain.document_loaders import UnstructuredAPIFileLoaderfilenames = ["example_data/fake.docx", "example_data/fake-email.eml"]loader = UnstructuredAPIFileLoader( file_path=filenames[0], api_key="FAKE_API_KEY",)docs = loader.load()docs[0] Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.docx'})You can also batch multiple files through the Unstructured API in a single API call using UnstructuredAPIFileLoader.loader = UnstructuredAPIFileLoader( file_path=filenames, api_key="FAKE_API_KEY",)docs = loader.load()docs[0] Document(page_content='Lorem ipsum dolor sit amet.\n\nThis is a test email to use for unit tests.\n\nImportant points:\n\nRoses are red\n\nViolets are blue', | https://python.langchain.com/docs/integrations/document_loaders/unstructured_file |
57a1900c876c-9 | points:\n\nRoses are red\n\nViolets are blue', metadata={'source': ['example_data/fake.docx', 'example_data/fake-email.eml']}) | https://python.langchain.com/docs/integrations/document_loaders/unstructured_file |
ac412183640b-0 | Psychic | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/psychic |
ac412183640b-1 | This notebook covers how to load documents from Psychic. See here for more details.Prerequisites Follow the Quick Start section in this | https://python.langchain.com/docs/integrations/document_loaders/psychic |
ac412183640b-2 | Psychic. See here for more details.Prerequisites​Follow the Quick Start section in this documentLog into the Psychic dashboard and get your secret keyInstall the frontend react library into your web app and have a user authenticate a connection. The connection will be created using the connection id that you specify.Loading documents​Use the PsychicLoader class to load in documents from a connection. Each connection has a connector id (corresponding to the SaaS app that was connected) and a connection id (which you passed in to the frontend library).# Uncomment this to install psychicapi if you don't already have it installedpoetry run pip -q install psychicapi [notice] A new release of pip is available: 23.0.1 -> 23.1.2 [notice] To update, run: pip install --upgrade pipfrom langchain.document_loaders import PsychicLoaderfrom psychicapi import ConnectorId# Create a document loader for google drive. We can also load from other connectors by setting the connector_id to the appropriate value e.g. ConnectorId.notion.value# This loader uses our test credentialsgoogle_drive_loader = PsychicLoader( api_key="7ddb61c1-8b6a-4d31-a58e-30d1c9ea480e", connector_id=ConnectorId.gdrive.value, connection_id="google-test",)documents = google_drive_loader.load()Converting the docs to embeddings​We can now convert these documents into embeddings and store them in a vector database like Chromafrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Chromafrom langchain.text_splitter import CharacterTextSplitterfrom langchain.llms import OpenAIfrom langchain.chains import RetrievalQAWithSourcesChaintext_splitter = | https://python.langchain.com/docs/integrations/document_loaders/psychic |
ac412183640b-3 | import OpenAIfrom langchain.chains import RetrievalQAWithSourcesChaintext_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)texts = text_splitter.split_documents(documents)embeddings = OpenAIEmbeddings()docsearch = Chroma.from_documents(texts, embeddings)chain = RetrievalQAWithSourcesChain.from_chain_type( OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever())chain({"question": "what is psychic?"}, return_only_outputs=True) | https://python.langchain.com/docs/integrations/document_loaders/psychic |
d6f8a5c2e880-0 | Images | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/image |
d6f8a5c2e880-1 | This covers how to load images such as JPG or PNG into a document format that we can use downstream.Using Unstructured #!pip | https://python.langchain.com/docs/integrations/document_loaders/image |
d6f8a5c2e880-2 | PNG into a document format that we can use downstream.Using Unstructured​#!pip install pdfminerfrom langchain.document_loaders.image import UnstructuredImageLoaderloader = UnstructuredImageLoader("layout-parser-paper-fast.jpg")data = loader.load()data[0] Document(page_content="LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n\n\n‘Zxjiang Shen' (F3}, Ruochen Zhang�, Melissa Dell*, Benjamin Charles Germain\nLeet, Jacob Carlson, and Weining LiF\n\n\nsugehen\n\nshangthrows, et\n\n“Abstract. Recent advanocs in document image analysis (DIA) have been\n‘pimarliy driven bythe application of neural networks dell roar\n{uteomer could be aly deployed in production and extended fo farther\n[nvetigtion. However, various factory ke lcely organize codebanee\nsnd sophisticated modal cnigurations compat the ey ree of\n‘erin! innovation by wide sence, Though there have been sng\n‘Hors to improve reuablty and simplify deep lees (DL) mode\n‘aon, sone of them ae optimized for challenge inthe demain of DIA,\nThis roprscte a major gap in the extng fol, sw DIA i eal to\nscademic research acon wie range of dpi in the social ssencee\n[rary for streamlining the sage of DL in DIA research and appicn\n‘tons The core LayoutFaraer brary comes with a sch of simple and\nIntative interfaee or applying and eutomiing DI. odel fr Inyo de\npltfom for sharing both protrined modes an fal document dist\n{ation pipeline We | https://python.langchain.com/docs/integrations/document_loaders/image |
d6f8a5c2e880-3 | de\npltfom for sharing both protrined modes an fal document dist\n{ation pipeline We demonutate that LayootPareer shea fr both\nlightweight and lrgeseledgtieation pipelines in eal-word uae ces\nThe leary pblely smal at Btspe://layost-pareergsthab So\n\n\n\n‘Keywords: Document Image Analysis» Deep Learning Layout Analysis\n‘Character Renguition - Open Serres dary « Tol\n\n\nIntroduction\n\n\n‘Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndoctiment image analysis (DIA) tea including document image clasiffeation [I]\n", lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg'}, lookup_index=0)Retain Elements Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg", mode="elements")data = loader.load()data[0] Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg', 'filename': 'layout-parser-paper-fast.jpg', 'page_number': 1, 'category': 'Title'}, lookup_index=0) | https://python.langchain.com/docs/integrations/document_loaders/image |
7a3dbb140630-0 | Pandas DataFrame | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/pandas_dataframe |
7a3dbb140630-1 | This notebook goes over how to load data from a pandas DataFrame.#!pip install pandasimport pandas as pddf = | https://python.langchain.com/docs/integrations/document_loaders/pandas_dataframe |
7a3dbb140630-2 | over how to load data from a pandas DataFrame.#!pip install pandasimport pandas as pddf = pd.read_csv("example_data/mlb_teams_2012.csv")df.head()<div><style scoped> .dataframe tbody tr th:only-of-type { vertical-align: middle; } .dataframe tbody tr th { vertical-align: top; } .dataframe thead th { text-align: right; }</style><table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>Team</th> <th>"Payroll (millions)"</th> <th>"Wins"</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>Nationals</td> <td>81.34</td> <td>98</td> </tr> <tr> <th>1</th> <td>Reds</td> <td>82.20</td> <td>97</td> </tr> <tr> <th>2</th> <td>Yankees</td> <td>197.96</td> | https://python.langchain.com/docs/integrations/document_loaders/pandas_dataframe |
7a3dbb140630-3 | <td>197.96</td> <td>95</td> </tr> <tr> <th>3</th> <td>Giants</td> <td>117.62</td> <td>94</td> </tr> <tr> <th>4</th> <td>Braves</td> <td>83.31</td> <td>94</td> </tr> </tbody></table></div>from langchain.document_loaders import DataFrameLoaderloader = DataFrameLoader(df, page_content_column="Team")loader.load() [Document(page_content='Nationals', metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}), Document(page_content='Reds', metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}), Document(page_content='Yankees', metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}), Document(page_content='Giants', metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}), Document(page_content='Braves', metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}), Document(page_content='Athletics', metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94}), | https://python.langchain.com/docs/integrations/document_loaders/pandas_dataframe |
7a3dbb140630-4 | 55.37, ' "Wins"': 94}), Document(page_content='Rangers', metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93}), Document(page_content='Orioles', metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93}), Document(page_content='Rays', metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}), Document(page_content='Angels', metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89}), Document(page_content='Tigers', metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88}), Document(page_content='Cardinals', metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88}), Document(page_content='Dodgers', metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86}), Document(page_content='White Sox', metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85}), Document(page_content='Brewers', metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83}), Document(page_content='Phillies', metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81}), Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}), Document(page_content='Pirates', | https://python.langchain.com/docs/integrations/document_loaders/pandas_dataframe |
7a3dbb140630-5 | ' "Wins"': 81}), Document(page_content='Pirates', metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}), Document(page_content='Padres', metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}), Document(page_content='Mariners', metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}), Document(page_content='Mets', metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}), Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}), Document(page_content='Royals', metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72}), Document(page_content='Marlins', metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69}), Document(page_content='Red Sox', metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69}), Document(page_content='Indians', metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68}), Document(page_content='Twins', metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66}), Document(page_content='Rockies', metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64}), Document(page_content='Cubs', metadata={' "Payroll | https://python.langchain.com/docs/integrations/document_loaders/pandas_dataframe |
7a3dbb140630-6 | 64}), Document(page_content='Cubs', metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61}), Document(page_content='Astros', metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55})]# Use lazy load for larger table, which won't read the full table into memoryfor i in loader.lazy_load(): print(i) page_content='Nationals' metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98} page_content='Reds' metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97} page_content='Yankees' metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95} page_content='Giants' metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94} page_content='Braves' metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94} page_content='Athletics' metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94} page_content='Rangers' metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93} page_content='Orioles' metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93} page_content='Rays' metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90} | https://python.langchain.com/docs/integrations/document_loaders/pandas_dataframe |
7a3dbb140630-7 | 64.17, ' "Wins"': 90} page_content='Angels' metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89} page_content='Tigers' metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88} page_content='Cardinals' metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88} page_content='Dodgers' metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86} page_content='White Sox' metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85} page_content='Brewers' metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83} page_content='Phillies' metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81} page_content='Diamondbacks' metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81} page_content='Pirates' metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79} page_content='Padres' metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76} page_content='Mariners' metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75} page_content='Mets' metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74} | https://python.langchain.com/docs/integrations/document_loaders/pandas_dataframe |
7a3dbb140630-8 | (millions)"': 93.35, ' "Wins"': 74} page_content='Blue Jays' metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73} page_content='Royals' metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72} page_content='Marlins' metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69} page_content='Red Sox' metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69} page_content='Indians' metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68} page_content='Twins' metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66} page_content='Rockies' metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64} page_content='Cubs' metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61} page_content='Astros' metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55} | https://python.langchain.com/docs/integrations/document_loaders/pandas_dataframe |
2cfa95df34d8-0 | Azure Blob Storage File | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/azure_blob_storage_file |
2cfa95df34d8-1 | Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol, Network | https://python.langchain.com/docs/integrations/document_loaders/azure_blob_storage_file |
2cfa95df34d8-2 | in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol, Network File System (NFS) protocol, and Azure Files REST API.This covers how to load document objects from Azure Files.#!pip install azure-storage-blobfrom langchain.document_loaders import AzureBlobStorageFileLoaderloader = AzureBlobStorageFileLoader( conn_str="<connection string>", container="<container name>", blob_name="<blob name>",)loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpxvave6wl/fake.docx'}, lookup_index=0)] | https://python.langchain.com/docs/integrations/document_loaders/azure_blob_storage_file |
0df6d1f560b5-0 | Datadog Logs | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/datadog_logs
0df6d1f560b5-1 | Datadog Logs: Datadog is a monitoring and analytics platform for cloud-scale applications.This loader fetches the logs from your applications in Datadog using | https://python.langchain.com/docs/integrations/document_loaders/datadog_logs
0df6d1f560b5-2 | analytics platform for cloud-scale applications.This loader fetches the logs from your applications in Datadog using the datadog_api_client Python package. You must initialize the loader with your Datadog API key and APP key, and you need to pass in the query to extract the desired logs.from langchain.document_loaders import DatadogLogsLoader#!pip install datadog-api-clientquery = "service:agent status:error"loader = DatadogLogsLoader( query=query, api_key=DD_API_KEY, app_key=DD_APP_KEY, from_time=1688732708951, # Optional, timestamp in milliseconds to_time=1688736308951, # Optional, timestamp in milliseconds limit=100, # Optional, default is 100)documents = loader.load()documents [Document(page_content='message: grep: /etc/datadog-agent/system-probe.yaml: No such file or directory', metadata={'id': 'AgAAAYkwpLImvkjRpQAAAAAAAAAYAAAAAEFZa3dwTUFsQUFEWmZfLU5QdElnM3dBWQAAACQAAAAAMDE4OTMwYTQtYzk3OS00MmJjLTlhNDAtOTY4N2EwY2I5ZDdk', 'status': 'error', 'service': 'agent', 'tags': ['accessible-from-goog-gke-node', 'allow-external-ingress-high-ports', 'allow-external-ingress-http', 'allow-external-ingress-https', 'container_id:c7d8ecd27b5b3cfdf3b0df04b8965af6f233f56b7c3c2ffabfab5e3b6ccbd6a5', | https://python.langchain.com/docs/integrations/document_loaders/datadog_logs |
0df6d1f560b5-3 | 'container_name:lab_datadog_1', 'datadog.pipelines:false', 'datadog.submission_auth:private_api_key', 'docker_image:datadog/agent:7.41.1', 'env:dd101-dev', 'hostname:lab-host', 'image_name:datadog/agent', 'image_tag:7.41.1', 'instance-id:7497601202021312403', 'instance-type:custom-1-4096', 'instruqt_aws_accounts:', 'instruqt_azure_subscriptions:', 'instruqt_gcp_projects:', 'internal-hostname:lab-host.d4rjybavkary.svc.cluster.local', 'numeric_project_id:3390740675', 'p-d4rjybavkary', 'project:instruqt-prod', 'service:agent', 'short_image:agent', 'source:agent', 'zone:europe-west1-b'], 'timestamp': datetime.datetime(2023, 7, 7, 13, 57, 27, 206000, tzinfo=tzutc())}), Document(page_content='message: grep: /etc/datadog-agent/system-probe.yaml: No such file or directory', metadata={'id': 'AgAAAYkwpLImvkjRpgAAAAAAAAAYAAAAAEFZa3dwTUFsQUFEWmZfLU5QdElnM3dBWgAAACQAAAAAMDE4OTMwYTQtYzk3OS00MmJjLTlhNDAtOTY4N2EwY2I5ZDdk', 'status': 'error', 'service': 'agent', 'tags': ['accessible-from-goog-gke-node', 'allow-external-ingress-high-ports', 'allow-external-ingress-http', | https://python.langchain.com/docs/integrations/document_loaders/datadog_logs |
0df6d1f560b5-4 | 'allow-external-ingress-high-ports', 'allow-external-ingress-http', 'allow-external-ingress-https', 'container_id:c7d8ecd27b5b3cfdf3b0df04b8965af6f233f56b7c3c2ffabfab5e3b6ccbd6a5', 'container_name:lab_datadog_1', 'datadog.pipelines:false', 'datadog.submission_auth:private_api_key', 'docker_image:datadog/agent:7.41.1', 'env:dd101-dev', 'hostname:lab-host', 'image_name:datadog/agent', 'image_tag:7.41.1', 'instance-id:7497601202021312403', 'instance-type:custom-1-4096', 'instruqt_aws_accounts:', 'instruqt_azure_subscriptions:', 'instruqt_gcp_projects:', 'internal-hostname:lab-host.d4rjybavkary.svc.cluster.local', 'numeric_project_id:3390740675', 'p-d4rjybavkary', 'project:instruqt-prod', 'service:agent', 'short_image:agent', 'source:agent', 'zone:europe-west1-b'], 'timestamp': datetime.datetime(2023, 7, 7, 13, 57, 27, 206000, tzinfo=tzutc())})] | https://python.langchain.com/docs/integrations/document_loaders/datadog_logs
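Since from_time and to_time are millisecond timestamps, a common pattern is to compute them relative to the current time and to pull the API and application keys from environment variables. A minimal sketch under those assumptions (the DD_API_KEY and DD_APP_KEY variable names are illustrative, not mandated by the loader):

```python
import os
import time

from langchain.document_loaders import DatadogLogsLoader

# Hypothetical environment-variable names; any mechanism that supplies the keys works.
DD_API_KEY = os.environ["DD_API_KEY"]
DD_APP_KEY = os.environ["DD_APP_KEY"]

# from_time/to_time are millisecond timestamps; this window covers the last hour.
now_ms = int(time.time() * 1000)
one_hour_ago_ms = now_ms - 60 * 60 * 1000

loader = DatadogLogsLoader(
    query="service:agent status:error",
    api_key=DD_API_KEY,
    app_key=DD_APP_KEY,
    from_time=one_hour_ago_ms,
    to_time=now_ms,
    limit=100,
)
documents = loader.load()
```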
f775ca91f2d8-0 | Roam | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/roam
f775ca91f2d8-1 | Roam: ROAM is a note-taking tool for networked thought, designed to create a personal knowledge base.This notebook covers how to load documents from a | https://python.langchain.com/docs/integrations/document_loaders/roam
f775ca91f2d8-2 | networked thought, designed to create a personal knowledge base.This notebook covers how to load documents from a Roam database. This takes a lot of inspiration from the example repo here.🧑 Instructions for ingesting your own dataset: Export your dataset from Roam Research. You can do this by clicking on the three dots in the upper right hand corner and then clicking Export.When exporting, make sure to select the Markdown & CSV format option.This will produce a .zip file in your Downloads folder. Move the .zip file into this repository.Run the following command to unzip the zip file (replace the Export... with your own file name as needed).unzip Roam-Export-1675782732639.zip -d Roam_DBfrom langchain.document_loaders import RoamLoaderloader = RoamLoader("Roam_DB")docs = loader.load() | https://python.langchain.com/docs/integrations/document_loaders/roam
a5257b75d73d-0 | Rockset | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/rockset
a5257b75d73d-1 | Rockset: Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested | https://python.langchain.com/docs/integrations/document_loaders/rockset
a5257b75d73d-2 | which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute optimized, making it suitable for serving high concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups).This notebook demonstrates how to use Rockset as a document loader in langchain. To get started, make sure you have a Rockset account and an API key available.Setting up the environment: Go to the Rockset console and get an API key. Find your API region from the API reference. For the purpose of this notebook, we will assume you're using Rockset from Oregon (us-west-2).Set the environment variable ROCKSET_API_KEY.Install the Rockset Python client, which will be used by langchain to interact with the Rockset database.$ pip3 install rocksetLoading Documents: The Rockset integration with LangChain allows you to load documents from Rockset collections with SQL queries. In order to do this you must construct a RocksetLoader object. Here is an example snippet that initializes a RocksetLoader.from langchain.document_loaders import RocksetLoaderfrom rockset import RocksetClient, Regions, modelsloader = RocksetLoader( RocksetClient(Regions.usw2a1, "<api key>"), models.QueryRequestSql(query="SELECT * FROM langchain_demo LIMIT 3"), # SQL query ["text"], # content columns metadata_keys=["id", "date"], # metadata columns)Here, you can see that the following query is run:SELECT * FROM langchain_demo LIMIT 3The text column in the collection is used as the page content, and the record's id and date columns are used as metadata (if you do not pass anything into metadata_keys, the whole Rockset document will | https://python.langchain.com/docs/integrations/document_loaders/rockset
a5257b75d73d-3 | used as metadata (if you do not pass anything into metadata_keys, the whole Rockset document will be used as metadata). To execute the query and access an iterator over the resulting Documents, run:loader.lazy_load()To execute the query and access all resulting Documents at once, run:loader.load()Here is an example response of loader.load():[ Document( page_content="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas a libero porta, dictum ipsum eget, hendrerit neque. Morbi blandit, ex ut suscipit viverra, enim velit tincidunt tellus, a tempor velit nunc et ex. Proin hendrerit odio nec convallis lobortis. Aenean in purus dolor. Vestibulum orci orci, laoreet eget magna in, commodo euismod justo.", metadata={"id": 83209, "date": "2022-11-13T18:26:45.000000Z"} ), Document( page_content="Integer at finibus odio. Nam sit amet enim cursus lacus gravida feugiat vestibulum sed libero. Aenean eleifend est quis elementum tincidunt. Curabitur sit amet ornare erat. Nulla id dolor ut magna volutpat sodales fringilla vel ipsum. Donec ultricies, lacus sed fermentum dignissim, lorem elit aliquam ligula, sed suscipit sapien purus nec ligula.", metadata={"id": 89313, "date": "2022-11-13T18:28:53.000000Z"} ), Document( page_content="Morbi tortor | https://python.langchain.com/docs/integrations/document_loaders/rockset |
a5257b75d73d-4 | ), Document( page_content="Morbi tortor enim, commodo id efficitur vitae, fringilla nec mi. Nullam molestie faucibus aliquet. Praesent a est facilisis, condimentum justo sit amet, viverra erat. Fusce volutpat nisi vel purus blandit, et facilisis felis accumsan. Phasellus luctus ligula ultrices tellus tempor hendrerit. Donec at ultricies leo.", metadata={"id": 87732, "date": "2022-11-13T18:49:04.000000Z"} )]Using multiple columns as content: You can choose to use multiple columns as content:from langchain.document_loaders import RocksetLoaderfrom rockset import RocksetClient, Regions, modelsloader = RocksetLoader( RocksetClient(Regions.usw2a1, "<api key>"), models.QueryRequestSql(query="SELECT * FROM langchain_demo WHERE id=38 LIMIT 1"), ["sentence1", "sentence2"], # TWO content columns)Assuming the "sentence1" field is "This is the first sentence." and the "sentence2" field is "This is the second sentence.", the page_content of the resulting Document would be:This is the first sentence.This is the second sentence.You can define your own function to join content columns by setting the content_columns_joiner argument in the RocksetLoader constructor. content_columns_joiner is a method that takes in a List[Tuple[str, Any]] as an argument, representing a list of tuples of (column name, column value). By default, this is a method that joins each column value with a new line.For example, if you wanted to join sentence1 and sentence2 with a | https://python.langchain.com/docs/integrations/document_loaders/rockset
a5257b75d73d-5 | value with a new line.For example, if you wanted to join sentence1 and sentence2 with a space instead of a new line, you could set content_columns_joiner like so:RocksetLoader( RocksetClient(Regions.usw2a1, "<api key>"), models.QueryRequestSql(query="SELECT * FROM langchain_demo WHERE id=38 LIMIT 1"), ["sentence1", "sentence2"], content_columns_joiner=lambda docs: " ".join( [doc[1] for doc in docs] ), # join with space instead of \n)The page_content of the resulting Document would be:This is the first sentence. This is the second sentence.Oftentimes you want to include the column name in the page_content. You can do that like this:RocksetLoader( RocksetClient(Regions.usw2a1, "<api key>"), models.QueryRequestSql(query="SELECT * FROM langchain_demo WHERE id=38 LIMIT 1"), ["sentence1", "sentence2"], content_columns_joiner=lambda docs: "\n".join( [f"{doc[0]}: {doc[1]}" for doc in docs] ),)This would result in the following page_content:sentence1: This is the first sentence.sentence2: This is the second sentence. | https://python.langchain.com/docs/integrations/document_loaders/rockset
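For large result sets, the lazy_load() method mentioned above returns an iterator, so documents can be processed one at a time instead of being materialized all at once. A minimal sketch, reusing the collection name, API-key placeholder, and column names from the snippets above:

```python
from langchain.document_loaders import RocksetLoader
from rockset import RocksetClient, Regions, models

loader = RocksetLoader(
    RocksetClient(Regions.usw2a1, "<api key>"),
    models.QueryRequestSql(query="SELECT * FROM langchain_demo LIMIT 100"),
    ["text"],                      # content column
    metadata_keys=["id", "date"],  # metadata columns
)

# lazy_load() yields Documents one by one instead of returning the full list.
for doc in loader.lazy_load():
    print(doc.metadata["id"], doc.page_content[:80])
```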
c058b12fd894-0 | Discord | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/discord
c058b12fd894-1 | Discord: Discord is a VoIP and instant messaging social platform. Users have the ability to communicate with voice calls, video calls, text messaging, media and files | https://python.langchain.com/docs/integrations/document_loaders/discord
c058b12fd894-2 | Users have the ability to communicate with voice calls, video calls, text messaging, media and files in private chats or as part of communities called "servers". A server is a collection of persistent chat rooms and voice channels which can be accessed via invite links.Follow these steps to download your Discord data:Go to your User SettingsThen go to Privacy and SafetyHead over to the Request all of my Data and click on Request Data buttonIt might take 30 days for you to receive your data. You'll receive an email at the address which is registered with Discord. That email will have a download button you can use to download your personal Discord data.import pandas as pdimport ospath = input('Please enter the path to the contents of the Discord "messages" folder: ')li = []for f in os.listdir(path): expected_csv_path = os.path.join(path, f, "messages.csv") csv_exists = os.path.isfile(expected_csv_path) if csv_exists: df = pd.read_csv(expected_csv_path, index_col=None, header=0) li.append(df)df = pd.concat(li, axis=0, ignore_index=True, sort=False)from langchain.document_loaders.discord import DiscordChatLoaderloader = DiscordChatLoader(df, user_id_col="ID")print(loader.load()) | https://python.langchain.com/docs/integrations/document_loaders/discord
f5475ac79013-0 | PySpark DataFrame Loader | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/pyspark_dataframe
f5475ac79013-1 | PySpark DataFrame Loader: This notebook goes over how to load data from a PySpark DataFrame.#!pip install pysparkfrom pyspark.sql import SparkSessionspark | https://python.langchain.com/docs/integrations/document_loaders/pyspark_dataframe
f5475ac79013-2 | data from a PySpark DataFrame.#!pip install pysparkfrom pyspark.sql import SparkSessionspark = SparkSession.builder.getOrCreate() Setting default log level to "WARN". To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 23/05/31 14:08:33 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicabledf = spark.read.csv("example_data/mlb_teams_2012.csv", header=True)from langchain.document_loaders import PySparkDataFrameLoaderloader = PySparkDataFrameLoader(spark, df, page_content_column="Team")loader.load() [Stage 8:> (0 + 1) / 1] [Document(page_content='Nationals', metadata={' "Payroll (millions)"': ' 81.34', ' "Wins"': ' 98'}), Document(page_content='Reds', metadata={' "Payroll (millions)"': ' 82.20', ' "Wins"': ' 97'}), Document(page_content='Yankees', metadata={' "Payroll (millions)"': ' 197.96', ' "Wins"': ' 95'}), Document(page_content='Giants', metadata={' "Payroll (millions)"': ' 117.62', ' "Wins"': ' 94'}), | https://python.langchain.com/docs/integrations/document_loaders/pyspark_dataframe |
f5475ac79013-3 | 117.62', ' "Wins"': ' 94'}), Document(page_content='Braves', metadata={' "Payroll (millions)"': ' 83.31', ' "Wins"': ' 94'}), Document(page_content='Athletics', metadata={' "Payroll (millions)"': ' 55.37', ' "Wins"': ' 94'}), Document(page_content='Rangers', metadata={' "Payroll (millions)"': ' 120.51', ' "Wins"': ' 93'}), Document(page_content='Orioles', metadata={' "Payroll (millions)"': ' 81.43', ' "Wins"': ' 93'}), Document(page_content='Rays', metadata={' "Payroll (millions)"': ' 64.17', ' "Wins"': ' 90'}), Document(page_content='Angels', metadata={' "Payroll (millions)"': ' 154.49', ' "Wins"': ' 89'}), Document(page_content='Tigers', metadata={' "Payroll (millions)"': ' 132.30', ' "Wins"': ' 88'}), Document(page_content='Cardinals', metadata={' "Payroll (millions)"': ' 110.30', ' "Wins"': ' 88'}), Document(page_content='Dodgers', metadata={' "Payroll (millions)"': ' 95.14', ' "Wins"': ' | https://python.langchain.com/docs/integrations/document_loaders/pyspark_dataframe |
f5475ac79013-4 | ' 95.14', ' "Wins"': ' 86'}), Document(page_content='White Sox', metadata={' "Payroll (millions)"': ' 96.92', ' "Wins"': ' 85'}), Document(page_content='Brewers', metadata={' "Payroll (millions)"': ' 97.65', ' "Wins"': ' 83'}), Document(page_content='Phillies', metadata={' "Payroll (millions)"': ' 174.54', ' "Wins"': ' 81'}), Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': ' 74.28', ' "Wins"': ' 81'}), Document(page_content='Pirates', metadata={' "Payroll (millions)"': ' 63.43', ' "Wins"': ' 79'}), Document(page_content='Padres', metadata={' "Payroll (millions)"': ' 55.24', ' "Wins"': ' 76'}), Document(page_content='Mariners', metadata={' "Payroll (millions)"': ' 81.97', ' "Wins"': ' 75'}), Document(page_content='Mets', metadata={' "Payroll (millions)"': ' 93.35', ' "Wins"': ' 74'}), Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': ' 75.48', ' "Wins"': ' | https://python.langchain.com/docs/integrations/document_loaders/pyspark_dataframe |
f5475ac79013-5 | ' 75.48', ' "Wins"': ' 73'}), Document(page_content='Royals', metadata={' "Payroll (millions)"': ' 60.91', ' "Wins"': ' 72'}), Document(page_content='Marlins', metadata={' "Payroll (millions)"': ' 118.07', ' "Wins"': ' 69'}), Document(page_content='Red Sox', metadata={' "Payroll (millions)"': ' 173.18', ' "Wins"': ' 69'}), Document(page_content='Indians', metadata={' "Payroll (millions)"': ' 78.43', ' "Wins"': ' 68'}), Document(page_content='Twins', metadata={' "Payroll (millions)"': ' 94.08', ' "Wins"': ' 66'}), Document(page_content='Rockies', metadata={' "Payroll (millions)"': ' 78.06', ' "Wins"': ' 64'}), Document(page_content='Cubs', metadata={' "Payroll (millions)"': ' 88.19', ' "Wins"': ' 61'}), Document(page_content='Astros', metadata={' "Payroll (millions)"': ' 60.65', ' "Wins"': ' 55'})] | https://python.langchain.com/docs/integrations/document_loaders/pyspark_dataframe
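If the example CSV is not available locally, the same loader can be exercised against a DataFrame built in memory. A minimal sketch under that assumption (the rows and column names below simply mirror the output above and are otherwise arbitrary):

```python
from pyspark.sql import SparkSession

from langchain.document_loaders import PySparkDataFrameLoader

spark = SparkSession.builder.getOrCreate()

# Build a small DataFrame in memory instead of reading example_data/mlb_teams_2012.csv.
rows = [
    ("Nationals", "81.34", "98"),
    ("Reds", "82.20", "97"),
    ("Yankees", "197.96", "95"),
]
df = spark.createDataFrame(rows, ["Team", "Payroll (millions)", "Wins"])

# "Team" becomes the page_content; the remaining columns land in metadata.
loader = PySparkDataFrameLoader(spark, df, page_content_column="Team")
for doc in loader.load():
    print(doc.page_content, doc.metadata)
```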
4fc3854a6f19-0 | BiliBili | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/bilibili
4fc3854a6f19-1 | BiliBili: Bilibili is one of the most beloved long-form video sites in China.This loader utilizes the bilibili-api to fetch the text transcript | https://python.langchain.com/docs/integrations/document_loaders/bilibili
4fc3854a6f19-2 | most beloved long-form video sites in China.This loader utilizes the bilibili-api to fetch the text transcript from Bilibili.With this BiliBiliLoader, users can easily obtain the transcript of their desired video content on the platform.#!pip install bilibili-api-pythonfrom langchain.document_loaders import BiliBiliLoaderloader = BiliBiliLoader(["https://www.bilibili.com/video/BV1xt411o7Xu/"])loader.load() | https://python.langchain.com/docs/integrations/document_loaders/bilibili
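The loader takes a list of video URLs, so several transcripts can be fetched in one call. A minimal sketch (the second URL is a placeholder, and the metadata keys are inspected defensively since their exact shape is not shown on the original page):

```python
from langchain.document_loaders import BiliBiliLoader

loader = BiliBiliLoader(
    [
        "https://www.bilibili.com/video/BV1xt411o7Xu/",
        "https://www.bilibili.com/video/<another-video-id>/",  # placeholder
    ]
)
docs = loader.load()
for doc in docs:
    # Print a short transcript snippet and whatever metadata the loader returned.
    print(doc.page_content[:100], doc.metadata)
```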
287d98ff4b36-0 | EverNote | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/evernote
287d98ff4b36-1 | EverNote: EverNote is intended for archiving and creating notes in which photos, audio and saved web content can be embedded. Notes are stored in virtual | https://python.langchain.com/docs/integrations/document_loaders/evernote
287d98ff4b36-2 | creating notes in which photos, audio and saved web content can be embedded. Notes are stored in virtual "notebooks" and can be tagged, annotated, edited, searched, and exported.This notebook shows how to load an Evernote export file (.enex) from disk.A document will be created for each note in the export.# lxml and html2text are required to parse EverNote notes# !pip install lxml# !pip install html2textfrom langchain.document_loaders import EverNoteLoader# By default all notes are combined into a single Documentloader = EverNoteLoader("example_data/testing.enex")loader.load() [Document(page_content='testing this\n\nwhat happens?\n\nto the world?**Jan - March 2022**', metadata={'source': 'example_data/testing.enex'})]# It's likely more useful to return a Document for each noteloader = EverNoteLoader("example_data/testing.enex", load_single_document=False)loader.load() [Document(page_content='testing this\n\nwhat happens?\n\nto the world?', metadata={'title': 'testing', 'created': time.struct_time(tm_year=2023, tm_mon=2, tm_mday=9, tm_hour=3, tm_min=47, tm_sec=46, tm_wday=3, tm_yday=40, tm_isdst=-1), 'updated': time.struct_time(tm_year=2023, tm_mon=2, tm_mday=9, tm_hour=3, tm_min=53, tm_sec=28, tm_wday=3, tm_yday=40, tm_isdst=-1), 'note-attributes.author': 'Harrison Chase', 'source': 'example_data/testing.enex'}), Document(page_content='**Jan - March 2022**', metadata={'title': 'Summer Training Program', 'created': | https://python.langchain.com/docs/integrations/document_loaders/evernote |
287d98ff4b36-3 | - March 2022**', metadata={'title': 'Summer Training Program', 'created': time.struct_time(tm_year=2022, tm_mon=12, tm_mday=27, tm_hour=1, tm_min=59, tm_sec=48, tm_wday=1, tm_yday=361, tm_isdst=-1), 'note-attributes.author': 'Mike McGarry', 'note-attributes.source': 'mobile.iphone', 'source': 'example_data/testing.enex'})] | https://python.langchain.com/docs/integrations/document_loaders/evernote
2305f18d97d8-0 | acreom | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/acreom
2305f18d97d8-1 | acreom: acreom is a dev-first knowledge base with tasks running on local markdown files.Below is an example on how to load a local acreom vault into | https://python.langchain.com/docs/integrations/document_loaders/acreom
2305f18d97d8-2 | running on local markdown files.Below is an example on how to load a local acreom vault into Langchain. As the local vault in acreom is a folder of plain text .md files, the loader requires the path to the directory. Vault files may contain some metadata which is stored as a YAML header. These values will be added to the document’s metadata if collect_metadata is set to true. from langchain.document_loaders import AcreomLoaderloader = AcreomLoader("<path-to-acreom-vault>", collect_metadata=False)docs = loader.load() | https://python.langchain.com/docs/integrations/document_loaders/acreom
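To see the YAML-header behavior described above, the same call can be made with collect_metadata=True. A minimal sketch, reusing the vault-path placeholder from the snippet above:

```python
from langchain.document_loaders import AcreomLoader

# With collect_metadata=True, YAML front matter from each .md file is merged
# into the corresponding document's metadata. The path is a placeholder.
loader = AcreomLoader("<path-to-acreom-vault>", collect_metadata=True)
docs = loader.load()

for doc in docs[:3]:
    print(doc.metadata)
```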
9f3b7bafd339-0 | Notebook | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/example_data/notebook
9f3b7bafd339-1 | Notebook: This notebook covers how to load data from an .ipynb notebook into a format suitable for LangChain.from | https://python.langchain.com/docs/integrations/document_loaders/example_data/notebook
9f3b7bafd339-2 | covers how to load data from an .ipynb notebook into a format suitable for LangChain.from langchain.document_loaders import NotebookLoaderloader = NotebookLoader("example_data/notebook.ipynb")NotebookLoader.load() loads the .ipynb notebook file into a Document object.Parameters:include_outputs (bool): whether to include cell outputs in the resulting document (default is False).max_output_length (int): the maximum number of characters to include from each cell output (default is 10).remove_newline (bool): whether to remove newline characters from the cell sources and outputs (default is False).traceback (bool): whether to include full traceback (default is False).loader.load(include_outputs=True, max_output_length=20, remove_newline=True) | https://python.langchain.com/docs/integrations/document_loaders/example_data/notebook
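In recent LangChain releases these options are typically passed to the NotebookLoader constructor rather than to load(); a minimal sketch under that assumption, using the same example notebook path:

```python
from langchain.document_loaders import NotebookLoader

# Assumes the options are accepted by the constructor (as in recent LangChain versions).
loader = NotebookLoader(
    "example_data/notebook.ipynb",
    include_outputs=True,   # include cell outputs in the document text
    max_output_length=20,   # truncate each output to 20 characters
    remove_newline=True,    # strip newline characters from sources and outputs
)
docs = loader.load()
print(docs[0].page_content[:200])
```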
f06f9092bd73-0 | Facebook Chat | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/facebook_chat
f06f9092bd73-1 | Facebook Chat: Messenger is an American proprietary instant messaging app and platform developed by Meta Platforms. Originally developed as Facebook Chat in 2008, the company revamped its messaging service | https://python.langchain.com/docs/integrations/document_loaders/facebook_chat
f06f9092bd73-2 | by Meta Platforms. Originally developed as Facebook Chat in 2008, the company revamped its messaging service in 2010.This notebook covers how to load data from the Facebook Chats into a format that can be ingested into LangChain.# pip install pandasfrom langchain.document_loaders import FacebookChatLoaderloader = FacebookChatLoader("example_data/facebook_chat.json")loader.load() [Document(page_content='User 2 on 2023-02-05 03:46:11: Bye!\n\nUser 1 on 2023-02-05 03:43:55: Oh no worries! Bye\n\nUser 2 on 2023-02-05 03:24:37: No Im sorry it was my mistake, the blue one is not for sale\n\nUser 1 on 2023-02-05 03:05:40: I thought you were selling the blue one!\n\nUser 1 on 2023-02-05 03:05:09: Im not interested in this bag. Im interested in the blue one!\n\nUser 2 on 2023-02-05 03:04:28: Here is $129\n\nUser 2 on 2023-02-05 03:04:05: Online is at least $100\n\nUser 1 on 2023-02-05 02:59:59: How much do you want?\n\nUser 2 on 2023-02-04 22:17:56: Goodmorning! $50 is too low.\n\nUser 1 on 2023-02-04 14:17:02: Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!\n\n', metadata={'source': 'example_data/facebook_chat.json'})] | https://python.langchain.com/docs/integrations/document_loaders/facebook_chat
f06f9092bd73-3 | Thanks!\n\n', metadata={'source': 'example_data/facebook_chat.json'})] | https://python.langchain.com/docs/integrations/document_loaders/facebook_chat
aa9fcc1f5d89-0 | Google Cloud Storage Directory | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/google_cloud_storage_directory
aa9fcc1f5d89-1 | Google Cloud Storage Directory: Google Cloud Storage is a managed service for storing unstructured data.This covers how to load document objects from a Google Cloud Storage | https://python.langchain.com/docs/integrations/document_loaders/google_cloud_storage_directory
aa9fcc1f5d89-2 | a managed service for storing unstructured data.This covers how to load document objects from a Google Cloud Storage (GCS) directory (bucket).# !pip install google-cloud-storagefrom langchain.document_loaders import GCSDirectoryLoaderloader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc")loader.load() /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpz37njh7u/fake.docx'}, lookup_index=0)]Specifying a prefix: You can | https://python.langchain.com/docs/integrations/document_loaders/google_cloud_storage_directory
aa9fcc1f5d89-3 | lookup_index=0)]Specifying a prefix: You can also specify a prefix for more fine-grained control over what files to load.loader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc", prefix="fake")loader.load() /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) /Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpylg6291i/fake.docx'}, lookup_index=0)] | https://python.langchain.com/docs/integrations/document_loaders/google_cloud_storage_directory
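The warnings in the output above recommend using a service account. One common way to do that (not shown on the original page) is to point GOOGLE_APPLICATION_CREDENTIALS at a service-account key before constructing the loader; the key path, project, and bucket below are placeholders:

```python
import os

from langchain.document_loaders import GCSDirectoryLoader

# Point Google auth at a service-account key to avoid the end-user-credentials
# warning shown above. The path, project, and bucket names are placeholders.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account.json"

loader = GCSDirectoryLoader(project_name="my-project", bucket="my-bucket", prefix="fake")
docs = loader.load()
print(len(docs))
```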
23d0f6159e97-0 | Modern Treasury | 🦜🔗 Langchain | https://python.langchain.com/docs/integrations/document_loaders/modern_treasury
23d0f6159e97-1 | Modern Treasury: Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money.Connect to banks and payment systemsTrack transactions and balances | https://python.langchain.com/docs/integrations/document_loaders/modern_treasury
23d0f6159e97-2 | unified platform to power products and processes that move money.Connect to banks and payment systemsTrack transactions and balances in real-timeAutomate payment operations for scaleThis notebook covers how to load data from the Modern Treasury REST API into a format that can be ingested into LangChain, along with example usage for vectorization.import osfrom langchain.document_loaders import ModernTreasuryLoaderfrom langchain.indexes import VectorstoreIndexCreatorThe Modern Treasury API requires an organization ID and API key, which can be found in the Modern Treasury dashboard within developer settings.This document loader also requires a resource option which defines what data you want to load.The following resources are available:payment_orders Documentationexpected_payments Documentationreturns Documentationincoming_payment_details Documentationcounterparties Documentationinternal_accounts Documentationexternal_accounts Documentationtransactions Documentationledgers Documentationledger_accounts Documentationledger_transactions Documentationevents Documentationinvoices Documentationmodern_treasury_loader = ModernTreasuryLoader("payment_orders")# Create a vectorstore retriever from the loader# see https://python.langchain.com/en/latest/modules/data_connection/getting_started.html for more detailsindex = VectorstoreIndexCreator().from_loaders([modern_treasury_loader])modern_treasury_doc_retriever = index.vectorstore.as_retriever() | https://python.langchain.com/docs/integrations/document_loaders/modern_treasury
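Once the index is built, the retriever created above can be queried like any other LangChain retriever. A minimal sketch that continues from the snippet above (the query string is purely illustrative):

```python
# Continues from the snippet above, which defines modern_treasury_doc_retriever.
relevant_docs = modern_treasury_doc_retriever.get_relevant_documents(
    "recent payment orders"  # illustrative query, not from the original page
)
for doc in relevant_docs:
    print(doc.page_content[:100])
```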