JSON Output Functions Parser
============================
The JSON Output Functions Parser is a useful tool for parsing structured JSON function responses, such as those from [OpenAI functions](/v0.1/docs/modules/model_io/chat/function_calling/). This parser is particularly useful when you need to extract specific information from complex JSON responses.
Here's how it works:
1. **Output Parser**: You can either pass in a predefined `outputParser`, or the parser will use the default `OutputFunctionsParser`.
2. **Default Behavior**: If the default `OutputFunctionsParser` is used, it extracts the function call from the response generation and applies `JSON.stringify` to it.
3. **argsOnly Parameter**: If the `argsOnly` parameter is set to true, the parser will only return the arguments of the function call, without applying `JSON.stringify` to the response (see the sketch after this list).
4. **Response Parsing**: The response from the output parser is then parsed again, and the result is returned.
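To make points 2 and 3 concrete, here is a minimal sketch; it assumes `OutputFunctionsParser` is exported from `langchain/output_parsers` alongside `JsonOutputFunctionsParser`:

```typescript
import { OutputFunctionsParser } from "langchain/output_parsers";

// Default behavior (point 2): extract the function call from the
// generation and apply JSON.stringify to it.
const defaultParser = new OutputFunctionsParser();

// argsOnly (point 3): return only the function call's arguments.
const argsOnlyParser = new OutputFunctionsParser({ argsOnly: true });
```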
Now let's look at a full example:
> **Tip:** See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

Install with your preferred package manager:

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { JsonOutputFunctionsParser } from "langchain/output_parsers";
import { HumanMessage } from "@langchain/core/messages";

// Instantiate the parser
const parser = new JsonOutputFunctionsParser();

// Define the function schema
const extractionFunctionSchema = {
  name: "extractor",
  description: "Extracts fields from the input.",
  parameters: {
    type: "object",
    properties: {
      tone: {
        type: "string",
        enum: ["positive", "negative"],
        description: "The overall tone of the input",
      },
      word_count: {
        type: "number",
        description: "The number of words in the input",
      },
      chat_response: {
        type: "string",
        description: "A response to the human's input",
      },
    },
    required: ["tone", "word_count", "chat_response"],
  },
};

// Instantiate the ChatOpenAI class
const model = new ChatOpenAI({ model: "gpt-4" });

// Create a new runnable, bind the function to the model,
// and pipe the output through the parser
const runnable = model
  .bind({
    functions: [extractionFunctionSchema],
    function_call: { name: "extractor" },
  })
  .pipe(parser);

// Invoke the runnable with an input
const result = await runnable.invoke([
  new HumanMessage("What a beautiful day!"),
]);

console.log({ result });

/**
{
  result: {
    tone: 'positive',
    word_count: 4,
    chat_response: "Indeed, it's a lovely day!"
  }
}
 */
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [JsonOutputFunctionsParser](https://api.js.langchain.com/classes/langchain_output_parsers.JsonOutputFunctionsParser.html) from `langchain/output_parsers`
* [HumanMessage](https://api.js.langchain.com/classes/langchain_core_messages.HumanMessage.html) from `@langchain/core/messages`
In this example, we first define a function schema and instantiate the `ChatOpenAI` class. We then create a runnable by binding the function to the model and piping the output through the `JsonOutputFunctionsParser`. When we invoke the runnable with an input, the response is already parsed thanks to the output parser.
The result will be a JSON object that contains the parsed response from the function call.
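Since the parsed result arrives as a plain object, you may want to narrow it to a type that mirrors your function schema. A hypothetical sketch (the `ExtractionResult` type is ours, not part of the library):

```typescript
// Hypothetical type mirroring the `extractionFunctionSchema` above.
type ExtractionResult = {
  tone: "positive" | "negative";
  word_count: number;
  chat_response: string;
};

const { tone, word_count, chat_response } = result as ExtractionResult;
console.log(tone); // "positive"
```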
Streaming
---------
This parser is also convenient for parsing function responses in a streaming fashion. It supports either the aggregated function response or a [JSON patch](https://jsonpatch.com/) diff:
```typescript
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { ChatOpenAI } from "@langchain/openai";
import { JsonOutputFunctionsParser } from "langchain/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const schema = z.object({
  setup: z.string().describe("The setup for the joke"),
  punchline: z.string().describe("The punchline to the joke"),
});

const modelParams = {
  functions: [
    {
      name: "joke",
      description: "A joke",
      parameters: zodToJsonSchema(schema),
    },
  ],
  function_call: { name: "joke" },
};

const prompt = ChatPromptTemplate.fromTemplate(
  `tell me a long joke about {foo}`
);

const model = new ChatOpenAI({
  temperature: 0,
}).bind(modelParams);

const chain = prompt
  .pipe(model)
  .pipe(new JsonOutputFunctionsParser({ diff: true }));

const stream = await chain.stream({
  foo: "bears",
});

// Stream a diff as JSON patch operations
for await (const chunk of stream) {
  console.log(chunk);
}

/*
  []
  [ { op: 'add', path: '/setup', value: '' } ]
  [ { op: 'replace', path: '/setup', value: 'Why' } ]
  [ { op: 'replace', path: '/setup', value: 'Why don' } ]
  [ { op: 'replace', path: '/setup', value: "Why don't" } ]
  [ { op: 'replace', path: '/setup', value: "Why don't bears" } ]
  [ { op: 'replace', path: '/setup', value: "Why don't bears wear" } ]
  [ { op: 'replace', path: '/setup', value: "Why don't bears wear shoes" } ]
  [
    { op: 'replace', path: '/setup', value: "Why don't bears wear shoes?" },
    { op: 'add', path: '/punchline', value: '' }
  ]
  [ { op: 'replace', path: '/punchline', value: 'Because' } ]
  [ { op: 'replace', path: '/punchline', value: 'Because they' } ]
  [ { op: 'replace', path: '/punchline', value: 'Because they have' } ]
  [ { op: 'replace', path: '/punchline', value: 'Because they have bear' } ]
  [ { op: 'replace', path: '/punchline', value: 'Because they have bear feet' } ]
  [ { op: 'replace', path: '/punchline', value: 'Because they have bear feet!' } ]
*/

const chain2 = prompt.pipe(model).pipe(new JsonOutputFunctionsParser());

const stream2 = await chain2.stream({
  foo: "beets",
});

// Stream the entire aggregated JSON object
for await (const chunk of stream2) {
  console.log(chunk);
}

/*
  {}
  { setup: '' }
  { setup: 'Why' }
  { setup: 'Why did' }
  { setup: 'Why did the' }
  { setup: 'Why did the beet' }
  { setup: 'Why did the beet go' }
  { setup: 'Why did the beet go to' }
  { setup: 'Why did the beet go to therapy' }
  { setup: 'Why did the beet go to therapy?', punchline: '' }
  { setup: 'Why did the beet go to therapy?', punchline: 'Because' }
  { setup: 'Why did the beet go to therapy?', punchline: 'Because it' }
  { setup: 'Why did the beet go to therapy?', punchline: 'Because it had' }
  { setup: 'Why did the beet go to therapy?', punchline: 'Because it had a' }
  { setup: 'Why did the beet go to therapy?', punchline: 'Because it had a lot' }
  { setup: 'Why did the beet go to therapy?', punchline: 'Because it had a lot of' }
  { setup: 'Why did the beet go to therapy?', punchline: 'Because it had a lot of unresolved' }
  { setup: 'Why did the beet go to therapy?', punchline: 'Because it had a lot of unresolved issues' }
  { setup: 'Why did the beet go to therapy?', punchline: 'Because it had a lot of unresolved issues!' }
*/
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [JsonOutputFunctionsParser](https://api.js.langchain.com/classes/langchain_output_parsers.JsonOutputFunctionsParser.html) from `langchain/output_parsers`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
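On the consuming side, each JSON patch chunk can be applied to a local object to rebuild the aggregated state incrementally. A sketch (not from this page) that handles only the top-level `add`/`replace` operations shown in the diff trace above, reusing the diff `chain`:

```typescript
// Hypothetical client-side accumulator for the JSON patch stream.
type PatchOp = { op: "add" | "replace"; path: string; value: unknown };

const state: Record<string, unknown> = {};
const diffStream = await chain.stream({ foo: "bears" });
for await (const ops of diffStream) {
  for (const { path, value } of ops as PatchOp[]) {
    state[path.slice(1)] = value; // "/setup" -> "setup"
  }
}
console.log(state); // { setup: "...", punchline: "..." }
```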
* * *
#### Help us out by providing feedback on this documentation page:
[
Previous
HTTP Response Output Parser
](/v0.1/docs/modules/model_io/output_parsers/types/http_response/)[
Next
Bytes output parser
](/v0.1/docs/modules/model_io/output_parsers/types/bytes/)
* [Streaming](#streaming)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright Β© 2024 LangChain, Inc. |
https://js.langchain.com/v0.1/docs/modules/model_io/output_parsers/types/bytes/ | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
[Skip to main content](#__docusaurus_skipToContent_fallback)
LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/).
[
![π¦οΈπ Langchain](/v0.1/img/brand/wordmark.png)![π¦οΈπ Langchain](/v0.1/img/brand/wordmark-dark.png)
](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com)
[More](#)
* [People](/v0.1/docs/people/)
* [Community](/v0.1/docs/community/)
* [Tutorials](/v0.1/docs/additional_resources/tutorials/)
* [Contributing](/v0.1/docs/contributing/)
[v0.1](#)
* [v0.2](https://js.langchain.com/v0.2/docs/introduction)
* [v0.1](/v0.1/docs/get_started/introduction/)
[π¦π](#)
* [LangSmith](https://smith.langchain.com)
* [LangSmith Docs](https://docs.smith.langchain.com)
* [LangChain Hub](https://smith.langchain.com/hub)
* [LangServe](https://github.com/langchain-ai/langserve)
* [Python Docs](https://python.langchain.com/)
[Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs)
Search
* [Get started](/v0.1/docs/get_started/)
* [Introduction](/v0.1/docs/get_started/introduction/)
* [Installation](/v0.1/docs/get_started/installation/)
* [Quickstart](/v0.1/docs/get_started/quickstart/)
* [LangChain Expression Language](/v0.1/docs/expression_language/)
* [Get started](/v0.1/docs/expression_language/get_started/)
* [Why use LCEL?](/v0.1/docs/expression_language/why/)
* [Interface](/v0.1/docs/expression_language/interface/)
* [Streaming](/v0.1/docs/expression_language/streaming/)
* [How to](/v0.1/docs/expression_language/how_to/routing/)
* [Cookbook](/v0.1/docs/expression_language/cookbook/)
* [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/)
* [Modules](/v0.1/docs/modules/)
* [Model I/O](/v0.1/docs/modules/model_io/)
* [Quickstart](/v0.1/docs/modules/model_io/quick_start/)
* [Concepts](/v0.1/docs/modules/model_io/concepts/)
* [Prompts](/v0.1/docs/modules/model_io/prompts/)
* [LLMs](/v0.1/docs/modules/model_io/llms/)
* [Chat Models](/v0.1/docs/modules/model_io/chat/)
* [Model I/O](/v0.1/docs/modules/model_io/)
* [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/)
* [Quick Start](/v0.1/docs/modules/model_io/output_parsers/quick_start/)
* [Custom output parsers](/v0.1/docs/modules/model_io/output_parsers/custom/)
* [Output Parser Types](/v0.1/docs/modules/model_io/output_parsers/types/)
* [String output parser](/v0.1/docs/modules/model_io/output_parsers/types/string/)
* [HTTP Response Output Parser](/v0.1/docs/modules/model_io/output_parsers/types/http_response/)
* [JSON Output Functions Parser](/v0.1/docs/modules/model_io/output_parsers/types/json_functions/)
* [Bytes output parser](/v0.1/docs/modules/model_io/output_parsers/types/bytes/)
* [Combining output parsers](/v0.1/docs/modules/model_io/output_parsers/types/combining_output_parser/)
* [List parser](/v0.1/docs/modules/model_io/output_parsers/types/csv/)
* [Custom list parser](/v0.1/docs/modules/model_io/output_parsers/types/custom_list_parser/)
* [Datetime parser](/v0.1/docs/modules/model_io/output_parsers/types/datetime/)
* [OpenAI Tools](/v0.1/docs/modules/model_io/output_parsers/types/openai_tools/)
* [Auto-fixing parser](/v0.1/docs/modules/model_io/output_parsers/types/output_fixing/)
* [Structured output parser](/v0.1/docs/modules/model_io/output_parsers/types/structured/)
* [XML output parser](/v0.1/docs/modules/model_io/output_parsers/types/xml/)
* [Retrieval](/v0.1/docs/modules/data_connection/)
* [Chains](/v0.1/docs/modules/chains/)
* [Agents](/v0.1/docs/modules/agents/)
* [More](/v0.1/docs/modules/memory/)
* [Security](/v0.1/docs/security/)
* [Guides](/v0.1/docs/guides/)
* [Ecosystem](/v0.1/docs/ecosystem/)
* [LangGraph](/v0.1/docs/langgraph/)
* * * *
* [](/v0.1/)
* [Modules](/v0.1/docs/modules/)
* [Model I/O](/v0.1/docs/modules/model_io/)
* [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/)
* [Output Parser Types](/v0.1/docs/modules/model_io/output_parsers/types/)
* Bytes output parser
On this page
Bytes output parser
===================
The `BytesOutputParser` takes language model output (either as an entire response or as a stream) and converts it into binary data. This is particularly useful for streaming output to the frontend from a server.
This output parser can act as a transform stream and work with streamed response chunks from a model.
Usage
-----
> **Tip:** See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

Install with your preferred package manager:

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { BytesOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";

const chain = RunnableSequence.from([
  new ChatOpenAI({ temperature: 0 }),
  new BytesOutputParser(),
]);

const stream = await chain.stream("Hello there!");

const decoder = new TextDecoder();

for await (const chunk of stream) {
  if (chunk) {
    console.log(decoder.decode(chunk));
  }
}
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [BytesOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.BytesOutputParser.html) from `@langchain/core/output_parsers`
* [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
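Because the parser emits `Uint8Array` chunks, the stream returned by `chain.stream()` can back an HTTP response body directly. A hypothetical sketch of a server route handler, assuming a runtime with the web-standard Fetch API (for example a Next.js route handler) and reusing `chain` from above:

```typescript
// Hypothetical route handler; `stream` is a ReadableStream of
// Uint8Array chunks, which Response accepts as a body.
export async function POST(req: Request) {
  const { input } = await req.json();
  const stream = await chain.stream(input);
  return new Response(stream, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```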
* * *
#### Help us out by providing feedback on this documentation page:
[
Previous
JSON Output Functions Parser
](/v0.1/docs/modules/model_io/output_parsers/types/json_functions/)[
Next
Combining output parsers
](/v0.1/docs/modules/model_io/output_parsers/types/combining_output_parser/)
* [Usage](#usage)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright Β© 2024 LangChain, Inc. |
https://js.langchain.com/v0.1/docs/modules/model_io/output_parsers/types/combining_output_parser/ | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
[Skip to main content](#__docusaurus_skipToContent_fallback)
LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/).
[
![π¦οΈπ Langchain](/v0.1/img/brand/wordmark.png)![π¦οΈπ Langchain](/v0.1/img/brand/wordmark-dark.png)
](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com)
[More](#)
* [People](/v0.1/docs/people/)
* [Community](/v0.1/docs/community/)
* [Tutorials](/v0.1/docs/additional_resources/tutorials/)
* [Contributing](/v0.1/docs/contributing/)
[v0.1](#)
* [v0.2](https://js.langchain.com/v0.2/docs/introduction)
* [v0.1](/v0.1/docs/get_started/introduction/)
[π¦π](#)
* [LangSmith](https://smith.langchain.com)
* [LangSmith Docs](https://docs.smith.langchain.com)
* [LangChain Hub](https://smith.langchain.com/hub)
* [LangServe](https://github.com/langchain-ai/langserve)
* [Python Docs](https://python.langchain.com/)
[Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs)
Search
* [Get started](/v0.1/docs/get_started/)
* [Introduction](/v0.1/docs/get_started/introduction/)
* [Installation](/v0.1/docs/get_started/installation/)
* [Quickstart](/v0.1/docs/get_started/quickstart/)
* [LangChain Expression Language](/v0.1/docs/expression_language/)
* [Get started](/v0.1/docs/expression_language/get_started/)
* [Why use LCEL?](/v0.1/docs/expression_language/why/)
* [Interface](/v0.1/docs/expression_language/interface/)
* [Streaming](/v0.1/docs/expression_language/streaming/)
* [How to](/v0.1/docs/expression_language/how_to/routing/)
* [Cookbook](/v0.1/docs/expression_language/cookbook/)
* [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/)
* [Modules](/v0.1/docs/modules/)
* [Model I/O](/v0.1/docs/modules/model_io/)
* [Quickstart](/v0.1/docs/modules/model_io/quick_start/)
* [Concepts](/v0.1/docs/modules/model_io/concepts/)
* [Prompts](/v0.1/docs/modules/model_io/prompts/)
* [LLMs](/v0.1/docs/modules/model_io/llms/)
* [Chat Models](/v0.1/docs/modules/model_io/chat/)
* [Model I/O](/v0.1/docs/modules/model_io/)
* [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/)
* [Quick Start](/v0.1/docs/modules/model_io/output_parsers/quick_start/)
* [Custom output parsers](/v0.1/docs/modules/model_io/output_parsers/custom/)
* [Output Parser Types](/v0.1/docs/modules/model_io/output_parsers/types/)
* [String output parser](/v0.1/docs/modules/model_io/output_parsers/types/string/)
* [HTTP Response Output Parser](/v0.1/docs/modules/model_io/output_parsers/types/http_response/)
* [JSON Output Functions Parser](/v0.1/docs/modules/model_io/output_parsers/types/json_functions/)
* [Bytes output parser](/v0.1/docs/modules/model_io/output_parsers/types/bytes/)
* [Combining output parsers](/v0.1/docs/modules/model_io/output_parsers/types/combining_output_parser/)
* [List parser](/v0.1/docs/modules/model_io/output_parsers/types/csv/)
* [Custom list parser](/v0.1/docs/modules/model_io/output_parsers/types/custom_list_parser/)
* [Datetime parser](/v0.1/docs/modules/model_io/output_parsers/types/datetime/)
* [OpenAI Tools](/v0.1/docs/modules/model_io/output_parsers/types/openai_tools/)
* [Auto-fixing parser](/v0.1/docs/modules/model_io/output_parsers/types/output_fixing/)
* [Structured output parser](/v0.1/docs/modules/model_io/output_parsers/types/structured/)
* [XML output parser](/v0.1/docs/modules/model_io/output_parsers/types/xml/)
* [Retrieval](/v0.1/docs/modules/data_connection/)
* [Chains](/v0.1/docs/modules/chains/)
* [Agents](/v0.1/docs/modules/agents/)
* [More](/v0.1/docs/modules/memory/)
* [Security](/v0.1/docs/security/)
* [Guides](/v0.1/docs/guides/)
* [Ecosystem](/v0.1/docs/ecosystem/)
* [LangGraph](/v0.1/docs/langgraph/)
* * * *
* [](/v0.1/)
* [Modules](/v0.1/docs/modules/)
* [Model I/O](/v0.1/docs/modules/model_io/)
* [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/)
* [Output Parser Types](/v0.1/docs/modules/model_io/output_parsers/types/)
* Combining output parsers
Combining output parsers
========================
Output parsers can be combined using `CombiningOutputParser`. This output parser takes in a list of output parsers, and will ask for (and parse) a combined output that contains all the fields of all the parsers.
> **Tip:** See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

Install with your preferred package manager:

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
````typescript
import { OpenAI } from "@langchain/openai";
import {
  StructuredOutputParser,
  RegexParser,
  CombiningOutputParser,
} from "langchain/output_parsers";
import { PromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";

const answerParser = StructuredOutputParser.fromNamesAndDescriptions({
  answer: "answer to the user's question",
  source: "source used to answer the user's question, should be a website.",
});

const confidenceParser = new RegexParser(
  /Confidence: (A|B|C), Explanation: (.*)/,
  ["confidence", "explanation"],
  "noConfidence"
);

const parser = new CombiningOutputParser(answerParser, confidenceParser);

const chain = RunnableSequence.from([
  PromptTemplate.fromTemplate(
    "Answer the users question as best as possible.\n{format_instructions}\n{question}"
  ),
  new OpenAI({ temperature: 0 }),
  parser,
]);

/*
Answer the users question as best as possible.
Return the following outputs, each formatted as described below:

Output 1:
The output should be formatted as a JSON instance that conforms to the JSON schema below.
As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
Here is the output schema:
```
{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"source":{"type":"string","description":"source used to answer the user's question, should be a website."}},"required":["answer","source"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}
```

Output 2:
Your response should match the following regex: /Confidence: (A|B|C), Explanation: (.*)/

What is the capital of France?
*/

const response = await chain.invoke({
  question: "What is the capital of France?",
  format_instructions: parser.getFormatInstructions(),
});

console.log(response);

/*
{
  answer: 'Paris',
  source: 'https://www.worldatlas.com/articles/what-is-the-capital-of-france.html',
  confidence: 'A',
  explanation: 'The capital of France is Paris.'
}
*/
````
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [StructuredOutputParser](https://api.js.langchain.com/classes/langchain_output_parsers.StructuredOutputParser.html) from `langchain/output_parsers`
* [RegexParser](https://api.js.langchain.com/classes/langchain_output_parsers.RegexParser.html) from `langchain/output_parsers`
* [CombiningOutputParser](https://api.js.langchain.com/classes/langchain_output_parsers.CombiningOutputParser.html) from `langchain/output_parsers`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
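As the inline comment in the example shows, the combined prompt simply concatenates each sub-parser's format instructions under numbered `Output N:` headers. You can inspect this directly (reusing `parser` from above):

```typescript
// Prints the concatenated instructions injected via {format_instructions}.
console.log(parser.getFormatInstructions());
```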
* * *
#### Help us out by providing feedback on this documentation page:
[
Previous
Bytes output parser
](/v0.1/docs/modules/model_io/output_parsers/types/bytes/)[
Next
List parser
](/v0.1/docs/modules/model_io/output_parsers/types/csv/)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright Β© 2024 LangChain, Inc. |
https://js.langchain.com/v0.1/docs/modules/model_io/output_parsers/types/custom_list_parser/ | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
[Skip to main content](#__docusaurus_skipToContent_fallback)
LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/).
[
![π¦οΈπ Langchain](/v0.1/img/brand/wordmark.png)![π¦οΈπ Langchain](/v0.1/img/brand/wordmark-dark.png)
](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com)
[More](#)
* [People](/v0.1/docs/people/)
* [Community](/v0.1/docs/community/)
* [Tutorials](/v0.1/docs/additional_resources/tutorials/)
* [Contributing](/v0.1/docs/contributing/)
[v0.1](#)
* [v0.2](https://js.langchain.com/v0.2/docs/introduction)
* [v0.1](/v0.1/docs/get_started/introduction/)
[π¦π](#)
* [LangSmith](https://smith.langchain.com)
* [LangSmith Docs](https://docs.smith.langchain.com)
* [LangChain Hub](https://smith.langchain.com/hub)
* [LangServe](https://github.com/langchain-ai/langserve)
* [Python Docs](https://python.langchain.com/)
[Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs)
Search
* [Get started](/v0.1/docs/get_started/)
* [Introduction](/v0.1/docs/get_started/introduction/)
* [Installation](/v0.1/docs/get_started/installation/)
* [Quickstart](/v0.1/docs/get_started/quickstart/)
* [LangChain Expression Language](/v0.1/docs/expression_language/)
* [Get started](/v0.1/docs/expression_language/get_started/)
* [Why use LCEL?](/v0.1/docs/expression_language/why/)
* [Interface](/v0.1/docs/expression_language/interface/)
* [Streaming](/v0.1/docs/expression_language/streaming/)
* [How to](/v0.1/docs/expression_language/how_to/routing/)
* [Cookbook](/v0.1/docs/expression_language/cookbook/)
* [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/)
* [Modules](/v0.1/docs/modules/)
* [Model I/O](/v0.1/docs/modules/model_io/)
* [Quickstart](/v0.1/docs/modules/model_io/quick_start/)
* [Concepts](/v0.1/docs/modules/model_io/concepts/)
* [Prompts](/v0.1/docs/modules/model_io/prompts/)
* [LLMs](/v0.1/docs/modules/model_io/llms/)
* [Chat Models](/v0.1/docs/modules/model_io/chat/)
* [Model I/O](/v0.1/docs/modules/model_io/)
* [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/)
* [Quick Start](/v0.1/docs/modules/model_io/output_parsers/quick_start/)
* [Custom output parsers](/v0.1/docs/modules/model_io/output_parsers/custom/)
* [Output Parser Types](/v0.1/docs/modules/model_io/output_parsers/types/)
* [String output parser](/v0.1/docs/modules/model_io/output_parsers/types/string/)
* [HTTP Response Output Parser](/v0.1/docs/modules/model_io/output_parsers/types/http_response/)
* [JSON Output Functions Parser](/v0.1/docs/modules/model_io/output_parsers/types/json_functions/)
* [Bytes output parser](/v0.1/docs/modules/model_io/output_parsers/types/bytes/)
* [Combining output parsers](/v0.1/docs/modules/model_io/output_parsers/types/combining_output_parser/)
* [List parser](/v0.1/docs/modules/model_io/output_parsers/types/csv/)
* [Custom list parser](/v0.1/docs/modules/model_io/output_parsers/types/custom_list_parser/)
* [Datetime parser](/v0.1/docs/modules/model_io/output_parsers/types/datetime/)
* [OpenAI Tools](/v0.1/docs/modules/model_io/output_parsers/types/openai_tools/)
* [Auto-fixing parser](/v0.1/docs/modules/model_io/output_parsers/types/output_fixing/)
* [Structured output parser](/v0.1/docs/modules/model_io/output_parsers/types/structured/)
* [XML output parser](/v0.1/docs/modules/model_io/output_parsers/types/xml/)
* [Retrieval](/v0.1/docs/modules/data_connection/)
* [Chains](/v0.1/docs/modules/chains/)
* [Agents](/v0.1/docs/modules/agents/)
* [More](/v0.1/docs/modules/memory/)
* [Security](/v0.1/docs/security/)
* [Guides](/v0.1/docs/guides/)
* [Ecosystem](/v0.1/docs/ecosystem/)
* [LangGraph](/v0.1/docs/langgraph/)
* * * *
* [](/v0.1/)
* [Modules](/v0.1/docs/modules/)
* [Model I/O](/v0.1/docs/modules/model_io/)
* [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/)
* [Output Parser Types](/v0.1/docs/modules/model_io/output_parsers/types/)
* Custom list parser
Custom list parser
==================
This output parser can be used when you want to return a list of items with a specific length and separator.
> **Tip:** See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

Install with your preferred package manager:

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { OpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { CustomListOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";

// With a `CustomListOutputParser`, we can parse a list with a specific
// length and separator.
const parser = new CustomListOutputParser({ length: 3, separator: "\n" });

const chain = RunnableSequence.from([
  PromptTemplate.fromTemplate(
    "Provide a list of {subject}.\n{format_instructions}"
  ),
  new OpenAI({ temperature: 0 }),
  parser,
]);

/*
Provide a list of great fiction books (book, author).

Your response should be a list of 3 items separated by "\n" (eg: `foo\n bar\n baz`)
*/

const response = await chain.invoke({
  subject: "great fiction books (book, author)",
  format_instructions: parser.getFormatInstructions(),
});

console.log(response);

/*
[
  'The Catcher in the Rye, J.D. Salinger',
  'To Kill a Mockingbird, Harper Lee',
  'The Great Gatsby, F. Scott Fitzgerald'
]
*/
```
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [CustomListOutputParser](https://api.js.langchain.com/classes/langchain_core_output_parsers.CustomListOutputParser.html) from `@langchain/core/output_parsers`
* [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
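The parser can also be invoked on its own, outside a chain. A small sketch reusing `parser` from above, with hypothetical input text in the configured three-item, newline-separated format:

```typescript
const items = await parser.parse(
  "The Hobbit, J.R.R. Tolkien\nDune, Frank Herbert\n1984, George Orwell"
);
console.log(items);
// [ 'The Hobbit, J.R.R. Tolkien', 'Dune, Frank Herbert', '1984, George Orwell' ]
```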
* * *
#### Help us out by providing feedback on this documentation page:
[
Previous
List parser
](/v0.1/docs/modules/model_io/output_parsers/types/csv/)[
Next
Datetime parser
](/v0.1/docs/modules/model_io/output_parsers/types/datetime/)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright Β© 2024 LangChain, Inc. |
https://js.langchain.com/v0.1/docs/modules/model_io/output_parsers/types/datetime/ | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
[Skip to main content](#__docusaurus_skipToContent_fallback)
LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/).
[
![π¦οΈπ Langchain](/v0.1/img/brand/wordmark.png)![π¦οΈπ Langchain](/v0.1/img/brand/wordmark-dark.png)
](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com)
[More](#)
* [People](/v0.1/docs/people/)
* [Community](/v0.1/docs/community/)
* [Tutorials](/v0.1/docs/additional_resources/tutorials/)
* [Contributing](/v0.1/docs/contributing/)
[v0.1](#)
* [v0.2](https://js.langchain.com/v0.2/docs/introduction)
* [v0.1](/v0.1/docs/get_started/introduction/)
[π¦π](#)
* [LangSmith](https://smith.langchain.com)
* [LangSmith Docs](https://docs.smith.langchain.com)
* [LangChain Hub](https://smith.langchain.com/hub)
* [LangServe](https://github.com/langchain-ai/langserve)
* [Python Docs](https://python.langchain.com/)
[Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs)
Search
* [Get started](/v0.1/docs/get_started/)
* [Introduction](/v0.1/docs/get_started/introduction/)
* [Installation](/v0.1/docs/get_started/installation/)
* [Quickstart](/v0.1/docs/get_started/quickstart/)
* [LangChain Expression Language](/v0.1/docs/expression_language/)
* [Get started](/v0.1/docs/expression_language/get_started/)
* [Why use LCEL?](/v0.1/docs/expression_language/why/)
* [Interface](/v0.1/docs/expression_language/interface/)
* [Streaming](/v0.1/docs/expression_language/streaming/)
* [How to](/v0.1/docs/expression_language/how_to/routing/)
* [Cookbook](/v0.1/docs/expression_language/cookbook/)
* [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/)
* [Modules](/v0.1/docs/modules/)
* [Model I/O](/v0.1/docs/modules/model_io/)
* [Quickstart](/v0.1/docs/modules/model_io/quick_start/)
* [Concepts](/v0.1/docs/modules/model_io/concepts/)
* [Prompts](/v0.1/docs/modules/model_io/prompts/)
* [LLMs](/v0.1/docs/modules/model_io/llms/)
* [Chat Models](/v0.1/docs/modules/model_io/chat/)
* [Model I/O](/v0.1/docs/modules/model_io/)
* [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/)
* [Quick Start](/v0.1/docs/modules/model_io/output_parsers/quick_start/)
* [Custom output parsers](/v0.1/docs/modules/model_io/output_parsers/custom/)
* [Output Parser Types](/v0.1/docs/modules/model_io/output_parsers/types/)
* [String output parser](/v0.1/docs/modules/model_io/output_parsers/types/string/)
* [HTTP Response Output Parser](/v0.1/docs/modules/model_io/output_parsers/types/http_response/)
* [JSON Output Functions Parser](/v0.1/docs/modules/model_io/output_parsers/types/json_functions/)
* [Bytes output parser](/v0.1/docs/modules/model_io/output_parsers/types/bytes/)
* [Combining output parsers](/v0.1/docs/modules/model_io/output_parsers/types/combining_output_parser/)
* [List parser](/v0.1/docs/modules/model_io/output_parsers/types/csv/)
* [Custom list parser](/v0.1/docs/modules/model_io/output_parsers/types/custom_list_parser/)
* [Datetime parser](/v0.1/docs/modules/model_io/output_parsers/types/datetime/)
* [OpenAI Tools](/v0.1/docs/modules/model_io/output_parsers/types/openai_tools/)
* [Auto-fixing parser](/v0.1/docs/modules/model_io/output_parsers/types/output_fixing/)
* [Structured output parser](/v0.1/docs/modules/model_io/output_parsers/types/structured/)
* [XML output parser](/v0.1/docs/modules/model_io/output_parsers/types/xml/)
* [Retrieval](/v0.1/docs/modules/data_connection/)
* [Chains](/v0.1/docs/modules/chains/)
* [Agents](/v0.1/docs/modules/agents/)
* [More](/v0.1/docs/modules/memory/)
* [Security](/v0.1/docs/security/)
* [Guides](/v0.1/docs/guides/)
* [Ecosystem](/v0.1/docs/ecosystem/)
* [LangGraph](/v0.1/docs/langgraph/)
* * * *
* [](/v0.1/)
* [Modules](/v0.1/docs/modules/)
* [Model I/O](/v0.1/docs/modules/model_io/)
* [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/)
* [Output Parser Types](/v0.1/docs/modules/model_io/output_parsers/types/)
* Datetime parser
Datetime parser
===============
This output parser can be used to parse LLM output into a JavaScript `Date` object.
> **Tip:** See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

Install with your preferred package manager:

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { DatetimeOutputParser } from "langchain/output_parsers";

const parser = new DatetimeOutputParser();

const prompt = ChatPromptTemplate.fromTemplate(`Answer the users question:

{question}

{format_instructions}`);

const promptWithInstructions = await prompt.partial({
  format_instructions: parser.getFormatInstructions(),
});

const model = new ChatOpenAI({ temperature: 0 });

const chain = promptWithInstructions.pipe(model).pipe(parser);

const response = await chain.invoke({
  question: "When was Chicago incorporated?",
});

console.log(response, response instanceof Date);

/*
  1837-03-04T00:00:00.000Z, true
*/
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [DatetimeOutputParser](https://api.js.langchain.com/classes/langchain_output_parsers.DatetimeOutputParser.html) from `langchain/output_parsers`
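You can also call the parser directly on a datetime string. A minimal sketch reusing `parser` from above, assuming the input follows the ISO 8601 format that `getFormatInstructions()` asks the model for:

```typescript
const date = await parser.parse("1837-03-04T00:00:00.000Z");
console.log(date instanceof Date, date.getUTCFullYear()); // true 1837
```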
* * *
#### Help us out by providing feedback on this documentation page:
[
Previous
Custom list parser
](/v0.1/docs/modules/model_io/output_parsers/types/custom_list_parser/)[
Next
OpenAI Tools
](/v0.1/docs/modules/model_io/output_parsers/types/openai_tools/)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright Β© 2024 LangChain, Inc. |
https://js.langchain.com/v0.1/docs/modules/model_io/output_parsers/types/openai_tools/ | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
[Skip to main content](#__docusaurus_skipToContent_fallback)
LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/).
[
![π¦οΈπ Langchain](/v0.1/img/brand/wordmark.png)![π¦οΈπ Langchain](/v0.1/img/brand/wordmark-dark.png)
](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com)
[More](#)
* [People](/v0.1/docs/people/)
* [Community](/v0.1/docs/community/)
* [Tutorials](/v0.1/docs/additional_resources/tutorials/)
* [Contributing](/v0.1/docs/contributing/)
[v0.1](#)
* [v0.2](https://js.langchain.com/v0.2/docs/introduction)
* [v0.1](/v0.1/docs/get_started/introduction/)
[π¦π](#)
* [LangSmith](https://smith.langchain.com)
* [LangSmith Docs](https://docs.smith.langchain.com)
* [LangChain Hub](https://smith.langchain.com/hub)
* [LangServe](https://github.com/langchain-ai/langserve)
* [Python Docs](https://python.langchain.com/)
[Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs)
Search
* [Get started](/v0.1/docs/get_started/)
* [Introduction](/v0.1/docs/get_started/introduction/)
* [Installation](/v0.1/docs/get_started/installation/)
* [Quickstart](/v0.1/docs/get_started/quickstart/)
* [LangChain Expression Language](/v0.1/docs/expression_language/)
* [Get started](/v0.1/docs/expression_language/get_started/)
* [Why use LCEL?](/v0.1/docs/expression_language/why/)
* [Interface](/v0.1/docs/expression_language/interface/)
* [Streaming](/v0.1/docs/expression_language/streaming/)
* [How to](/v0.1/docs/expression_language/how_to/routing/)
* [Cookbook](/v0.1/docs/expression_language/cookbook/)
* [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/)
* [Modules](/v0.1/docs/modules/)
* [Model I/O](/v0.1/docs/modules/model_io/)
* [Quickstart](/v0.1/docs/modules/model_io/quick_start/)
* [Concepts](/v0.1/docs/modules/model_io/concepts/)
* [Prompts](/v0.1/docs/modules/model_io/prompts/)
* [LLMs](/v0.1/docs/modules/model_io/llms/)
* [Chat Models](/v0.1/docs/modules/model_io/chat/)
* [Model I/O](/v0.1/docs/modules/model_io/)
* [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/)
* [Quick Start](/v0.1/docs/modules/model_io/output_parsers/quick_start/)
* [Custom output parsers](/v0.1/docs/modules/model_io/output_parsers/custom/)
* [Output Parser Types](/v0.1/docs/modules/model_io/output_parsers/types/)
* [String output parser](/v0.1/docs/modules/model_io/output_parsers/types/string/)
* [HTTP Response Output Parser](/v0.1/docs/modules/model_io/output_parsers/types/http_response/)
* [JSON Output Functions Parser](/v0.1/docs/modules/model_io/output_parsers/types/json_functions/)
* [Bytes output parser](/v0.1/docs/modules/model_io/output_parsers/types/bytes/)
* [Combining output parsers](/v0.1/docs/modules/model_io/output_parsers/types/combining_output_parser/)
* [List parser](/v0.1/docs/modules/model_io/output_parsers/types/csv/)
* [Custom list parser](/v0.1/docs/modules/model_io/output_parsers/types/custom_list_parser/)
* [Datetime parser](/v0.1/docs/modules/model_io/output_parsers/types/datetime/)
* [OpenAI Tools](/v0.1/docs/modules/model_io/output_parsers/types/openai_tools/)
* [Auto-fixing parser](/v0.1/docs/modules/model_io/output_parsers/types/output_fixing/)
* [Structured output parser](/v0.1/docs/modules/model_io/output_parsers/types/structured/)
* [XML output parser](/v0.1/docs/modules/model_io/output_parsers/types/xml/)
* [Retrieval](/v0.1/docs/modules/data_connection/)
* [Chains](/v0.1/docs/modules/chains/)
* [Agents](/v0.1/docs/modules/agents/)
* [More](/v0.1/docs/modules/memory/)
* [Security](/v0.1/docs/security/)
* [Guides](/v0.1/docs/guides/)
* [Ecosystem](/v0.1/docs/ecosystem/)
* [LangGraph](/v0.1/docs/langgraph/)
* * * *
* [](/v0.1/)
* [Modules](/v0.1/docs/modules/)
* [Model I/O](/v0.1/docs/modules/model_io/)
* [Output Parsers](/v0.1/docs/modules/model_io/output_parsers/)
* [Output Parser Types](/v0.1/docs/modules/model_io/output_parsers/types/)
* OpenAI Tools
On this page
OpenAI Tools
============
These output parsers extract tool calls from OpenAIβs function calling API responses. This means they are only usable with models that support function calling, and specifically the latest `tools` and `tool_choice` parameters. We recommend familiarizing yourself with [function calling](/v0.1/docs/modules/model_io/chat/function_calling/) before reading this guide.
There are a few different variants of output parsers:
* [`JsonOutputToolsParser`](https://api.js.langchain.com/classes/langchain_output_parsers.JsonOutputToolsParser.html): Returns the arguments of the function call as JSON
* [`JsonOutputKeyToolsParser`](https://api.js.langchain.com/classes/langchain_output_parsers.JsonOutputKeyToolsParser.html): Returns the value of a specific key in the function call as JSON
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const properties = {
  setup: {
    type: "string",
    description: "The setup for the joke",
  },
  punchline: {
    type: "string",
    description: "The joke's punchline",
  },
};

const tool = {
  type: "function" as const,
  function: {
    name: "joke",
    description: "Joke to tell user.",
    parameters: {
      $schema: "http://json-schema.org/draft-07/schema#",
      title: "Joke",
      type: "object",
      properties,
      required: ["setup", "punchline"],
    },
  },
};

const llm = new ChatOpenAI();

// Use `.bind` to attach the tool to the model
const llmWithTools = llm.bind({
  tools: [tool],
  // Optionally, we can pass the tool to the `tool_choice` parameter to
  // force the model to call the tool.
  tool_choice: tool,
});

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are the funniest comedian, tell the user a joke about their topic.",
  ],
  ["human", "Topic: {topic}"],
]);
```
Now we can use LCEL to pipe our prompt and LLM together.
```typescript
const chain = prompt.pipe(llmWithTools);

const result = await chain.invoke({ topic: "Large Language Models" });

result.additional_kwargs;
```

```
{
  function_call: undefined,
  tool_calls: [
    {
      id: "call_vo9oYcHXKWzS6bJ4bK7Eghmz",
      type: "function",
      function: {
        name: "joke",
        arguments: "{\n" +
          '  "setup": "Why did the large language model go on a diet?",\n' +
          '  "punchline": "It wanted to reduce i'... 17 more characters
      }
    }
  ]
}
```
> #### Inspect the [LangSmith trace](https://smith.langchain.com/public/f2f34c8d-8193-40cb-b3ef-f186fb4de73e/r) from the call above
`JsonOutputToolsParser`
-----------------------
```typescript
import { JsonOutputToolsParser } from "langchain/output_parsers";

const outputParser = new JsonOutputToolsParser();

const chain = prompt.pipe(llmWithTools).pipe(outputParser);

await chain.invoke({ topic: "Large Language Models" });
```

```
[
  {
    type: "joke",
    args: {
      setup: "Why did the large language model go to therapy?",
      punchline: "It had too many layers!"
    }
  }
]
```
> #### Inspect the [LangSmith trace](https://smith.langchain.com/public/61ce7b9f-d462-499e-be65-8a165d2b47a7/r) with the `JsonOutputToolsParser`
`JsonOutputKeyToolsParser`
--------------------------
This merely extracts a single key from the returned response. This is useful when you are passing in a single tool and just want its arguments.
```typescript
import { JsonOutputKeyToolsParser } from "langchain/output_parsers";

const outputParser = new JsonOutputKeyToolsParser({ keyName: "joke" });

const chain = prompt.pipe(llmWithTools).pipe(outputParser);

await chain.invoke({ topic: "Large Language Models" });
```

```
[
  {
    setup: "Why did the large language model go to therapy?",
    punchline: "It had too many layers!"
  }
]
```
> #### Inspect the [LangSmith trace](https://smith.langchain.com/public/2c9c93d2-d789-4e45-9f9f-e942eace8aed/r) with the `JsonOutputKeyToolsParser`
Some LLMs have support for calling multiple tools in a single response. Because of this, the result of invoking `JsonOutputKeyToolsParser` is always an array. If you would only like a single result to be returned, you can specify `returnSingle` in the constructor.
```typescript
const outputParserSingle = new JsonOutputKeyToolsParser({
  keyName: "joke",
  returnSingle: true,
});

const chain = prompt.pipe(llmWithTools);

const response = await chain.invoke({ topic: "Large Language Models" });

await outputParserSingle.invoke(response);
```

```
{
  setup: "Why did the large language model go on a diet?",
  punchline: "It wanted to shed some excess bytes!"
}
```
> #### See the [LangSmith trace](https://smith.langchain.com/public/c05e0409-8085-487d-aee2-2d42b64b9f6d/r) from this output parser.
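Like the other variants, the single-result parser can also be piped into a chain rather than invoked on a raw model response. A brief sketch reusing the objects defined above:

```typescript
const chainSingle = prompt.pipe(llmWithTools).pipe(outputParserSingle);
const joke = await chainSingle.invoke({ topic: "Large Language Models" });
console.log(joke); // { setup: "...", punchline: "..." }
```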
* * *
#### Help us out by providing feedback on this documentation page:
[
Previous
Datetime parser
](/v0.1/docs/modules/model_io/output_parsers/types/datetime/)[
Next
Auto-fixing parser
](/v0.1/docs/modules/model_io/output_parsers/types/output_fixing/)
* [`JsonOutputToolsParser`](#jsonoutputtoolsparser)
* [`JsonOutputKeyToolsParser`](#jsonoutputkeytoolsparser)
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright Β© 2024 LangChain, Inc. |
https://js.langchain.com/v0.1/docs/guides/deployment/sveltekit/ | !function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
[Skip to main content](#__docusaurus_skipToContent_fallback)
LangChain v0.2 is coming soon! Preview the new docs [here](/v0.2/docs/introduction/).
[
![π¦οΈπ Langchain](/v0.1/img/brand/wordmark.png)![π¦οΈπ Langchain](/v0.1/img/brand/wordmark-dark.png)
](/v0.1/)[Docs](/v0.1/docs/get_started/introduction/)[Use cases](/v0.1/docs/use_cases/)[Integrations](/v0.1/docs/integrations/platforms/)[API Reference](https://api.js.langchain.com)
[More](#)
* [People](/v0.1/docs/people/)
* [Community](/v0.1/docs/community/)
* [Tutorials](/v0.1/docs/additional_resources/tutorials/)
* [Contributing](/v0.1/docs/contributing/)
[v0.1](#)
* [v0.2](https://js.langchain.com/v0.2/docs/introduction)
* [v0.1](/v0.1/docs/get_started/introduction/)
[π¦π](#)
* [LangSmith](https://smith.langchain.com)
* [LangSmith Docs](https://docs.smith.langchain.com)
* [LangChain Hub](https://smith.langchain.com/hub)
* [LangServe](https://github.com/langchain-ai/langserve)
* [Python Docs](https://python.langchain.com/)
[Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs)
Search
* [Get started](/v0.1/docs/get_started/)
* [Introduction](/v0.1/docs/get_started/introduction/)
* [Installation](/v0.1/docs/get_started/installation/)
* [Quickstart](/v0.1/docs/get_started/quickstart/)
* [LangChain Expression Language](/v0.1/docs/expression_language/)
* [Get started](/v0.1/docs/expression_language/get_started/)
* [Why use LCEL?](/v0.1/docs/expression_language/why/)
* [Interface](/v0.1/docs/expression_language/interface/)
* [Streaming](/v0.1/docs/expression_language/streaming/)
* [How to](/v0.1/docs/expression_language/how_to/routing/)
* [Cookbook](/v0.1/docs/expression_language/cookbook/)
* [LangChain Expression Language (LCEL)](/v0.1/docs/expression_language/)
* [Modules](/v0.1/docs/modules/)
* [Model I/O](/v0.1/docs/modules/model_io/)
* [Retrieval](/v0.1/docs/modules/data_connection/)
* [Chains](/v0.1/docs/modules/chains/)
* [Agents](/v0.1/docs/modules/agents/)
* [More](/v0.1/docs/modules/memory/)
* [Security](/v0.1/docs/security/)
* [Guides](/v0.1/docs/guides/)
* [Debugging](/v0.1/docs/guides/debugging/)
* [Deployment](/v0.1/docs/guides/deployment/)
* [Next.js](/v0.1/docs/guides/deployment/nextjs/)
* [SvelteKit](/v0.1/docs/guides/deployment/sveltekit/)
* [Evaluation](/v0.1/docs/guides/evaluation/)
* [Extending LangChain.js](/v0.1/docs/guides/extending_langchain/)
* [Fallbacks](/v0.1/docs/guides/fallbacks/)
* [LangSmith Walkthrough](/v0.1/docs/guides/langsmith_evaluation/)
* [Migrating to 0.1](/v0.1/docs/guides/migrating/)
* [Ecosystem](/v0.1/docs/ecosystem/)
* [LangGraph](/v0.1/docs/langgraph/)
* * * *
* [](/v0.1/)
* [Guides](/v0.1/docs/guides/)
* [Deployment](/v0.1/docs/guides/deployment/)
* SvelteKit
On this page
SvelteKit
=========
If you're looking to use LangChain in a [SvelteKit](https://kit.svelte.dev/) project, you can check out [svelte-chat-langchain](https://github.com/SimonPrammer/svelte-chat-langchain).
The app is a SvelteKit implementation of the QA Chatbot [Chat Langchain](https://github.com/langchain-ai/chat-langchain) and is best used as a reference to learn the basics of a QA chatbot over documents or as a starting point for your own custom implementation.
The example shows one possible way to implement ingestion (document loading, splitting, and embedding) as well as RAG (Retrieval Augmented Generation), LCEL, conditional chaining, and streaming.
![SvelteKit Chat Langchain screenshot](/v0.1/assets/images/sveltekit-chat-langchain-6f8b0015dfbb3b2df50669346a14dccb.png)
Links
-----
* Repository: [https://github.com/SimonPrammer/svelte-chat-langchain](https://github.com/SimonPrammer/svelte-chat-langchain)
* Blog post: [QA Chatbot: Chatting with your Documents made simple](https://simon-prammer.vercel.app/blog/post/easiest-qa-chatbot), a step-by-step guide
Next.js
=======
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain-nextjs-template)
If you're looking to use LangChain in a [Next.js](https://nextjs.org) project, you can check out the [official Next.js starter template](https://github.com/langchain-ai/langchain-nextjs-template).
It shows off streaming and customization, and contains several use cases around chat, structured output, agents, and retrieval that demonstrate how to use different modules in LangChain together.
![Next.js template demo screenshot](/v0.1/assets/images/nextjs-agent-conversation-c7676652dd4bc13afe280db86b66d301.png)
You can check it out here:
* [https://github.com/langchain-ai/langchain-nextjs-template](https://github.com/langchain-ai/langchain-nextjs-template)
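To give a flavor of what the template's chat endpoint does, here is a minimal sketch of a streaming route handler. It is loosely based on the template rather than a copy of it: the route path and prompt are illustrative, and `HttpResponseOutputParser` encodes the streamed chunks as bytes so they can be returned directly as a `Response` body.

```typescript
// app/api/chat/route.ts (illustrative path)
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { HttpResponseOutputParser } from "langchain/output_parsers";

export async function POST(req: Request) {
  const { input } = await req.json();

  const prompt = PromptTemplate.fromTemplate(
    "You are a helpful assistant.\n\nUser: {input}"
  );
  const model = new ChatOpenAI({ temperature: 0.8 });

  // The parser turns the model's token stream into a byte stream.
  const chain = prompt.pipe(model).pipe(new HttpResponseOutputParser());
  const stream = await chain.stream({ input });

  return new Response(stream, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```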
Auto-fixing parser
==================
This output parser wraps another output parser and, in the event that the first one fails, calls out to another LLM to fix any errors.
Rather than simply throwing an error when the wrapped parser fails, we can pass the misformatted output, along with the format instructions, to the model and ask it to fix it.
For this example, we'll use the structured output parser. Here's what happens if we pass it a result that does not comply with the schema:
Tip: See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
````typescript
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";
import {
  StructuredOutputParser,
  OutputFixingParser,
} from "langchain/output_parsers";

export const run = async () => {
  const parser = StructuredOutputParser.fromZodSchema(
    z.object({
      answer: z.string().describe("answer to the user's question"),
      sources: z
        .array(z.string())
        .describe("sources used to answer the question, should be websites."),
    })
  );

  /** This is a bad output because sources is a string, not a list */
  const badOutput = `\`\`\`json
  {
    "answer": "foo",
    "sources": "foo.com"
  }
  \`\`\``;

  try {
    await parser.parse(badOutput);
  } catch (e) {
    console.log("Failed to parse bad output: ", e);
    /*
    Failed to parse bad output:  OutputParserException [Error]: Failed to parse. Text: ```json
    {
      "answer": "foo",
      "sources": "foo.com"
    }
    ```. Error: [
      {
        "code": "invalid_type",
        "expected": "array",
        "received": "string",
        "path": [
          "sources"
        ],
        "message": "Expected array, received string"
      }
    ]
      at StructuredOutputParser.parse (/Users/ankushgola/Code/langchainjs/langchain/src/output_parsers/structured.ts:71:13)
      at run (/Users/ankushgola/Code/langchainjs/examples/src/prompts/fix_parser.ts:25:18)
      at <anonymous> (/Users/ankushgola/Code/langchainjs/examples/src/index.ts:33:22)
    */
  }

  const fixParser = OutputFixingParser.fromLLM(
    new ChatOpenAI({ temperature: 0 }),
    parser
  );
  const output = await fixParser.parse(badOutput);
  console.log("Fixed output: ", output);
  // Fixed output:  { answer: 'foo', sources: [ 'foo.com' ] }
};
````
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [StructuredOutputParser](https://api.js.langchain.com/classes/langchain_output_parsers.StructuredOutputParser.html) from `langchain/output_parsers`
* [OutputFixingParser](https://api.js.langchain.com/classes/langchain_output_parsers.OutputFixingParser.html) from `langchain/output_parsers`
Structured output parser
========================
This output parser can be used when you want to return multiple fields. If you want a complex schema returned (e.g. a JSON object with arrays of strings), use the Zod schema detailed below.
Tip: See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
````typescript
import { OpenAI } from "@langchain/openai";
import { RunnableSequence } from "@langchain/core/runnables";
import { StructuredOutputParser } from "langchain/output_parsers";
import { PromptTemplate } from "@langchain/core/prompts";

const parser = StructuredOutputParser.fromNamesAndDescriptions({
  answer: "answer to the user's question",
  source: "source used to answer the user's question, should be a website.",
});

const chain = RunnableSequence.from([
  PromptTemplate.fromTemplate(
    "Answer the users question as best as possible.\n{format_instructions}\n{question}"
  ),
  new OpenAI({ temperature: 0 }),
  parser,
]);

console.log(parser.getFormatInstructions());
/*
Answer the users question as best as possible.
You must format your output as a JSON value that adheres to a given "JSON Schema" instance.

"JSON Schema" is a declarative language that allows you to annotate and validate JSON documents.

For example, the example "JSON Schema" instance {{"properties": {{"foo": {{"description": "a list of test words", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}}}
would match an object with one required property, "foo". The "type" property specifies "foo" must be an "array", and the "description" property semantically describes it as "a list of test words". The items within "foo" must be strings.
Thus, the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of this example "JSON Schema". The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.

Your output will be parsed and type-checked according to the provided schema instance, so make sure all fields in your output match the schema exactly and there are no trailing commas!

Here is the JSON Schema instance your output must adhere to. Include the enclosing markdown codeblock:
```{"type":"object","properties":{"answer":{"type":"string","description":"answer to the user's question"},"sources":{"type":"array","items":{"type":"string"},"description":"sources used to answer the question, should be websites."}},"required":["answer","sources"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}```

What is the capital of France?
*/

const response = await chain.invoke({
  question: "What is the capital of France?",
  format_instructions: parser.getFormatInstructions(),
});

console.log(response);
// { answer: 'Paris', source: 'https://en.wikipedia.org/wiki/Paris' }
````
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [StructuredOutputParser](https://api.js.langchain.com/classes/langchain_output_parsers.StructuredOutputParser.html) from `langchain/output_parsers`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
Structured Output Parser with Zod Schema
----------------------------------------
This output parser can also be used when you want to define the output schema using Zod, a TypeScript validation library. The Zod schema passed in needs to be parseable from a JSON string, so e.g. `z.date()` is not allowed.
```typescript
import { z } from "zod";
import { OpenAI } from "@langchain/openai";
import { RunnableSequence } from "@langchain/core/runnables";
import { StructuredOutputParser } from "langchain/output_parsers";
import { PromptTemplate } from "@langchain/core/prompts";

// We can use zod to define a schema for the output using the
// `fromZodSchema` method of `StructuredOutputParser`.
const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    answer: z.string().describe("answer to the user's question"),
    sources: z
      .array(z.string())
      .describe("sources used to answer the question, should be websites."),
  })
);

const chain = RunnableSequence.from([
  PromptTemplate.fromTemplate(
    "Answer the users question as best as possible.\n{format_instructions}\n{question}"
  ),
  new OpenAI({ temperature: 0 }),
  parser,
]);

console.log(parser.getFormatInstructions());
/*
(Prints the same format instructions as in the previous example.)
*/

const response = await chain.invoke({
  question: "What is the capital of France?",
  format_instructions: parser.getFormatInstructions(),
});

console.log(response);
// { answer: 'Paris', sources: [ 'https://en.wikipedia.org/wiki/Paris' ] }
```
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [RunnableSequence](https://api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [StructuredOutputParser](https://api.js.langchain.com/classes/langchain_output_parsers.StructuredOutputParser.html) from `langchain/output_parsers`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
String Evaluators
=================
A string evaluator is a component within LangChain designed to assess the performance of a language model by comparing its generated outputs (predictions) to a reference string or an input. This comparison is a crucial step in the evaluation of language models, providing a measure of the accuracy or quality of the generated text.
In practice, string evaluators are typically used to evaluate a predicted string against a given input, such as a question or a prompt. Often, a reference label or context string is provided to define what a correct or ideal response would look like. These evaluators can be customized to tailor the evaluation process to fit your application's specific requirements.
To create a custom string evaluator, inherit from the abstract `StringEvaluator` class exported from `langchain/evaluation` and implement the `_evaluateStrings` method (a toy example follows the attribute list below).
Here's a summary of the key attributes and methods associated with a string evaluator:
* `evaluationName`: Specifies the name of the evaluation.
* `requiresInput`: Boolean attribute that indicates whether the evaluator requires an input string. If `true`, the evaluator will raise an error when the input isn't provided. If `false`, a warning will be logged if an input _is_ provided, indicating that it will not be considered in the evaluation.
* `requiresReference`: Boolean attribute specifying whether the evaluator requires a reference label. If `true`, the evaluator will raise an error when the reference isn't provided. If `false`, a warning will be logged if a reference _is_ provided, indicating that it will not be considered in the evaluation.
String evaluators also implement the following methods:
* `evaluateStrings`: Evaluates the output of the Chain or Language Model, with support for optional input and label.
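As a rough illustration of that shape, here is a toy custom evaluator that scores a case-insensitive exact match against the reference label. The base-class constructor and exact result type are assumptions here; consult the `langchain/evaluation` API reference for the precise contract.

```typescript
import { StringEvaluator } from "langchain/evaluation";

// A toy evaluator: score 1 if the prediction matches the reference
// label (ignoring case and surrounding whitespace), else 0.
class ExactMatchEvaluator extends StringEvaluator {
  evaluationName = "exact_match";
  requiresInput = false;
  requiresReference = true;

  async _evaluateStrings(args: {
    prediction: string;
    reference?: string;
    input?: string;
  }) {
    const match =
      args.prediction.trim().toLowerCase() ===
      (args.reference ?? "").trim().toLowerCase();
    return { score: match ? 1 : 0 };
  }
}

const evaluator = new ExactMatchEvaluator();
console.log(
  await evaluator.evaluateStrings({ prediction: "Paris", reference: "paris" })
);
// { score: 1 }
```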
The following sections provide detailed information on available string evaluator implementations as well as how to create a custom string evaluator.
* [Criteria Evaluation](/v0.1/docs/guides/evaluation/string/criteria/): In scenarios where you wish to assess a model's output using a specific rubric or criteria set, the criteria evaluator proves to be a handy tool. It allows you to verify if an LLM or Chain's output complies with a defined set of criteria.
* [Embedding Distance](/v0.1/docs/guides/evaluation/string/embedding_distance/): To measure semantic similarity (or dissimilarity) between a prediction and a reference label string, you could use a vector distance metric between the two embedded representations using the `embedding_distance` evaluator.
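The built-in string evaluators are loaded by type rather than instantiated directly. A minimal sketch with the criteria evaluator (it grades with an LLM under the hood, so an OpenAI API key is assumed; the exact result fields may vary):

```typescript
import { loadEvaluator } from "langchain/evaluation";

// Load a built-in LLM-graded evaluator for the "conciseness" criterion.
const evaluator = await loadEvaluator("criteria", { criteria: "conciseness" });

const res = await evaluator.evaluateStrings({
  input: "What's 2+2?",
  prediction:
    "What's 2+2? That's an elementary question. The answer is four.",
});

console.log(res);
// e.g. { reasoning: "...", value: "N", score: 0 }
```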
Comparison Evaluators
=====================
Comparison evaluators in LangChain help compare the outputs of two different chains or LLMs. These evaluators are helpful for comparative analyses, such as A/B testing between two language models, or comparing different versions of the same model. They can also be useful for things like generating preference scores for AI-assisted reinforcement learning.
These evaluators inherit from the `PairwiseStringEvaluator` or `LLMPairwiseStringEvaluator` class, providing a comparison interface for two strings (typically the outputs from two different prompts or models, or two versions of the same model). In essence, a comparison evaluator performs an evaluation on a pair of strings and returns an object containing the evaluation score and other relevant details.
To create a custom comparison evaluator, inherit from the `PairwiseStringEvaluator` or `LLMPairwiseStringEvaluator` abstract classes exported from `langchain/evaluation` and override the `_evaluateStringPairs` method, as in the sketch after the list below.
Here's a summary of the key methods and properties of a comparison evaluator:
* `_evaluateStringPairs`: Evaluates the output string pairs. This method should be overridden when creating custom evaluators.
* `requiresInput`: This property indicates whether this evaluator requires an input string.
* `requiresReference`: This property specifies whether this evaluator requires a reference label.
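Here is a toy sketch of that interface: an evaluator that simply prefers the shorter of two predictions. The exact base-class contract (constructor, result shape) is an assumption; check the API reference before relying on it.

```typescript
import { PairwiseStringEvaluator } from "langchain/evaluation";

// A toy comparison: prefer the shorter of the two predictions.
class PreferShorterEvaluator extends PairwiseStringEvaluator {
  requiresInput = false;
  requiresReference = false;

  async _evaluateStringPairs(args: {
    prediction: string;
    predictionB: string;
    input?: string;
    reference?: string;
  }) {
    const preferA = args.prediction.length <= args.predictionB.length;
    return {
      score: preferA ? 1 : 0,
      reasoning: "Shorter responses are preferred by this toy evaluator.",
    };
  }
}
```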
Detailed information about creating custom evaluators and the available built-in comparison evaluators is provided in the following sections.
* [Pairwise Embedding Distance](/v0.1/docs/guides/evaluation/comparison/pairwise_embedding_distance/): One way to measure the similarity (or dissimilarity) between two predictions on a shared or similar input is to embed the predictions and compute a vector distance between the two embeddings.
* [Pairwise String Comparison](/v0.1/docs/guides/evaluation/comparison/pairwise_string/): Often you will want to compare the predictions of an LLM, Chain, or Agent for a given input. The StringComparison evaluators facilitate this kind of comparison.
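Like the string evaluators, the built-in comparison evaluators are loaded by type. A hedged sketch (the LLM-graded evaluator assumes an OpenAI API key, and the result fields may vary):

```typescript
import { loadEvaluator } from "langchain/evaluation";

// Load an LLM-graded pairwise evaluator judging "correctness".
const chain = await loadEvaluator("pairwise_string", {
  criteria: "correctness",
});

const res = await chain.evaluateStringPairs({
  prediction: "there are three dogs",
  predictionB: "4",
  input: "how many dogs are in the park?",
  reference: "four",
});

console.log(res);
// e.g. { reasoning: "...", value: "B", score: 0 }
```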
Trajectory Evaluators
=====================
Trajectory Evaluators in LangChain provide a more holistic approach to evaluating an agent. These evaluators assess the full sequence of actions taken by an agent and their corresponding responses, which we refer to as the "trajectory". This allows you to better measure an agent's effectiveness and capabilities.
A Trajectory Evaluator implements the `AgentTrajectoryEvaluator` interface, which requires the following method:
* `evaluateAgentTrajectory`: This method evaluates an agent's trajectory.
The method accepts three main parameters:
* `input`: The initial input given to the agent.
* `prediction`: The final predicted response from the agent.
* `agentTrajectory`: The intermediate steps taken by the agent, given as a list of tuples.
The method returns an object. It is recommended that custom implementations return a `score` (a float indicating the effectiveness of the agent) and `reasoning` (a string explaining the reasoning behind the score).
You can capture an agent's trajectory by initializing the agent with `returnIntermediateSteps: true`. This lets you collect all intermediate steps without relying on special callbacks.
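Here is a minimal sketch of capturing a trajectory this way. The calculator tool and agent type are illustrative choices, not requirements; any v0.1 agent setup that accepts `returnIntermediateSteps` should work.

```typescript
import { OpenAI } from "@langchain/openai";
import { Calculator } from "@langchain/community/tools/calculator";
import { initializeAgentExecutorWithOptions } from "langchain/agents";

const model = new OpenAI({ temperature: 0 });
const executor = await initializeAgentExecutorWithOptions(
  [new Calculator()],
  model,
  {
    agentType: "zero-shot-react-description",
    // Keep the full trajectory on the result object.
    returnIntermediateSteps: true,
  }
);

const result = await executor.invoke({ input: "What is 3 to the 5th power?" });

// `result.output` is the final answer; `result.intermediateSteps` holds the
// (action, observation) pairs that a trajectory evaluator can score.
console.log(result.output);
console.log(result.intermediateSteps);
```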
For a deeper dive into the implementation and use of Trajectory Evaluators, refer to the sections below.
* [Agent Trajectory](/v0.1/docs/guides/evaluation/trajectory/trajectory_eval/): Agents can be difficult to holistically evaluate due to the breadth of actions and generations they can make. We recommend using multiple evaluation techniques appropriate to your use case. One way to evaluate an agent is to look at the whole trajectory of actions taken along with their responses.
Examples
========
π§ _Docs under construction_ π§
Below are some examples for inspecting and checking different chains.
* [Comparing Chain Outputs](/v0.1/docs/guides/evaluation/examples/comparisons/): Suppose you have two different prompts (or LLMs). How do you know which will generate "better" results?
Conversation buffer memory
==========================
This page shows how to use `BufferMemory`. This memory allows for storing messages and later formats them into a prompt input variable.
We can first use it in a conversation chain, where the stored history is injected into the prompt as a string:
Tip: See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { OpenAI } from "@langchain/openai";
import { BufferMemory } from "langchain/memory";
import { ConversationChain } from "langchain/chains";

const model = new OpenAI({});
const memory = new BufferMemory();
const chain = new ConversationChain({ llm: model, memory: memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
```

```
{response: " Hi Jim! It's nice to meet you. My name is AI. What would you like to talk about?"}
```

```typescript
const res2 = await chain.call({ input: "What's my name?" });
console.log({ res2 });
```

```
{response: ' You said your name is Jim. Is there anything else you would like to talk about?'}
```
You can also load messages into a `BufferMemory` instance by creating and passing in a `ChatMessageHistory` object. This lets you easily pick up state from past conversations:
```typescript
import { BufferMemory, ChatMessageHistory } from "langchain/memory";
import { HumanMessage, AIMessage } from "langchain/schema";

const pastMessages = [
  new HumanMessage("My name's Jonas"),
  new AIMessage("Nice to meet you, Jonas!"),
];

const memory = new BufferMemory({
  chatHistory: new ChatMessageHistory(pastMessages),
});
```
Conversation buffer window memory
=================================
`ConversationBufferWindowMemory` keeps a list of the interactions of the conversation over time, using only the last `k` interactions. This is useful for keeping a sliding window of the most recent interactions so the buffer does not get too large.
Let's first explore the basic functionality of this type of memory.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`
import { OpenAI } from "@langchain/openai";
import { BufferWindowMemory } from "langchain/memory";
import { ConversationChain } from "langchain/chains";

const model = new OpenAI({});
const memory = new BufferWindowMemory({ k: 1 });
const chain = new ConversationChain({ llm: model, memory: memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log({ res1 });
{response: " Hi Jim! It's nice to meet you. My name is AI. What would you like to talk about?"}
const res2 = await chain.call({ input: "What's my name?" });console.log({ res2 });
{response: ' You said your name is Jim. Is there anything else you would like to talk about?'}
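Note that with `k: 1`, only the single most recent exchange is kept, which is why the follow-up question above still works: the introduction is the only interaction in the window. As a rough sketch of the window sliding (exact model output will vary), one more unrelated exchange pushes the introduction out:

```typescript
// Continuing from the chain above (k = 1): this exchange replaces
// "Hi! I'm Jim." as the only interaction in the window.
const res3 = await chain.call({ input: "What's the weather like today?" });

// The introduction is gone from the buffer, so the model
// can no longer recover the name from memory.
const res4 = await chain.call({ input: "What's my name?" });
console.log({ res4 });
```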
Using Buffer Memory with Chat Models
====================================
This example covers how to use chat-specific memory classes with chat models. The key thing to notice is that setting `returnMessages: true` makes the memory return a list of chat messages instead of a string.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`
import { ConversationChain } from "langchain/chains";
import { ChatOpenAI } from "@langchain/openai";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { BufferMemory } from "langchain/memory";

const chat = new ChatOpenAI({ temperature: 0 });

const chatPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.",
  ],
  new MessagesPlaceholder("history"),
  ["human", "{input}"],
]);

const chain = new ConversationChain({
  memory: new BufferMemory({ returnMessages: true, memoryKey: "history" }),
  prompt: chatPrompt,
  llm: chat,
});

const response = await chain.invoke({
  input: "hi! whats up?",
});

console.log(response);
#### API Reference:
* [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [MessagesPlaceholder](https://api.js.langchain.com/classes/langchain_core_prompts.MessagesPlaceholder.html) from `@langchain/core/prompts`
* [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory`
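To see the difference `returnMessages: true` makes, you can inspect the memory directly. A minimal sketch (sample outputs abbreviated):

```typescript
import { BufferMemory } from "langchain/memory";

// Default: history comes back as one formatted string.
const stringMemory = new BufferMemory();
await stringMemory.saveContext({ input: "hi" }, { output: "hello!" });
console.log(await stringMemory.loadMemoryVariables({}));
// { history: "Human: hi\nAI: hello!" }

// With returnMessages: true, history is a list of message objects,
// ready to be slotted into a MessagesPlaceholder.
const messageMemory = new BufferMemory({ returnMessages: true });
await messageMemory.saveContext({ input: "hi" }, { output: "hello!" });
console.log(await messageMemory.loadMemoryVariables({}));
// { history: [HumanMessage { ... }, AIMessage { ... }] }
```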
Conversation summary memory
===========================
Now let's take a look at a slightly more complex type of memory: `ConversationSummaryMemory`. This type of memory summarizes the conversation as it happens and stores the current summary in memory, which can then be injected into a prompt or chain. It is most useful for longer conversations, where keeping the past message history in the prompt verbatim would take up too many tokens.
Let's first explore the basic functionality of this type of memory.
Usage, with an LLM
------------------
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`
import { OpenAI } from "@langchain/openai";
import { ConversationSummaryMemory } from "langchain/memory";
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "@langchain/core/prompts";

export const run = async () => {
  const memory = new ConversationSummaryMemory({
    memoryKey: "chat_history",
    llm: new OpenAI({ model: "gpt-3.5-turbo", temperature: 0 }),
  });

  const model = new OpenAI({ temperature: 0.9 });
  const prompt =
    PromptTemplate.fromTemplate(`The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
{chat_history}
Human: {input}
AI:`);
  const chain = new LLMChain({ llm: model, prompt, memory });

  const res1 = await chain.invoke({ input: "Hi! I'm Jim." });
  console.log({ res1, memory: await memory.loadMemoryVariables({}) });
  /*
    {
      res1: {
        text: " Hi Jim, I'm AI! It's nice to meet you. I'm an AI programmed to provide information about the environment around me. Do you have any specific questions about the area that I can answer for you?"
      },
      memory: {
        chat_history: 'Jim introduces himself to the AI and the AI responds, introducing itself as a program designed to provide information about the environment. The AI offers to answer any specific questions Jim may have about the area.'
      }
    }
  */

  const res2 = await chain.invoke({ input: "What's my name?" });
  console.log({ res2, memory: await memory.loadMemoryVariables({}) });
  /*
    {
      res2: { text: ' You told me your name is Jim.' },
      memory: {
        chat_history: 'Jim introduces himself to the AI and the AI responds, introducing itself as a program designed to provide information about the environment. The AI offers to answer any specific questions Jim may have about the area. Jim asks the AI what his name is, and the AI responds that Jim had previously told it his name.'
      }
    }
  */
};
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [ConversationSummaryMemory](https://api.js.langchain.com/classes/langchain_memory.ConversationSummaryMemory.html) from `langchain/memory`
* [LLMChain](https://api.js.langchain.com/classes/langchain_chains.LLMChain.html) from `langchain/chains`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
Usage, with a Chat Model
------------------------
import { ChatOpenAI } from "@langchain/openai";
import { ConversationSummaryMemory } from "langchain/memory";
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "@langchain/core/prompts";

export const run = async () => {
  const memory = new ConversationSummaryMemory({
    memoryKey: "chat_history",
    llm: new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 }),
  });

  const model = new ChatOpenAI();
  const prompt =
    PromptTemplate.fromTemplate(`The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
{chat_history}
Human: {input}
AI:`);
  const chain = new LLMChain({ llm: model, prompt, memory });

  const res1 = await chain.invoke({ input: "Hi! I'm Jim." });
  console.log({ res1, memory: await memory.loadMemoryVariables({}) });
  /*
    {
      res1: {
        text: "Hello Jim! It's nice to meet you. My name is AI. How may I assist you today?"
      },
      memory: {
        chat_history: 'Jim introduces himself to the AI and the AI greets him and offers assistance.'
      }
    }
  */

  const res2 = await chain.invoke({ input: "What's my name?" });
  console.log({ res2, memory: await memory.loadMemoryVariables({}) });
  /*
    {
      res2: {
        text: "Your name is Jim. It's nice to meet you, Jim. How can I assist you today?"
      },
      memory: {
        chat_history: 'Jim introduces himself to the AI and the AI greets him and offers assistance. The AI addresses Jim by name and asks how it can assist him.'
      }
    }
  */
};
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [ConversationSummaryMemory](https://api.js.langchain.com/classes/langchain_memory.ConversationSummaryMemory.html) from `langchain/memory`
* [LLMChain](https://api.js.langchain.com/classes/langchain_chains.LLMChain.html) from `langchain/chains`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
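You can also drive the memory directly, without a chain, by saving turns with `saveContext` and reading the running summary back with `loadMemoryVariables`. A minimal sketch (the summary text itself will vary by model):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ConversationSummaryMemory } from "langchain/memory";

const memory = new ConversationSummaryMemory({
  memoryKey: "chat_history",
  llm: new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 }),
});

// Each saveContext call folds the new turn into the running summary.
await memory.saveContext(
  { input: "Hi! I'm Jim." },
  { output: "Nice to meet you, Jim!" }
);
await memory.saveContext(
  { input: "I work in sales." },
  { output: "Sales sounds interesting!" }
);

console.log(await memory.loadMemoryVariables({}));
// { chat_history: "Jim introduces himself and says he works in sales. ..." }
```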
Combined memory
===============
It is also possible to use multiple memory classes in the same chain. To combine them, initialize a `CombinedMemory` instance with the individual memories and pass it to the chain.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`
import { ChatOpenAI } from "@langchain/openai";
import {
  BufferMemory,
  CombinedMemory,
  ConversationSummaryMemory,
} from "langchain/memory";
import { ConversationChain } from "langchain/chains";
import { PromptTemplate } from "@langchain/core/prompts";

// buffer memory
const bufferMemory = new BufferMemory({
  memoryKey: "chat_history_lines",
  inputKey: "input",
});

// summary memory
const summaryMemory = new ConversationSummaryMemory({
  llm: new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 }),
  inputKey: "input",
  memoryKey: "conversation_summary",
});

// combined memory
const memory = new CombinedMemory({
  memories: [bufferMemory, summaryMemory],
});

const _DEFAULT_TEMPLATE = `The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Summary of conversation:
{conversation_summary}
Current conversation:
{chat_history_lines}
Human: {input}
AI:`;

const PROMPT = new PromptTemplate({
  inputVariables: ["input", "conversation_summary", "chat_history_lines"],
  template: _DEFAULT_TEMPLATE,
});

const model = new ChatOpenAI({ temperature: 0.9, verbose: true });
const chain = new ConversationChain({ llm: model, memory, prompt: PROMPT });

const res1 = await chain.invoke({ input: "Hi! I'm Jim." });
console.log({ res1 });
/*
  {
    res1: {
      response: "Hello Jim! It's nice to meet you. How can I assist you today?"
    }
  }
*/

const res2 = await chain.invoke({ input: "Can you tell me a joke?" });
console.log({ res2 });
/*
  {
    res2: {
      response: 'Why did the scarecrow win an award? Because he was outstanding in his field!'
    }
  }
*/

const res3 = await chain.invoke({
  input: "What's my name and what joke did you just tell?",
});
console.log({ res3 });
/*
  {
    res3: {
      response: 'Your name is Jim. The joke I just told was about a scarecrow winning an award because he was outstanding in his field.'
    }
  }
*/
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [BufferMemory](https://api.js.langchain.com/classes/langchain_memory.BufferMemory.html) from `langchain/memory`
* [CombinedMemory](https://api.js.langchain.com/classes/langchain_memory.CombinedMemory.html) from `langchain/memory`
* [ConversationSummaryMemory](https://api.js.langchain.com/classes/langchain_memory.ConversationSummaryMemory.html) from `langchain/memory`
* [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
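Each sub-memory contributes its own variable key, which is why the prompt above can reference both `{conversation_summary}` and `{chat_history_lines}`. A quick way to see this is to inspect the combined memory directly; a minimal sketch continuing from the example above (output text will vary):

```typescript
// Continuing from the CombinedMemory instance above:
const vars = await memory.loadMemoryVariables({});
console.log(Object.keys(vars));
// [ 'chat_history_lines', 'conversation_summary' ]
// Raw recent turns live under "chat_history_lines";
// the rolling summary lives under "conversation_summary".
```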
Entity memory
=============
Entity Memory remembers given facts about specific entities in a conversation. It extracts information on entities (using an LLM) and builds up its knowledge of those entities over time (also using an LLM).
Let's first walk through using this functionality.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`
import { OpenAI } from "@langchain/openai";
import {
  EntityMemory,
  ENTITY_MEMORY_CONVERSATION_TEMPLATE,
} from "langchain/memory";
import { LLMChain } from "langchain/chains";

export const run = async () => {
  const memory = new EntityMemory({
    llm: new OpenAI({ temperature: 0 }),
    chatHistoryKey: "history", // Default value
    entitiesKey: "entities", // Default value
  });
  const model = new OpenAI({ temperature: 0.9 });
  const chain = new LLMChain({
    llm: model,
    // Default prompt - must include the set chatHistoryKey and entitiesKey as input variables.
    prompt: ENTITY_MEMORY_CONVERSATION_TEMPLATE,
    memory,
  });

  const res1 = await chain.invoke({ input: "Hi! I'm Jim." });
  console.log({
    res1,
    memory: await memory.loadMemoryVariables({ input: "Who is Jim?" }),
  });

  const res2 = await chain.invoke({
    input: "I work in construction. What about you?",
  });
  console.log({
    res2,
    memory: await memory.loadMemoryVariables({ input: "Who is Jim?" }),
  });
};
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [EntityMemory](https://api.js.langchain.com/classes/langchain_memory.EntityMemory.html) from `langchain/memory`
* [ENTITY\_MEMORY\_CONVERSATION\_TEMPLATE](https://api.js.langchain.com/variables/langchain_memory.ENTITY_MEMORY_CONVERSATION_TEMPLATE.html) from `langchain/memory`
* [LLMChain](https://api.js.langchain.com/classes/langchain_chains.LLMChain.html) from `langchain/chains`
### Inspecting the Memory Store
You can also inspect the memory store directly to see the current summary of each entity:
import { OpenAI } from "@langchain/openai";
import {
  EntityMemory,
  ENTITY_MEMORY_CONVERSATION_TEMPLATE,
} from "langchain/memory";
import { LLMChain } from "langchain/chains";

const memory = new EntityMemory({
  llm: new OpenAI({ temperature: 0 }),
});
const model = new OpenAI({ temperature: 0.9 });
const chain = new LLMChain({
  llm: model,
  prompt: ENTITY_MEMORY_CONVERSATION_TEMPLATE,
  memory,
});

await chain.invoke({ input: "Hi! I'm Jim." });
await chain.invoke({
  input: "I work in sales. What about you?",
});
const res = await chain.invoke({
  input: "My office is the Utica branch of Dunder Mifflin. What about you?",
});
console.log({
  res,
  memory: await memory.loadMemoryVariables({ input: "Who is Jim?" }),
});
/*
  {
    res: "As an AI language model, I don't have an office in the traditional sense. I exist entirely in digital space and am here to assist you with any questions or tasks you may have. Is there anything specific you need help with regarding your work at the Utica branch of Dunder Mifflin?",
    memory: {
      entities: {
        Jim: 'Jim is a human named Jim who works in sales.',
        Utica: 'Utica is the location of the branch of Dunder Mifflin where Jim works.',
        'Dunder Mifflin': 'Dunder Mifflin has a branch in Utica.'
      }
    }
  }
*/
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [EntityMemory](https://api.js.langchain.com/classes/langchain_memory.EntityMemory.html) from `langchain/memory`
* [ENTITY\_MEMORY\_CONVERSATION\_TEMPLATE](https://api.js.langchain.com/variables/langchain_memory.ENTITY_MEMORY_CONVERSATION_TEMPLATE.html) from `langchain/memory`
* [LLMChain](https://api.js.langchain.com/classes/langchain_chains.LLMChain.html) from `langchain/chains`
ConversationSummaryBufferMemory
===============================
`ConversationSummaryBufferMemory` combines the ideas behind [BufferMemory](/v0.1/docs/modules/memory/types/buffer/) and [ConversationSummaryMemory](/v0.1/docs/modules/memory/types/summary/). It keeps a buffer of recent interactions in memory, but rather than just completely flushing old interactions it compiles them into a summary and uses both. Unlike the previous implementation though, it uses token length rather than number of interactions to determine when to flush interactions.
Let's first walk through how to use it:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`
import { OpenAI, ChatOpenAI } from "@langchain/openai";
import { ConversationSummaryBufferMemory } from "langchain/memory";
import { ConversationChain } from "langchain/chains";
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  MessagesPlaceholder,
  SystemMessagePromptTemplate,
} from "@langchain/core/prompts";

// summary buffer memory
const memory = new ConversationSummaryBufferMemory({
  llm: new OpenAI({ model: "gpt-3.5-turbo-instruct", temperature: 0 }),
  maxTokenLimit: 10,
});

await memory.saveContext({ input: "hi" }, { output: "whats up" });
await memory.saveContext({ input: "not much you" }, { output: "not much" });
const history = await memory.loadMemoryVariables({});
console.log({ history });
/*
  {
    history: {
      history: 'System: \n' +
        'The human greets the AI, to which the AI responds.\n' +
        'Human: not much you\n' +
        'AI: not much'
    }
  }
*/

// We can also get the history as a list of messages
// (this is useful if you are using this with a chat prompt).
const chatPromptMemory = new ConversationSummaryBufferMemory({
  llm: new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 }),
  maxTokenLimit: 10,
  returnMessages: true,
});
await chatPromptMemory.saveContext({ input: "hi" }, { output: "whats up" });
await chatPromptMemory.saveContext(
  { input: "not much you" },
  { output: "not much" }
);

// We can also utilize the predict_new_summary method directly.
const messages = await chatPromptMemory.chatHistory.getMessages();
const previous_summary = "";
const predictSummary = await chatPromptMemory.predictNewSummary(
  messages,
  previous_summary
);
console.log(JSON.stringify(predictSummary));

// Using in a chain
// Let's walk through an example, again setting verbose to true so we can see the prompt.
const chatPrompt = ChatPromptTemplate.fromMessages([
  SystemMessagePromptTemplate.fromTemplate(
    "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."
  ),
  new MessagesPlaceholder("history"),
  HumanMessagePromptTemplate.fromTemplate("{input}"),
]);

const model = new ChatOpenAI({ temperature: 0.9, verbose: true });
const chain = new ConversationChain({
  llm: model,
  memory: chatPromptMemory,
  prompt: chatPrompt,
});

const res1 = await chain.invoke({ input: "Hi, what's up?" });
console.log({ res1 });
/*
  {
    res1: 'Hello! I am an AI language model, always ready to have a conversation. How can I assist you today?'
  }
*/

const res2 = await chain.invoke({
  input: "Just working on writing some documentation!",
});
console.log({ res2 });
/*
  {
    res2: "That sounds productive! Documentation is an important aspect of many projects. Is there anything specific you need assistance with regarding your documentation? I'm here to help!"
  }
*/

const res3 = await chain.invoke({
  input: "For LangChain! Have you heard of it?",
});
console.log({ res3 });
/*
  {
    res3: 'Yes, I am familiar with LangChain! It is a blockchain-based language learning platform that aims to connect language learners with native speakers for real-time practice and feedback. It utilizes smart contracts to facilitate secure transactions and incentivize participation. Users can earn tokens by providing language learning services or consuming them for language lessons.'
  }
*/

const res4 = await chain.invoke({
  input: "That's not the right one, although a lot of people confuse it for that!",
});
console.log({ res4 });
/*
  {
    res4: "I apologize for the confusion! Could you please provide some more information about the LangChain you're referring to? That way, I can better understand and assist you with writing documentation for it."
  }
*/
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [ConversationSummaryBufferMemory](https://api.js.langchain.com/classes/langchain_memory.ConversationSummaryBufferMemory.html) from `langchain/memory`
* [ConversationChain](https://api.js.langchain.com/classes/langchain_chains.ConversationChain.html) from `langchain/chains`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [HumanMessagePromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.HumanMessagePromptTemplate.html) from `@langchain/core/prompts`
* [MessagesPlaceholder](https://api.js.langchain.com/classes/langchain_core_prompts.MessagesPlaceholder.html) from `@langchain/core/prompts`
* [SystemMessagePromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.SystemMessagePromptTemplate.html) from `@langchain/core/prompts`
Webpages, with Puppeteer
========================
Compatibility
Only available on Node.js.
This example goes over how to load data from webpages using Puppeteer. One document will be created for each webpage.
Puppeteer is a Node.js library that provides a high-level API for controlling headless Chrome or Chromium. You can use Puppeteer to automate web page interactions, including extracting data from dynamic web pages that require JavaScript to render.
If you want a lighter-weight solution, and the webpages you want to load do not require JavaScript to render, you can use the [CheerioWebBaseLoader](/v0.2/docs/integrations/document_loaders/web_loaders/web_cheerio) instead.
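For reference, a minimal Cheerio-based sketch might look like this (assuming the `cheerio` peer dependency is installed; see the Cheerio loader page for details):

```typescript
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";

// Cheerio parses the static HTML only -- no JavaScript is executed.
const loader = new CheerioWebBaseLoader("https://example.com");
const docs = await loader.load();
console.log(docs[0].pageContent);
```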
Setup
-----
* npm: `npm install puppeteer`
* Yarn: `yarn add puppeteer`
* pnpm: `pnpm add puppeteer`
Usage
-----
import { PuppeteerWebBaseLoader } from "langchain/document_loaders/web/puppeteer";

/**
 * Loader uses `page.evaluate(() => document.body.innerHTML)`
 * as default evaluate function
 **/
const loader = new PuppeteerWebBaseLoader("https://www.tabnews.com.br/");

const docs = await loader.load();
Options
-------
Here's an explanation of the parameters you can pass to the `PuppeteerWebBaseLoader` constructor using the `PuppeteerWebBaseLoaderOptions` interface:
type PuppeteerWebBaseLoaderOptions = {
  launchOptions?: PuppeteerLaunchOptions;
  gotoOptions?: PuppeteerGotoOptions;
  evaluate?: (page: Page, browser: Browser) => Promise<string>;
};
1. `launchOptions`: an optional object that specifies additional options to pass to the puppeteer.launch() method. This can include options such as the headless flag to launch the browser in headless mode, or the slowMo option to slow down Puppeteer's actions to make them easier to follow.
2. `gotoOptions`: an optional object that specifies additional options to pass to the page.goto() method. This can include options such as the timeout option to specify the maximum navigation time in milliseconds, or the waitUntil option to specify when to consider the navigation as successful.
3. `evaluate`: an optional function that can be used to evaluate JavaScript code on the page using the page.evaluate() method. This can be useful for extracting data from the page or interacting with page elements. The function should return a Promise that resolves to a string containing the result of the evaluation.
By passing these options to the `PuppeteerWebBaseLoader` constructor, you can customize the behavior of the loader and use Puppeteer's powerful features to scrape and interact with web pages.
Here is a basic example to do it:
import { PuppeteerWebBaseLoader } from "langchain/document_loaders/web/puppeteer";

const loaderWithOptions = new PuppeteerWebBaseLoader(
  "https://www.tabnews.com.br/",
  {
    launchOptions: {
      headless: true,
    },
    gotoOptions: {
      waitUntil: "domcontentloaded",
    },
    /** Pass custom evaluate, in this case you get page and browser instances */
    async evaluate(page, browser) {
      await page.waitForResponse("https://www.tabnews.com.br/va/view");

      const result = await page.evaluate(() => document.body.innerHTML);
      await browser.close();
      return result;
    },
  }
);

const docsFromLoaderWithOptions = await loaderWithOptions.load();
console.log({ docsFromLoaderWithOptions });
#### API Reference:
* [PuppeteerWebBaseLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_puppeteer.PuppeteerWebBaseLoader.html) from `langchain/document_loaders/web/puppeteer`
### Screenshots
To take a screenshot of a site, initialize the loader the same as above, and call the `.screenshot()` method. This will return an instance of `Document` where the page content is a base64 encoded image, and the metadata contains a `source` field with the URL of the page.
import { PuppeteerWebBaseLoader } from "langchain/document_loaders/web/puppeteer";

const loaderWithOptions = new PuppeteerWebBaseLoader("https://langchain.com", {
  launchOptions: {
    headless: true,
  },
  gotoOptions: {
    waitUntil: "domcontentloaded",
  },
});

const screenshot = await loaderWithOptions.screenshot();
console.log({ screenshot });
#### API Reference:
* [PuppeteerWebBaseLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_puppeteer.PuppeteerWebBaseLoader.html) from `langchain/document_loaders/web/puppeteer`
https://js.langchain.com/v0.2/docs/integrations/document_loaders/web_loaders/web_cheerio
Webpages, with Cheerio
======================
This example goes over how to load data from webpages using Cheerio. One document will be created for each webpage.
Cheerio is a fast and lightweight library that allows you to parse and traverse HTML documents using a jQuery-like syntax. You can use Cheerio to extract data from web pages, without having to render them in a browser.
However, Cheerio does not simulate a web browser, so it cannot execute JavaScript code on the page. This means that it cannot extract data from dynamic web pages that require JavaScript to render. To do that, you can use the [`PlaywrightWebBaseLoader`](/v0.2/docs/integrations/document_loaders/web_loaders/web_playwright) or [`PuppeteerWebBaseLoader`](/v0.2/docs/integrations/document_loaders/web_loaders/web_puppeteer) instead.
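To make the "jQuery-like syntax" concrete, here is a minimal standalone Cheerio sketch (plain `cheerio`, independent of LangChain):

```typescript
import * as cheerio from "cheerio";

// Load an HTML string and query it with jQuery-style selectors.
const $ = cheerio.load("<ul><li>alpha</li><li>beta</li></ul>");
const items = $("li")
  .map((_, el) => $(el).text())
  .get();
console.log(items); // ["alpha", "beta"]
```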
Setup
---------------------------------------
npm install cheerio
yarn add cheerio
pnpm add cheerio
Usage
---------------------------------------
```typescript
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";

const loader = new CheerioWebBaseLoader(
  "https://news.ycombinator.com/item?id=34817881"
);
const docs = await loader.load();
```
Usage, with a custom selector
--------------------------------------------------------------
```typescript
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";

const loader = new CheerioWebBaseLoader(
  "https://news.ycombinator.com/item?id=34817881",
  {
    selector: "p.athing",
  }
);
const docs = await loader.load();
```
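To sanity-check what either loader produced, you can inspect the returned documents. A quick sketch, assuming the standard `Document` shape with the source URL in `metadata.source`:

```typescript
// Each Document carries the extracted text and the URL it came from.
for (const doc of docs) {
  console.log(doc.metadata.source); // URL the document was loaded from
  console.log(doc.pageContent.slice(0, 200)); // first 200 characters of extracted text
}
```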
https://js.langchain.com/v0.2/docs/integrations/document_loaders/web_loaders/web_playwright
Webpages, with Playwright
=========================
Compatibility
Only available on Node.js.
This example goes over how to load data from webpages using Playwright. One document will be created for each webpage.
Playwright is a Node.js library that provides a high-level API for controlling multiple browser engines, including Chromium, Firefox, and WebKit. You can use Playwright to automate web page interactions, including extracting data from dynamic web pages that require JavaScript to render.
If you want a lighter-weight solution, and the webpages you want to load do not require JavaScript to render, you can use the [`CheerioWebBaseLoader`](/v0.2/docs/integrations/document_loaders/web_loaders/web_cheerio) instead.
Setup
---------------------------------------
npm install playwright
yarn add playwright
pnpm add playwright
Usage
---------------------------------------
```typescript
import { PlaywrightWebBaseLoader } from "langchain/document_loaders/web/playwright";

/**
 * Loader uses `page.content()`
 * as the default evaluate function.
 **/
const loader = new PlaywrightWebBaseLoader("https://www.tabnews.com.br/");
const docs = await loader.load();
```
Options
---------------------------------------------
Here's an explanation of the parameters you can pass to the `PlaywrightWebBaseLoader` constructor using the `PlaywrightWebBaseLoaderOptions` interface:

```typescript
type PlaywrightWebBaseLoaderOptions = {
  launchOptions?: LaunchOptions;
  gotoOptions?: PlaywrightGotoOptions;
  evaluate?: PlaywrightEvaluate;
};
```
1. `launchOptions`: an optional object that specifies additional options to pass to the `playwright.chromium.launch()` method. This can include options such as the `headless` flag to launch the browser in headless mode.
2. `gotoOptions`: an optional object that specifies additional options to pass to the `page.goto()` method. This can include options such as the `timeout` option to specify the maximum navigation time in milliseconds, or the `waitUntil` option to specify when to consider the navigation successful.
3. `evaluate`: an optional function that can be used to evaluate JavaScript code on the page using a custom evaluation function. This can be useful for extracting data from the page, interacting with page elements, or handling specific HTTP responses. The function should return a Promise that resolves to a string containing the result of the evaluation.
By passing these options to the `PlaywrightWebBaseLoader` constructor, you can customize the behavior of the loader and use Playwright's powerful features to scrape and interact with web pages.
Here is a basic example:
```typescript
import {
  PlaywrightWebBaseLoader,
  Page,
  Browser,
} from "langchain/document_loaders/web/playwright";

const url = "https://www.tabnews.com.br/";
const loader = new PlaywrightWebBaseLoader(url);
const docs = await loader.load();

// raw HTML page content
const extractedContents = docs[0].pageContent;
```
And a more advanced example:
```typescript
import {
  PlaywrightWebBaseLoader,
  Page,
  Browser,
} from "langchain/document_loaders/web/playwright";

const loader = new PlaywrightWebBaseLoader("https://www.tabnews.com.br/", {
  launchOptions: {
    headless: true,
  },
  gotoOptions: {
    waitUntil: "domcontentloaded",
  },
  /** Pass a custom evaluate function; in this case you get the page and browser instances */
  async evaluate(page: Page, browser: Browser, response: Response | null) {
    await page.waitForResponse("https://www.tabnews.com.br/va/view");
    const result = await page.evaluate(() => document.body.innerHTML);
    return result;
  },
});

const docs = await loader.load();
```
https://js.langchain.com/v0.2/docs/integrations/document_loaders/web_loaders/college_confidential
College Confidential
====================
This example goes over how to load data from the College Confidential website, using Cheerio. One document will be created for each page.
Setup
---------------------------------------
npm install cheerio
yarn add cheerio
pnpm add cheerio
Usage
---------------------------------------
```typescript
import { CollegeConfidentialLoader } from "langchain/document_loaders/web/college_confidential";

const loader = new CollegeConfidentialLoader(
  "https://www.collegeconfidential.com/colleges/brown-university/"
);
const docs = await loader.load();
```
https://js.langchain.com/v0.2/docs/integrations/document_loaders/web_loaders/apify_dataset
Apify Dataset
=============
This guide shows how to use [Apify](https://apify.com) with LangChain to load documents from an Apify Dataset.
Overview
------------------------------------------------
[Apify](https://apify.com) is a cloud platform for web scraping and data extraction, which provides an [ecosystem](https://apify.com/store) of more than a thousand ready-made apps called _Actors_ for various web scraping, crawling, and data extraction use cases.
This guide shows how to load documents from an [Apify Dataset](https://docs.apify.com/platform/storage/dataset), a scalable, append-only storage built for structured web scraping results, such as a list of products or Google SERPs, which can be exported to formats like JSON, CSV, or Excel.
Datasets are typically used to save the results of Actors. For example, the [Website Content Crawler](https://apify.com/apify/website-content-crawler) Actor deeply crawls websites such as documentation, knowledge bases, help centers, or blogs, and then stores the text content of the webpages in a dataset. You can then feed those documents into a vector index and answer questions over it.
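The core integration point, shown in isolation before the full end-to-end examples below, is the `datasetMappingFunction`: it converts one Apify dataset item into a LangChain `Document`. A minimal sketch, assuming items shaped like `{ url, text }` as produced by Website Content Crawler (mirroring the examples that follow):

```typescript
import { Document } from "@langchain/core/documents";

// Maps one Apify dataset item (assumed shape: { url, text }) to a LangChain Document.
const datasetMappingFunction = (item: Record<string, unknown>) =>
  new Document({
    pageContent: (item.text || "") as string,
    metadata: { source: item.url },
  });
```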
Setup
---------------------------------------
You'll first need to install the official Apify client:
npm install apify-client
yarn add apify-client
pnpm add apify-client
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
You'll also need to sign up and retrieve your [Apify API token](https://console.apify.com/account/integrations).
Usage
---------------------------------------

### From a New Dataset
If you don't already have an existing dataset on the Apify platform, you'll need to initialize the document loader by calling an Actor and waiting for the results.
**Note:** Calling an Actor can take a significant amount of time, on the order of hours, or even days for large sites!
Here's an example:
```typescript
import { ApifyDatasetLoader } from "langchain/document_loaders/web/apify_dataset";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { Document } from "@langchain/core/documents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";

/*
 * datasetMappingFunction is a function that maps your Apify dataset format to LangChain documents.
 * In the below example, the Apify dataset format looks like this:
 * {
 *   "url": "https://apify.com",
 *   "text": "Apify is the best web scraping and automation platform."
 * }
 */
const loader = await ApifyDatasetLoader.fromActorCall(
  "apify/website-content-crawler",
  {
    startUrls: [{ url: "https://js.langchain.com/docs/" }],
  },
  {
    datasetMappingFunction: (item) =>
      new Document({
        pageContent: (item.text || "") as string,
        metadata: { source: item.url },
      }),
    clientOptions: {
      token: "your-apify-token", // Or set as process.env.APIFY_API_TOKEN
    },
  }
);

const docs = await loader.load();

const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

const model = new ChatOpenAI({
  temperature: 0,
});

const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user's questions based on the below context:\n\n{context}",
  ],
  ["human", "{input}"],
]);

const combineDocsChain = await createStuffDocumentsChain({
  llm: model,
  prompt: questionAnsweringPrompt,
});

const chain = await createRetrievalChain({
  retriever: vectorStore.asRetriever(),
  combineDocsChain,
});

const res = await chain.invoke({ input: "What is LangChain?" });

console.log(res.answer);
console.log(res.context.map((doc) => doc.metadata.source));

/*
  LangChain is a framework for developing applications powered by language models.
  [
    'https://js.langchain.com/docs/',
    'https://js.langchain.com/docs/modules/chains/',
    'https://js.langchain.com/docs/modules/chains/llmchain/',
    'https://js.langchain.com/docs/category/functions-4'
  ]
*/
```
#### API Reference:
* [ApifyDatasetLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_apify_dataset.ApifyDatasetLoader.html) from `langchain/document_loaders/web/apify_dataset`
* [HNSWLib](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [createStuffDocumentsChain](https://v02.api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) from `langchain/chains/combine_documents`
* [createRetrievalChain](https://v02.api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) from `langchain/chains/retrieval`
From an Existing Dataset
------------------------------------------------------------------------------------------------
If you already have an existing dataset on the Apify platform, you can initialize the document loader with the constructor directly:
```typescript
import { ApifyDatasetLoader } from "langchain/document_loaders/web/apify_dataset";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { Document } from "@langchain/core/documents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createRetrievalChain } from "langchain/chains/retrieval";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";

/*
 * datasetMappingFunction is a function that maps your Apify dataset format to LangChain documents.
 * In the below example, the Apify dataset format looks like this:
 * {
 *   "url": "https://apify.com",
 *   "text": "Apify is the best web scraping and automation platform."
 * }
 */
const loader = new ApifyDatasetLoader("your-dataset-id", {
  datasetMappingFunction: (item) =>
    new Document({
      pageContent: (item.text || "") as string,
      metadata: { source: item.url },
    }),
  clientOptions: {
    token: "your-apify-token", // Or set as process.env.APIFY_API_TOKEN
  },
});

const docs = await loader.load();

const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

const model = new ChatOpenAI({
  temperature: 0,
});

const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user's questions based on the below context:\n\n{context}",
  ],
  ["human", "{input}"],
]);

const combineDocsChain = await createStuffDocumentsChain({
  llm: model,
  prompt: questionAnsweringPrompt,
});

const chain = await createRetrievalChain({
  retriever: vectorStore.asRetriever(),
  combineDocsChain,
});

const res = await chain.invoke({ input: "What is LangChain?" });

console.log(res.answer);
console.log(res.context.map((doc) => doc.metadata.source));

/*
  LangChain is a framework for developing applications powered by language models.
  [
    'https://js.langchain.com/docs/',
    'https://js.langchain.com/docs/modules/chains/',
    'https://js.langchain.com/docs/modules/chains/llmchain/',
    'https://js.langchain.com/docs/category/functions-4'
  ]
*/
```
#### API Reference:
* [ApifyDatasetLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_apify_dataset.ApifyDatasetLoader.html) from `langchain/document_loaders/web/apify_dataset`
* [HNSWLib](https://v02.api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [createRetrievalChain](https://v02.api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) from `langchain/chains/retrieval`
* [createStuffDocumentsChain](https://v02.api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) from `langchain/chains/combine_documents`
https://js.langchain.com/v0.2/docs/integrations/document_loaders/web_loaders/confluence
Confluence
==========
Compatibility
Only available on Node.js.
This covers how to load document objects from pages in a Confluence space.
Credentials
---------------------------------------------------------
* You'll need to set up an access token and provide it along with your Confluence username in order to authenticate requests.
* You'll also need the space key for the space containing the pages to load as documents. This can be found in the URL when navigating to your space, e.g. `https://example.atlassian.net/wiki/spaces/{SPACE_KEY}`.
* And you'll need to install `html-to-text` to parse the pages into plain text:
npm install html-to-text
yarn add html-to-text
pnpm add html-to-text
Usage
---------------------------------------
```typescript
import { ConfluencePagesLoader } from "langchain/document_loaders/web/confluence";

const username = process.env.CONFLUENCE_USERNAME;
const accessToken = process.env.CONFLUENCE_ACCESS_TOKEN;
const personalAccessToken = process.env.CONFLUENCE_PAT;

if (username && accessToken) {
  const loader = new ConfluencePagesLoader({
    baseUrl: "https://example.atlassian.net/wiki",
    spaceKey: "~EXAMPLE362906de5d343d49dcdbae5dEXAMPLE",
    username,
    accessToken,
  });
  const documents = await loader.load();
  console.log(documents);
} else if (personalAccessToken) {
  const loader = new ConfluencePagesLoader({
    baseUrl: "https://example.atlassian.net/wiki",
    spaceKey: "~EXAMPLE362906de5d343d49dcdbae5dEXAMPLE",
    personalAccessToken,
  });
  const documents = await loader.load();
  console.log(documents);
} else {
  console.log(
    "You need either a username and access token, or a personal access token (PAT), to use this example."
  );
}
```
#### API Reference:
* [ConfluencePagesLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_confluence.ConfluencePagesLoader.html) from `langchain/document_loaders/web/confluence`
https://js.langchain.com/v0.2/docs/integrations/document_loaders/web_loaders/browserbase
Browserbase Loader
==================
Description
---------------------------------------------------------
[Browserbase](https://browserbase.com) is a serverless platform for running headless browsers. It offers advanced debugging, session recordings, stealth mode, integrated proxies, and captcha solving.
Installation
------------------------------------------------------------
* Get an API key from [browserbase.com](https://browserbase.com) and set it in environment variables (`BROWSERBASE_API_KEY`).
* Install the [Browserbase SDK](http://github.com/browserbase/js-sdk):
npm i @browserbasehq/sdk
yarn add @browserbasehq/sdk
pnpm add @browserbasehq/sdk
Example
---------------------------------------------
Utilize the `BrowserbaseLoader` as follows to allow your agent to load websites:
```typescript
import { BrowserbaseLoader } from "langchain/document_loaders/web/browserbase";

const loader = new BrowserbaseLoader(["https://example.com"], {
  textContent: true,
});
const docs = await loader.load();
```
#### API Reference:
* [BrowserbaseLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_browserbase.BrowserbaseLoader.html) from `langchain/document_loaders/web/browserbase`
Arguments
---------------------------------------------------
* `urls`: Required. List of URLs to load.
Options
---------------------------------------------
* `apiKey`: Optional. Specifies the Browserbase API key. Defaults to the `BROWSERBASE_API_KEY` environment variable.
* `textContent`: Optional. Load pages as readable text. Default is `false`.
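For completeness, a sketch passing both options explicitly; note that the `apiKey` option name is an assumption based on the camelCase convention of the example above (normally you would just set `BROWSERBASE_API_KEY` in the environment):

```typescript
import { BrowserbaseLoader } from "langchain/document_loaders/web/browserbase";

// apiKey is an assumed option name; textContent matches the example above.
const loader = new BrowserbaseLoader(["https://example.com"], {
  apiKey: process.env.BROWSERBASE_API_KEY,
  textContent: true,
});
const docs = await loader.load();
```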
https://js.langchain.com/v0.2/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription
[Skip to main content](#__docusaurus_skipToContent_fallback)
You are viewing the **preview** v0.2 docs. View the **stable** v0.1 docs [here](/v0.1/docs/get_started/introduction/). Leave feedback on the v0.2 docs [here](https://github.com/langchain-ai/langchainjs/discussions/5386).
[
![π¦οΈπ Langchain](/v0.2/img/brand/wordmark.png)![π¦οΈπ Langchain](/v0.2/img/brand/wordmark-dark.png)
](/v0.2/)[Integrations](/v0.2/docs/integrations/platforms/)[API Reference](https://v02.api.js.langchain.com)
[More](#)
* [People](/v0.2/docs/people/)
* [Community](/v0.2/docs/community)
* [Tutorials](/v0.2/docs/additional_resources/tutorials)
* [Contributing](/v0.2/docs/contributing)
[v0.2](#)
* [v0.2](/v0.2/docs/introduction)
* [v0.1](https://js.langchain.com/v0.1/docs/get_started/introduction)
[π¦π](#)
* [LangSmith](https://smith.langchain.com)
* [LangSmith Docs](https://docs.smith.langchain.com)
* [LangChain Hub](https://smith.langchain.com/hub)
* [LangServe](https://github.com/langchain-ai/langserve)
* [Python Docs](https://python.langchain.com/)
[Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs)
Search
* [Providers](/v0.2/docs/integrations/platforms/)
* [Providers](/v0.2/docs/integrations/platforms/)
* [Anthropic](/v0.2/docs/integrations/platforms/anthropic)
* [AWS](/v0.2/docs/integrations/platforms/aws)
* [Google](/v0.2/docs/integrations/platforms/google)
* [Microsoft](/v0.2/docs/integrations/platforms/microsoft)
* [OpenAI](/v0.2/docs/integrations/platforms/openai)
* [Components](/v0.2/docs/integrations/components)
* [Chat models](/v0.2/docs/integrations/chat/)
* [LLMs](/v0.2/docs/integrations/llms/)
* [Embedding models](/v0.2/docs/integrations/text_embedding)
* [Document loaders](/v0.2/docs/integrations/document_loaders)
* [File Loaders](/v0.2/docs/integrations/document_loaders/file_loaders/)
* [Web Loaders](/v0.2/docs/integrations/document_loaders/web_loaders/)
* [Cheerio](/v0.2/docs/integrations/document_loaders/web_loaders/web_cheerio)
* [Puppeteer](/v0.2/docs/integrations/document_loaders/web_loaders/web_puppeteer)
* [Playwright](/v0.2/docs/integrations/document_loaders/web_loaders/web_playwright)
* [Apify Dataset](/v0.2/docs/integrations/document_loaders/web_loaders/apify_dataset)
* [AssemblyAI Audio Transcript](/v0.2/docs/integrations/document_loaders/web_loaders/assemblyai_audio_transcription)
* [Azure Blob Storage Container](/v0.2/docs/integrations/document_loaders/web_loaders/azure_blob_storage_container)
* [Azure Blob Storage File](/v0.2/docs/integrations/document_loaders/web_loaders/azure_blob_storage_file)
* [Browserbase Loader](/v0.2/docs/integrations/document_loaders/web_loaders/browserbase)
* [College Confidential](/v0.2/docs/integrations/document_loaders/web_loaders/college_confidential)
* [Confluence](/v0.2/docs/integrations/document_loaders/web_loaders/confluence)
* [Couchbase](/v0.2/docs/integrations/document_loaders/web_loaders/couchbase)
* [Figma](/v0.2/docs/integrations/document_loaders/web_loaders/figma)
* [Firecrawl](/v0.2/docs/integrations/document_loaders/web_loaders/firecrawl)
* [GitBook](/v0.2/docs/integrations/document_loaders/web_loaders/gitbook)
* [GitHub](/v0.2/docs/integrations/document_loaders/web_loaders/github)
* [Hacker News](/v0.2/docs/integrations/document_loaders/web_loaders/hn)
* [IMSDB](/v0.2/docs/integrations/document_loaders/web_loaders/imsdb)
* [Notion API](/v0.2/docs/integrations/document_loaders/web_loaders/notionapi)
* [PDF files](/v0.2/docs/integrations/document_loaders/web_loaders/pdf)
* [Recursive URL Loader](/v0.2/docs/integrations/document_loaders/web_loaders/recursive_url_loader)
* [S3 File](/v0.2/docs/integrations/document_loaders/web_loaders/s3)
* [SearchApi Loader](/v0.2/docs/integrations/document_loaders/web_loaders/searchapi)
* [SerpAPI Loader](/v0.2/docs/integrations/document_loaders/web_loaders/serpapi)
* [Sitemap Loader](/v0.2/docs/integrations/document_loaders/web_loaders/sitemap)
* [Sonix Audio](/v0.2/docs/integrations/document_loaders/web_loaders/sonix_audio_transcription)
* [Blockchain Data](/v0.2/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain)
* [YouTube transcripts](/v0.2/docs/integrations/document_loaders/web_loaders/youtube)
* [Document transformers](/v0.2/docs/integrations/document_transformers)
* [Vector stores](/v0.2/docs/integrations/vectorstores)
* [Retrievers](/v0.2/docs/integrations/retrievers)
* [Tools](/v0.2/docs/integrations/tools)
* [Toolkits](/v0.2/docs/integrations/toolkits)
* [Stores](/v0.2/docs/integrations/stores/)
* [](/v0.2/)
* [Components](/v0.2/docs/integrations/components)
* [Document loaders](/v0.2/docs/integrations/document_loaders)
* [Web Loaders](/v0.2/docs/integrations/document_loaders/web_loaders/)
* AssemblyAI Audio Transcript
AssemblyAI Audio Transcript
===========================
This covers how to load audio (and video) transcripts as document objects from a file using the [AssemblyAI API](https://www.assemblyai.com/docs/api-reference/transcript).
Usage
---------------------------------------
First, you'll need to install the official AssemblyAI package:
```bash
npm install assemblyai
# or
yarn add assemblyai
# or
pnpm add assemblyai
```
To use the loaders, you need an [AssemblyAI account](https://www.assemblyai.com/dashboard/signup) and your [AssemblyAI API key from the dashboard](https://www.assemblyai.com/app/account).
Then, configure the API key as the `ASSEMBLYAI_API_KEY` environment variable or the `apiKey` options parameter.
```typescript
import {
  AudioTranscriptLoader,
  // AudioTranscriptParagraphsLoader,
  // AudioTranscriptSentencesLoader
} from "langchain/document_loaders/web/assemblyai";

// You can also use a local file path and the loader will upload it to AssemblyAI for you.
const audioUrl = "https://storage.googleapis.com/aai-docs-samples/espn.m4a";

// Use `AudioTranscriptParagraphsLoader` or `AudioTranscriptSentencesLoader`
// for splitting the transcript into paragraphs or sentences.
const loader = new AudioTranscriptLoader(
  {
    audio: audioUrl,
    // any other parameters as documented here:
    // https://www.assemblyai.com/docs/api-reference/transcript#create-a-transcript
  },
  {
    apiKey: "<ASSEMBLYAI_API_KEY>", // or set the `ASSEMBLYAI_API_KEY` env variable
  }
);
const docs = await loader.load();
console.dir(docs, { depth: Infinity });
```
#### API Reference:
* [AudioTranscriptLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_assemblyai.AudioTranscriptLoader.html) from `langchain/document_loaders/web/assemblyai`
> **info**
>
> * You can use the `AudioTranscriptParagraphsLoader` or `AudioTranscriptSentencesLoader` to split the transcript into paragraphs or sentences (a short sketch follows this note).
> * The `audio` parameter can be a URL, a local file path, a buffer, or a stream.
> * The `audio` can also be a video file. See the [list of supported file types in the FAQ doc](https://www.assemblyai.com/docs/concepts/faq#:~:text=file%20types%20are%20supported).
> * If you don't pass in the `apiKey` option, the loader will use the `ASSEMBLYAI_API_KEY` environment variable.
> * You can add more properties in addition to `audio`. Find the full list of request parameters in the [AssemblyAI API docs](https://www.assemblyai.com/docs/api-reference/transcript#create-a-transcript).
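As a short sketch of the first point, assuming the paragraphs loader accepts the same parameters as `AudioTranscriptLoader` (as the example above suggests):

```typescript
import { AudioTranscriptParagraphsLoader } from "langchain/document_loaders/web/assemblyai";

// Splits the transcript so that each paragraph becomes its own document.
const loader = new AudioTranscriptParagraphsLoader(
  { audio: "https://storage.googleapis.com/aai-docs-samples/espn.m4a" },
  { apiKey: "<ASSEMBLYAI_API_KEY>" } // or set the `ASSEMBLYAI_API_KEY` env variable
);
const docs = await loader.load();
console.log(docs.length); // number of paragraph documents
```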
You can also use the `AudioSubtitleLoader` to get `srt` or `vtt` subtitles as a document.
```typescript
import { AudioSubtitleLoader } from "langchain/document_loaders/web/assemblyai";

// You can also use a local file path and the loader will upload it to AssemblyAI for you.
const audioUrl = "https://storage.googleapis.com/aai-docs-samples/espn.m4a";

const loader = new AudioSubtitleLoader(
  {
    audio: audioUrl,
    // any other parameters as documented here:
    // https://www.assemblyai.com/docs/api-reference/transcript#create-a-transcript
  },
  "srt", // srt or vtt
  {
    apiKey: "<ASSEMBLYAI_API_KEY>", // or set the `ASSEMBLYAI_API_KEY` env variable
  }
);
const docs = await loader.load();
console.dir(docs, { depth: Infinity });
```
#### API Reference:
* [AudioSubtitleLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_assemblyai.AudioSubtitleLoader.html) from `langchain/document_loaders/web/assemblyai`
Couchbase
=========
[Couchbase](http://couchbase.com/) is an award-winning distributed NoSQL cloud database that delivers unmatched versatility, performance, scalability, and financial value for all of your cloud, mobile, AI, and edge computing applications.
This guide shows how to load documents from a Couchbase database.
Installation
------------
```bash
npm install couchbase
# or
yarn add couchbase
# or
pnpm add couchbase
```
Usage
---------------------------------------
### Querying for Documents from Couchbase
For more details on connecting to a Couchbase cluster, please check the [Node.js SDK documentation](https://docs.couchbase.com/nodejs-sdk/current/howtos/managing-connections.html#connection-strings).
For help with querying for documents using SQL++ (SQL for JSON), please check the [documentation](https://docs.couchbase.com/server/current/n1ql/n1ql-language-reference/index.html).
```typescript
import { CouchbaseDocumentLoader } from "langchain/document_loaders/web/couchbase";
import { Cluster } from "couchbase";

const connectionString = "couchbase://localhost"; // valid couchbase connection string
const dbUsername = "Administrator"; // valid database user with read access to the bucket being queried
const dbPassword = "Password"; // password for the database user

// query is a valid SQL++ query
const query = `
  SELECT h.* FROM \`travel-sample\`.inventory.hotel h
  WHERE h.country = 'United States'
  LIMIT 1
`;
```
### Connect to Couchbase Cluster
```typescript
const couchbaseClient = await Cluster.connect(connectionString, {
  username: dbUsername,
  password: dbPassword,
  configProfile: "wanDevelopment",
});
```
### Create the Loader
```typescript
const loader = new CouchbaseDocumentLoader(
  couchbaseClient, // The connected couchbase cluster client
  query // A valid SQL++ query which will return the required data
);
```
### Load Documents
You can fetch the documents by calling the `load` method of the loader, which returns an array containing all of the documents. If you want to avoid this blocking call, you can call the `lazyLoad` method instead, which returns an async iterator.
```typescript
// using the load method
const docs = await loader.load();
console.log(docs);
```

```typescript
// using the lazyLoad method
for await (const doc of loader.lazyLoad()) {
  console.log(doc);
  break; // break based on required condition
}
```
### Specifying Fields with Content and Metadata
The fields that are part of the Document content can be specified using the `pageContentFields` parameter. The metadata fields for the Document can be specified using the `metadataFields` parameter.
```typescript
const loaderWithSelectedFields = new CouchbaseDocumentLoader(
  couchbaseClient,
  query,
  // pageContentFields
  [
    "address",
    "name",
    "city",
    "phone",
    "country",
    "geo",
    "description",
    "reviews",
  ],
  ["id"] // metadataFields
);
const filteredDocs = await loaderWithSelectedFields.load();
console.log(filteredDocs);
```
Figma
=====
This example goes over how to load data from a Figma file. You will need a Figma access token in order to get started.
```typescript
import { FigmaFileLoader } from "langchain/document_loaders/web/figma";

const loader = new FigmaFileLoader({
  accessToken: "FIGMA_ACCESS_TOKEN", // or load it from process.env.FIGMA_ACCESS_TOKEN
  nodeIds: ["id1", "id2", "id3"],
  fileKey: "key",
});
const docs = await loader.load();
console.log({ docs });
```
#### API Reference:
* [FigmaFileLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_figma.FigmaFileLoader.html) from `langchain/document_loaders/web/figma`
You can find your Figma file's key and node ids by opening the file in your browser and extracting them from the URL (see the hypothetical helper sketched below):
https://www.figma.com/file/<YOUR FILE KEY HERE>/LangChainJS-Test?type=whiteboard&node-id=<YOUR NODE ID HERE>&t=e6lqWkKecuYQRyRg-0
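A minimal sketch of that extraction; this `parseFigmaUrl` helper is purely illustrative and not part of LangChain:

```typescript
// Hypothetical helper: pull the file key and node id out of a Figma URL
// shaped like the example above.
function parseFigmaUrl(figmaUrl: string): { fileKey: string; nodeId?: string } {
  const url = new URL(figmaUrl);
  // The path looks like /file/<fileKey>/<fileName>
  const [, , fileKey] = url.pathname.split("/");
  const nodeId = url.searchParams.get("node-id") ?? undefined;
  return { fileKey, nodeId };
}

const { fileKey, nodeId } = parseFigmaUrl(
  "https://www.figma.com/file/abc123/LangChainJS-Test?type=whiteboard&node-id=0-1"
);
console.log(fileKey, nodeId); // "abc123" "0-1"
```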
Firecrawl
=========
This guide shows how to use [Firecrawl](https://firecrawl.dev) with LangChain to load web data into an LLM-ready format.
Overview
------------------------------------------------
[FireCrawl](https://firecrawl.dev) crawls and converts any website into LLM-ready data. It crawls all accessible subpages and gives you clean markdown and metadata for each one. No sitemap required.
FireCrawl handles complex tasks such as reverse proxies, caching, rate limits, and content blocked by JavaScript. Built by the [mendable.ai](https://mendable.ai) team.
This guide shows how to scrape and crawl entire websites and load the results using the `FireCrawlLoader` in LangChain.
Setup
---------------------------------------
Sign up and get your free [FireCrawl API key](https://firecrawl.dev) to start. FireCrawl offers 300 free credits to get you started, and it's [open-source](https://github.com/mendableai/firecrawl) in case you want to self-host.
Usage
---------------------------------------
Here's an example of how to use the `FireCrawlLoader` to load data from a website:
Firecrawl offers 2 modes: `scrape` and `crawl`. In `scrape` mode, Firecrawl will only scrape the page you provide. In `crawl` mode, Firecrawl will crawl the entire website (a crawl-mode sketch appears after the example below).
```bash
npm install @mendable/firecrawl-js
# or
yarn add @mendable/firecrawl-js
# or
pnpm add @mendable/firecrawl-js
```
```typescript
import { FireCrawlLoader } from "langchain/document_loaders/web/firecrawl";

const loader = new FireCrawlLoader({
  url: "https://firecrawl.dev", // The URL to scrape
  apiKey: process.env.FIRECRAWL_API_KEY, // Optional, defaults to `FIRECRAWL_API_KEY` in your env.
  mode: "scrape", // The mode to run the crawler in. Can be "scrape" for single urls or "crawl" for all accessible subpages
  params: {
    // optional parameters based on Firecrawl API docs
    // For API documentation, visit https://docs.firecrawl.dev
  },
});
const docs = await loader.load();
```
#### API Reference:
* [FireCrawlLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_firecrawl.FireCrawlLoader.html) from `langchain/document_loaders/web/firecrawl`
### Additional Parameters
For `params` you can pass any of the params according to the [Firecrawl documentation](https://docs.firecrawl.dev).
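For instance, here is a minimal crawl-mode sketch. Note that `crawlerOptions.limit` is an assumption based on the Firecrawl API docs rather than something this page documents, so check [https://docs.firecrawl.dev](https://docs.firecrawl.dev) for the current parameter names:

```typescript
import { FireCrawlLoader } from "langchain/document_loaders/web/firecrawl";

const loader = new FireCrawlLoader({
  url: "https://firecrawl.dev", // root URL; accessible subpages are crawled too
  apiKey: process.env.FIRECRAWL_API_KEY,
  mode: "crawl", // crawl the whole site instead of scraping a single page
  params: {
    crawlerOptions: { limit: 5 }, // assumption: caps the number of crawled pages
  },
});
const docs = await loader.load(); // documents from the crawled pages
```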
GitBook
=======
This example goes over how to load data from any GitBook, using Cheerio. One document will be created for each page.
Setup
---------------------------------------
```bash
npm install cheerio
# or
yarn add cheerio
# or
pnpm add cheerio
```
Load from a single GitBook page
---------------------------------------------------------------------------------------------------------------
```typescript
import { GitbookLoader } from "langchain/document_loaders/web/gitbook";

const loader = new GitbookLoader(
  "https://docs.gitbook.com/product-tour/navigation"
);
const docs = await loader.load();
```
Load from all paths in a given GitBook
------------------------------------------------------------------------------------------------------------------------------------------
For this to work, the GitbookLoader needs to be initialized with the root path ([https://docs.gitbook.com](https://docs.gitbook.com) in this example) and have `shouldLoadAllPaths` set to `true`.
```typescript
import { GitbookLoader } from "langchain/document_loaders/web/gitbook";

const loader = new GitbookLoader("https://docs.gitbook.com", {
  shouldLoadAllPaths: true,
});
const docs = await loader.load();
```
IMSDB
=====
This example goes over how to load data from the Internet Movie Script Database (IMSDb) website, using Cheerio. One document will be created for each page.
Setup
---------------------------------------
```bash
npm install cheerio
# or
yarn add cheerio
# or
pnpm add cheerio
```
Usage
---------------------------------------
```typescript
import { IMSDBLoader } from "langchain/document_loaders/web/imsdb";

const loader = new IMSDBLoader("https://imsdb.com/scripts/BlacKkKlansman.html");
const docs = await loader.load();
```
PDF files
=========
You can use this version of the popular PDFLoader in web environments. By default, one document will be created for each page in the PDF file; you can change this behavior by setting the `splitPages` option to `false` (see the sketch after the usage example below).
Setup
---------------------------------------
```bash
npm install pdf-parse
# or
yarn add pdf-parse
# or
pnpm add pdf-parse
```
Usage
---------------------------------------
```typescript
import { WebPDFLoader } from "langchain/document_loaders/web/pdf";

const blob = new Blob(); // e.g. from a file input

const loader = new WebPDFLoader(blob);
const docs = await loader.load();
console.log({ docs });
```
#### API Reference:
* [WebPDFLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_pdf.WebPDFLoader.html) from `langchain/document_loaders/web/pdf`
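For example, a minimal sketch with `splitPages` disabled, so the whole file comes back as a single document:

```typescript
import { WebPDFLoader } from "langchain/document_loaders/web/pdf";

const blob = new Blob(); // e.g. from a file input

// With splitPages set to false, the loader returns a single Document for
// the entire PDF instead of one per page.
const loader = new WebPDFLoader(blob, {
  splitPages: false,
});
const docs = await loader.load();
```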
Usage, custom `pdfjs` build
---------------------------------------------------------------------------------------------------
By default we use the `pdfjs` build bundled with `pdf-parse`, which is compatible with most environments, including Node.js and modern browsers. If you want to use a more recent version of `pdfjs-dist` or if you want to use a custom build of `pdfjs-dist`, you can do so by providing a custom `pdfjs` function that returns a promise that resolves to the `PDFJS` object.
In the following example we use the "legacy" (see [pdfjs docs](https://github.com/mozilla/pdf.js/wiki/Frequently-Asked-Questions#which-browsersenvironments-are-supported)) build of `pdfjs-dist`, which includes several polyfills not included in the default build.
```bash
npm install pdfjs-dist
# or
yarn add pdfjs-dist
# or
pnpm add pdfjs-dist
```
```typescript
import { WebPDFLoader } from "langchain/document_loaders/web/pdf";

const blob = new Blob(); // e.g. from a file input

const loader = new WebPDFLoader(blob, {
  // you may need to add `.then(m => m.default)` to the end of the import
  pdfjs: () => import("pdfjs-dist/legacy/build/pdf.js"),
});
```
Eliminating extra spaces
------------------------------------------------------------------------------------------------
PDFs come in many varieties, which makes reading them a challenge. The loader parses individual text elements and joins them together with a space by default, but if you are seeing excessive spaces, this may not be the desired behavior. In that case, you can override the separator with an empty string like this:
```typescript
import { WebPDFLoader } from "langchain/document_loaders/web/pdf";

const blob = new Blob(); // e.g. from a file input

const loader = new WebPDFLoader(blob, {
  parsedItemSeparator: "",
});
```
Notion API
==========
This guide will take you through the steps required to load documents from Notion pages and databases using the Notion API.
Overview
--------
Notion is a versatile productivity platform that consolidates note-taking, task management, and data organization tools into one interface.
This document loader can take full Notion pages and databases and turn them into LangChain Documents, ready to be integrated into your projects.
Setup
-----

1. You will first need to install the official Notion client and the [notion-to-md](https://www.npmjs.com/package/notion-to-md) package as peer dependencies:

   ```bash
   npm install @notionhq/client notion-to-md
   # or
   yarn add @notionhq/client notion-to-md
   # or
   pnpm add @notionhq/client notion-to-md
   ```
2. Create a [Notion integration](https://www.notion.so/my-integrations) and securely record the Internal Integration Secret (also known as `NOTION_INTEGRATION_TOKEN`).
3. Add a connection to your new integration on your page or database. To do this, open your Notion page, open the settings menu (`···`) in the top right, scroll down to `Add connections`, and select your new integration.
4. Get the `PAGE_ID` or `DATABASE_ID` for the page or database you want to load.
> The 32-character hex string in the URL path represents the `ID`. For example:
> PAGE\_ID: [https://www.notion.so/skarard/LangChain-Notion-API-`b34ca03f219c4420a6046fc4bdfdf7b4`](https://www.notion.so/skarard/LangChain-Notion-API-b34ca03f219c4420a6046fc4bdfdf7b4)
> DATABASE\_ID: [https://www.notion.so/skarard/`c393f19c3903440da0d34bf9c6c12ff2`?v=9c70a0f4e174498aa0f9021e0a9d52de](https://www.notion.so/skarard/c393f19c3903440da0d34bf9c6c12ff2?v=9c70a0f4e174498aa0f9021e0a9d52de)
> REGEX: `/(?<!=)[0-9a-f]{32}/`
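For example, a quick sketch of extracting the ID from a page URL with the regex above:

```typescript
// Extract the 32-character hex ID from a Notion page or database URL
// using the regex shown above (the lookbehind skips the `?v=` view ID).
const url =
  "https://www.notion.so/skarard/LangChain-Notion-API-b34ca03f219c4420a6046fc4bdfdf7b4";

const id = url.match(/(?<!=)[0-9a-f]{32}/)?.[0];

console.log(id); // "b34ca03f219c4420a6046fc4bdfdf7b4"
```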
Example Usage
-------------
```typescript
import { NotionAPILoader } from "langchain/document_loaders/web/notionapi";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

// Loading a page (including child pages, all as separate documents)
const pageLoader = new NotionAPILoader({
  clientOptions: {
    auth: "<NOTION_INTEGRATION_TOKEN>",
  },
  id: "<PAGE_ID>",
  type: "page",
});

const splitter = new RecursiveCharacterTextSplitter();

// A page's contents are likely to exceed 1000 characters, so the page is
// split into multiple documents (important for vectorization)
const pageDocs = await pageLoader.loadAndSplit(splitter);

console.log({ pageDocs });

// Loading a database (each row is a separate document with all properties as metadata)
const dbLoader = new NotionAPILoader({
  clientOptions: {
    auth: "<NOTION_INTEGRATION_TOKEN>",
  },
  id: "<DATABASE_ID>",
  type: "database",
  onDocumentLoaded: (current, total, currentTitle) => {
    console.log(`Loaded Page: ${currentTitle} (${current}/${total})`);
  },
  callerOptions: {
    maxConcurrency: 64, // Default value
  },
  propertiesAsHeader: true, // Prepends a front matter header of the page properties to the page contents
});

// A database row's contents are likely to be less than 1000 characters,
// so rows are not split into multiple documents
const dbDocs = await dbLoader.load();

console.log({ dbDocs });
```
#### API Reference:
* [NotionAPILoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_notionapi.NotionAPILoader.html) from `langchain/document_loaders/web/notionapi`
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
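Since each database row's Notion properties are attached as document metadata, you can inspect them after loading; a minimal sketch (the exact metadata shape depends on the loader version):

```typescript
// Inspect the metadata attached to each database-row document.
for (const doc of dbDocs) {
  console.log(doc.metadata);
}
```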
Hacker News
===========
This example goes over how to load data from the Hacker News website, using Cheerio. One document will be created for each page.
Setup
-----

```bash
npm install cheerio
# or
yarn add cheerio
# or
pnpm add cheerio
```
Usage
-----

```typescript
import { HNLoader } from "langchain/document_loaders/web/hn";

const loader = new HNLoader("https://news.ycombinator.com/item?id=34817881");

const docs = await loader.load();
```
GitHub
======
This example goes over how to load data from a GitHub repository. You can set the `GITHUB_ACCESS_TOKEN` environment variable to a GitHub access token to increase the rate limit and access private repositories.
Setup
-----

The GitHub loader requires the [ignore npm package](https://www.npmjs.com/package/ignore) as a peer dependency. Install it like this:

```bash
npm install ignore
# or
yarn add ignore
# or
pnpm add ignore
```
Usage
-----

```typescript
import { GithubRepoLoader } from "langchain/document_loaders/web/github";

export const run = async () => {
  const loader = new GithubRepoLoader(
    "https://github.com/langchain-ai/langchainjs",
    {
      branch: "main",
      recursive: false,
      unknown: "warn",
      maxConcurrency: 5, // Defaults to 2
    }
  );
  const docs = await loader.load();
  console.log({ docs });
};
```
#### API Reference:
* [GithubRepoLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_github.GithubRepoLoader.html) from `langchain/document_loaders/web/github`
The loader will ignore binary files like images.
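As a sketch of the token setup described in the introduction, you can rely on the `GITHUB_ACCESS_TOKEN` environment variable or pass a token explicitly via the `accessToken` option (also shown in the GitHub Enterprise example below); for instance:

```typescript
import { GithubRepoLoader } from "langchain/document_loaders/web/github";

// The loader reads GITHUB_ACCESS_TOKEN from the environment by default;
// passing it explicitly is shown here for illustration.
const loader = new GithubRepoLoader(
  "https://github.com/langchain-ai/langchainjs",
  {
    branch: "main",
    recursive: false,
    unknown: "warn",
    accessToken: process.env.GITHUB_ACCESS_TOKEN,
  }
);

const docs = await loader.load();
```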
### Using .gitignore Syntax
To ignore specific files, you can pass in an `ignorePaths` array into the constructor:
```typescript
import { GithubRepoLoader } from "langchain/document_loaders/web/github";

export const run = async () => {
  const loader = new GithubRepoLoader(
    "https://github.com/langchain-ai/langchainjs",
    {
      branch: "main",
      recursive: false,
      unknown: "warn",
      ignorePaths: ["*.md"],
    }
  );
  const docs = await loader.load();
  console.log({ docs }); // Will not include any .md files
};
```
#### API Reference:
* [GithubRepoLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_github.GithubRepoLoader.html) from `langchain/document_loaders/web/github`
### Using a Different GitHub Instance
You may want to target a different GitHub instance than `github.com`, e.g. if you have a GitHub Enterprise instance for your company. For this you need two additional parameters:
* `baseUrl` - the base URL of your GitHub instance, so that the repository URL matches `<baseUrl>/<owner>/<repo>/...`
* `apiUrl` - the URL of the API endpoint of your GitHub instance
```typescript
import { GithubRepoLoader } from "langchain/document_loaders/web/github";

export const run = async () => {
  const loader = new GithubRepoLoader(
    "https://github.your.company/org/repo-name",
    {
      baseUrl: "https://github.your.company",
      apiUrl: "https://github.your.company/api/v3",
      accessToken: "ghp_A1B2C3D4E5F6a7b8c9d0",
      branch: "main",
      recursive: true,
      unknown: "warn",
    }
  );
  const docs = await loader.load();
  console.log({ docs });
};
```
#### API Reference:
* [GithubRepoLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_github.GithubRepoLoader.html) from `langchain/document_loaders/web/github`
### Dealing with Submodules
In case your repository has submodules, you have to decide if the loader should follow them or not. You can control this with the boolean `processSubmodules` parameter. By default, submodules are not processed. Note that processing submodules works only in conjunction with setting the `recursive` parameter to true.
```typescript
import { GithubRepoLoader } from "langchain/document_loaders/web/github";

export const run = async () => {
  const loader = new GithubRepoLoader(
    "https://github.com/langchain-ai/langchainjs",
    {
      branch: "main",
      recursive: true,
      processSubmodules: true,
      unknown: "warn",
    }
  );
  const docs = await loader.load();
  console.log({ docs });
};
```
#### API Reference:
* [GithubRepoLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_github.GithubRepoLoader.html) from `langchain/document_loaders/web/github`
Note that the loader will not follow submodules hosted on a different GitHub instance than the one of the current repository.
### Stream large repository
For situations where you need to process large repositories in a memory-efficient manner, you can use the `loadAsStream` method to asynchronously stream documents from the entire GitHub repository.
```typescript
import { GithubRepoLoader } from "langchain/document_loaders/web/github";

export const run = async () => {
  const loader = new GithubRepoLoader(
    "https://github.com/langchain-ai/langchainjs",
    {
      branch: "main",
      recursive: false,
      unknown: "warn",
      maxConcurrency: 3, // Defaults to 2
    }
  );
  const docs = [];
  for await (const doc of loader.loadAsStream()) {
    docs.push(doc);
  }
  console.log({ docs });
};
```
#### API Reference:
* [GithubRepoLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_github.GithubRepoLoader.html) from `langchain/document_loaders/web/github`
Recursive URL Loader
====================
When loading content from a website, we may want to load all the URLs linked from a page.
For example, let's look at the [LangChain.js introduction](/v0.2/docs/introduction) docs.
This has many interesting child pages that we may want to load, split, and later retrieve in bulk.
The challenge is traversing the tree of child pages and assembling a list!
We do this using the `RecursiveUrlLoader`.
This also gives us the flexibility to exclude some children, customize the extractor, and more.
Setup
-----

To get started, you'll need to install the [`jsdom`](https://www.npmjs.com/package/jsdom) package:

```bash
npm i jsdom
# or
yarn add jsdom
# or
pnpm add jsdom
```
We also suggest adding a package like [`html-to-text`](https://www.npmjs.com/package/html-to-text) or [`@mozilla/readability`](https://www.npmjs.com/package/@mozilla/readability) for extracting the raw text from the page.
```bash
npm i html-to-text
# or
yarn add html-to-text
# or
pnpm add html-to-text
```
Usage
-----

```typescript
import { compile } from "html-to-text";
import { RecursiveUrlLoader } from "langchain/document_loaders/web/recursive_url";

const url = "/docs/introduction";

const compiledConvert = compile({ wordwrap: 130 }); // returns (text: string) => string

const loader = new RecursiveUrlLoader(url, {
  extractor: compiledConvert,
  maxDepth: 1,
  excludeDirs: ["/docs/api/"],
});

const docs = await loader.load();
```
Options
-------

```typescript
interface Options {
  // Webpage directories to exclude.
  excludeDirs?: string[];
  // A function to extract the text of the document from the webpage.
  // By default, the page is returned as-is; it is recommended to use a
  // tool like html-to-text to extract the text.
  extractor?: (text: string) => string;
  // The maximum depth to crawl. Defaults to 2. If you need to crawl the
  // whole website, set it to a sufficiently large number.
  maxDepth?: number;
  // The timeout for each request, in milliseconds. Defaults to 10000 (10 seconds).
  timeout?: number;
  // Whether to prevent crawling outside the root URL. Defaults to true.
  preventOutside?: boolean;
  // Options for the AsyncCaller, e.g. setting max concurrency (default is 64).
  callerOptions?: AsyncCallerConstructorParams;
}
```
However, since it's hard to perform a perfect filter, you may still see some irrelevant documents in the results. If needed, you can filter the returned documents yourself; most of the time, the returned results are good enough.
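For instance, a minimal sketch of such a post-filter over the `docs` loaded in the usage example, assuming each document's metadata carries its URL under the `source` key:

```typescript
// Drop any loaded document whose source URL points into a section
// we don't care about (the "/changelog/" path here is hypothetical).
const filteredDocs = docs.filter(
  (doc) => !String(doc.metadata.source ?? "").includes("/changelog/")
);

console.log(filteredDocs.length);
```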
Sonix Audio
===========
Compatibility
Only available on Node.js.
This covers how to load document objects from an audio file using the [Sonix](https://sonix.ai/) API.
Setup
-----

To run this loader you will need to create an account on [sonix.ai](https://sonix.ai/) and obtain an auth key from the [API page](https://my.sonix.ai/api).
You'll also need to install the `sonix-speech-recognition` library:
```bash
npm install sonix-speech-recognition
# or
yarn add sonix-speech-recognition
# or
pnpm add sonix-speech-recognition
```
Usage
-----

Once the auth key is configured, you can use the loader to create transcriptions and then convert them into a Document. In the `request` parameter, you can either specify a local file by setting `audioFilePath` or a remote file using `audioUrl`. You will also need to specify the audio language. See the list of supported languages [here](https://sonix.ai/docs/api#languages).
```typescript
import { SonixAudioTranscriptionLoader } from "langchain/document_loaders/web/sonix_audio";

const loader = new SonixAudioTranscriptionLoader({
  sonixAuthKey: "SONIX_AUTH_KEY",
  request: {
    audioFilePath: "LOCAL_AUDIO_FILE_PATH",
    fileName: "FILE_NAME",
    language: "en",
  },
});

const docs = await loader.load();

console.log(docs);
```
#### API Reference:
* [SonixAudioTranscriptionLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_sonix_audio.SonixAudioTranscriptionLoader.html) from `langchain/document_loaders/web/sonix_audio`
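For the remote-file case mentioned above, a sketch assuming `audioUrl` simply takes the place of `audioFilePath` in the request:

```typescript
import { SonixAudioTranscriptionLoader } from "langchain/document_loaders/web/sonix_audio";

// Remote-file variant: pass audioUrl instead of audioFilePath.
// The URL below is a hypothetical placeholder.
const loader = new SonixAudioTranscriptionLoader({
  sonixAuthKey: "SONIX_AUTH_KEY",
  request: {
    audioUrl: "https://example.com/audio.mp3",
    fileName: "FILE_NAME",
    language: "en",
  },
});

const docs = await loader.load();

console.log(docs);
```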
SerpAPI Loader
==============
This guide shows how to use SerpAPI with LangChain to load web search results.
Overview
--------
[SerpAPI](https://serpapi.com/) is a real-time API that provides access to search results from various search engines. It is commonly used for tasks like competitor analysis and rank tracking. It empowers businesses to scrape, extract, and make sense of data from all search engines' result pages.
This guide shows how to load web search results using the `SerpAPILoader` in LangChain. The `SerpAPILoader` simplifies the process of loading and processing web search results from SerpAPI.
Setup
-----
You'll need to sign up and retrieve your [SerpAPI API key](https://serpapi.com/dashboard).
Usage
-----
Here's an example of how to use the `SerpAPILoader`:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { SerpAPILoader } from "langchain/document_loaders/web/serpapi";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";

// Initialize the necessary components
const llm = new ChatOpenAI();
const embeddings = new OpenAIEmbeddings();
const apiKey = "Your SerpAPI API key";

// Define your question and query
const question = "Your question here";
const query = "Your query here";

// Use SerpAPILoader to load web search results
const loader = new SerpAPILoader({ q: query, apiKey });
const docs = await loader.load();

// Use MemoryVectorStore to store the loaded documents in memory
const vectorStore = await MemoryVectorStore.fromDocuments(docs, embeddings);

const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user's questions based on the below context:\n\n{context}",
  ],
  ["human", "{input}"],
]);

const combineDocsChain = await createStuffDocumentsChain({
  llm,
  prompt: questionAnsweringPrompt,
});

const chain = await createRetrievalChain({
  retriever: vectorStore.asRetriever(),
  combineDocsChain,
});

const res = await chain.invoke({
  input: question,
});

console.log(res.answer);
```
#### API Reference:
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [SerpAPILoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_serpapi.SerpAPILoader.html) from `langchain/document_loaders/web/serpapi`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [createStuffDocumentsChain](https://v02.api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) from `langchain/chains/combine_documents`
* [createRetrievalChain](https://v02.api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) from `langchain/chains/retrieval`
In this example, the `SerpAPILoader` is used to load web search results, which are then stored in memory using `MemoryVectorStore`. A retrieval chain is then used to retrieve the most relevant documents from the memory and answer the question based on these documents. This demonstrates how the `SerpAPILoader` can streamline the process of loading and processing web search results.
SearchApi Loader
================
This guide shows how to use SearchApi with LangChain to load web search results.
Overview
--------

[SearchApi](https://www.searchapi.io/) is a real-time API that gives developers access to results from a variety of search engines, including [Google Search](https://www.searchapi.io/docs/google), [Google News](https://www.searchapi.io/docs/google-news), [Google Scholar](https://www.searchapi.io/docs/google-scholar), [YouTube Transcripts](https://www.searchapi.io/docs/youtube-transcripts), and any other engine listed in its documentation. The API lets developers and businesses scrape and extract meaningful data directly from the result pages of these search engines, providing valuable insights for a range of use cases.

The `SearchApiLoader` simplifies the process of loading and processing these web search results in LangChain.
Setup
-----
You'll need to sign up and retrieve your [SearchApi API key](https://www.searchapi.io/).
Usage
-----
Here's an example of how to use the `SearchApiLoader`:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm: npm install @langchain/openai
* Yarn: yarn add @langchain/openai
* pnpm: pnpm add @langchain/openai
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { TokenTextSplitter } from "@langchain/textsplitters";
import { SearchApiLoader } from "langchain/document_loaders/web/searchapi";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";

// Initialize the necessary components
const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
});
const embeddings = new OpenAIEmbeddings();
const apiKey = "Your SearchApi API key";

// Define your question and query
const question = "Your question here";
const query = "Your query here";

// Use SearchApiLoader to load web search results
const loader = new SearchApiLoader({ q: query, apiKey, engine: "google" });
const docs = await loader.load();

const textSplitter = new TokenTextSplitter({
  chunkSize: 800,
  chunkOverlap: 100,
});

const splitDocs = await textSplitter.splitDocuments(docs);

// Use MemoryVectorStore to store the loaded documents in memory
const vectorStore = await MemoryVectorStore.fromDocuments(
  splitDocs,
  embeddings
);

const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "Answer the user's questions based on the below context:\n\n{context}",
  ],
  ["human", "{input}"],
]);

const combineDocsChain = await createStuffDocumentsChain({
  llm,
  prompt: questionAnsweringPrompt,
});

const chain = await createRetrievalChain({
  retriever: vectorStore.asRetriever(),
  combineDocsChain,
});

const res = await chain.invoke({
  input: question,
});

console.log(res.answer);
#### API Reference:
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [TokenTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.TokenTextSplitter.html) from `@langchain/textsplitters`
* [SearchApiLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_searchapi.SearchApiLoader.html) from `langchain/document_loaders/web/searchapi`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [createStuffDocumentsChain](https://v02.api.js.langchain.com/functions/langchain_chains_combine_documents.createStuffDocumentsChain.html) from `langchain/chains/combine_documents`
* [createRetrievalChain](https://v02.api.js.langchain.com/functions/langchain_chains_retrieval.createRetrievalChain.html) from `langchain/chains/retrieval`
In this example, the `SearchApiLoader` loads web search results, which are split and stored in memory using `MemoryVectorStore`. A retrieval chain then fetches the most relevant documents from memory and answers the question based on them. This demonstrates how the `SearchApiLoader` can streamline loading and processing web search results.
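Because the chain returned by `createRetrievalChain` is a standard LCEL `Runnable`, you can also stream the answer rather than awaiting the full response. A minimal sketch follows; the exact shape of streamed chunks can vary across versions, so treat the `answer` field check as an assumption:

// Stream the chain's output; chunks arrive as partial objects.
const stream = await chain.stream({ input: question });
for await (const chunk of stream) {
  // Only some chunks carry a piece of the answer (assumption: see note above).
  if (chunk.answer !== undefined) {
    process.stdout.write(chunk.answer);
  }
}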
https://js.langchain.com/v0.2/docs/integrations/document_loaders/web_loaders/sitemap
Sitemap Loader
==============
This notebook goes over how to use the [`SitemapLoader`](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_sitemap.SitemapLoader.html) class to load sitemaps into `Document`s.
Setup
-----
First, we need to install the `langchain` package:
* npm: npm install --save langchain
* Yarn: yarn add langchain
* pnpm: pnpm add langchain
The URL you pass in must either contain the `.xml` path to the sitemap, or a default `/sitemap.xml` will be appended to it.
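For example, both of the following should resolve to the same sitemap (a minimal sketch of the behavior just described):

import { SitemapLoader } from "langchain/document_loaders/web/sitemap";

// Explicit .xml path to the sitemap:
const explicitLoader = new SitemapLoader("https://www.langchain.com/sitemap.xml");

// Bare URL; "/sitemap.xml" is appended automatically:
const implicitLoader = new SitemapLoader("https://www.langchain.com/");

The full example below loads every page listed in the sitemap: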
import { SitemapLoader } from "langchain/document_loaders/web/sitemap";

const loader = new SitemapLoader("https://www.langchain.com/");

const docs = await loader.load();
console.log(docs.length);
/*
26
*/
console.log(docs[0]);
/*
Document {
  pageContent: '\n' +
    ' \n' +
    '\n' +
    ' \n' +
    ' \n' +
    ' Blog ArticleApr 8, 2022As the internet continues to develop and grow exponentially, jobs related to the industry do too, particularly those that relate to web design and development. [... loaded page text truncated ...]',
  metadata: {
    changefreq: '',
    lastmod: '',
    priority: '',
    source: 'https://www.langchain.com/blog-detail/starting-a-career-in-design'
  }
}
*/
#### API Reference:
* [SitemapLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_sitemap.SitemapLoader.html) from `langchain/document_loaders/web/sitemap`
Or, if you want to only load the sitemap and not the contents of each page from the sitemap, you can use the `parseSitemap` method:
import { SitemapLoader } from "langchain/document_loaders/web/sitemap";

const loader = new SitemapLoader("https://www.langchain.com/");

const sitemap = await loader.parseSitemap();
console.log(sitemap);
/*
[
  {
    loc: 'https://www.langchain.com/blog-detail/starting-a-career-in-design',
    changefreq: '',
    lastmod: '',
    priority: ''
  },
  {
    loc: 'https://www.langchain.com/blog-detail/building-a-navigation-component',
    changefreq: '',
    lastmod: '',
    priority: ''
  },
  {
    loc: 'https://www.langchain.com/blog-detail/guide-to-creating-a-website',
    changefreq: '',
    lastmod: '',
    priority: ''
  },
  {
    loc: 'https://www.langchain.com/page-1/terms-and-conditions',
    changefreq: '',
    lastmod: '',
    priority: ''
  },
  ...42 more items
]
*/
#### API Reference:
* [SitemapLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_sitemap.SitemapLoader.html) from `langchain/document_loaders/web/sitemap`
https://js.langchain.com/v0.2/docs/integrations/document_loaders/web_loaders/sort_xyz_blockchain
Blockchain Data
===============
This example shows how to load blockchain data, including NFT metadata and transactions for a contract address, via the sort.xyz SQL API.
You will need a free Sort API key; visit sort.xyz to obtain one.
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm: npm install @langchain/openai
* Yarn: yarn add @langchain/openai
* pnpm: pnpm add @langchain/openai
import { SortXYZBlockchainLoader } from "langchain/document_loaders/web/sort_xyz_blockchain";
import { OpenAI } from "@langchain/openai";

/**
 * See https://docs.sort.xyz/docs/api-keys to get your free Sort API key.
 * See https://docs.sort.xyz for more information on the available queries.
 * See https://docs.sort.xyz/reference for more information about Sort's REST API.
 */

/**
 * Run the example.
 */
export const run = async () => {
  // Initialize the OpenAI model. Use OPENAI_API_KEY from .env in /examples
  const model = new OpenAI({ temperature: 0.9 });

  const apiKey = "YOUR_SORTXYZ_API_KEY";
  const contractAddress =
    "0x887F3909C14DAbd9e9510128cA6cBb448E932d7f".toLowerCase();

  // Load NFT metadata from the Ethereum blockchain.
  // Hint: to load by a specific ID, see the SQL query example below.
  const nftMetadataLoader = new SortXYZBlockchainLoader({
    apiKey,
    query: {
      type: "NFTMetadata",
      blockchain: "ethereum",
      contractAddress,
    },
  });

  const nftMetadataDocs = await nftMetadataLoader.load();
  const nftPrompt =
    "Describe the character with the attributes from the following json document in a 4 sentence story. ";
  const nftResponse = await model.invoke(
    nftPrompt + JSON.stringify(nftMetadataDocs[0], null, 2)
  );
  console.log(`user > ${nftPrompt}`);
  console.log(`chatgpt > ${nftResponse}`);

  // Load the latest transactions for a contract address from the Ethereum blockchain.
  const latestTransactionsLoader = new SortXYZBlockchainLoader({
    apiKey,
    query: {
      type: "latestTransactions",
      blockchain: "ethereum",
      contractAddress,
    },
  });

  const latestTransactionsDocs = await latestTransactionsLoader.load();
  const latestPrompt =
    "Describe the following json documents in only 4 sentences per document. Include as much detail as possible. ";
  const latestResponse = await model.invoke(
    latestPrompt + JSON.stringify(latestTransactionsDocs[0], null, 2)
  );
  console.log(`\n\nuser > ${latestPrompt}`);
  console.log(`chatgpt > ${latestResponse}`);

  // Load metadata for a specific NFT by using raw SQL and the NFT index.
  // See https://docs.sort.xyz for formulating SQL.
  const sqlQueryLoader = new SortXYZBlockchainLoader({
    apiKey,
    query: `SELECT * FROM ethereum.nft_metadata WHERE contract_address = '${contractAddress}' AND token_id = 1 LIMIT 1`,
  });

  const sqlDocs = await sqlQueryLoader.load();
  const sqlPrompt =
    "Describe the character with the attributes from the following json document in an ad for a new coffee shop. ";
  const sqlResponse = await model.invoke(
    sqlPrompt + JSON.stringify(sqlDocs[0], null, 2)
  );
  console.log(`\n\nuser > ${sqlPrompt}`);
  console.log(`chatgpt > ${sqlResponse}`);
};
#### API Reference:
* [SortXYZBlockchainLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_web_sort_xyz_blockchain.SortXYZBlockchainLoader.html) from `langchain/document_loaders/web/sort_xyz_blockchain`
* [OpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
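If you only want the raw documents without an LLM in the loop, here is a minimal sketch using the same query shape as the full example above:

import { SortXYZBlockchainLoader } from "langchain/document_loaders/web/sort_xyz_blockchain";

const loader = new SortXYZBlockchainLoader({
  apiKey: "YOUR_SORTXYZ_API_KEY",
  query: {
    type: "NFTMetadata",
    blockchain: "ethereum",
    contractAddress: "0x887f3909c14dabd9e9510128ca6cbb448e932d7f",
  },
});

// Each document wraps one row of the query result.
const docs = await loader.load();
console.log(docs.length);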
https://js.langchain.com/v0.2/docs/integrations/document_loaders/file_loaders/directory
Folders with multiple files
===========================
This example goes over how to load data from folders with multiple files. The second argument is a map of file extensions to loader factories. Each file will be passed to the matching loader, and the resulting documents will be concatenated together.
Example folder:
src/document_loaders/example_data/example/
├── example.json
├── example.jsonl
├── example.txt
└── example.csv
Example code:
import { DirectoryLoader } from "langchain/document_loaders/fs/directory";
import {
  JSONLoader,
  JSONLinesLoader,
} from "langchain/document_loaders/fs/json";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new DirectoryLoader(
  "src/document_loaders/example_data/example",
  {
    ".json": (path) => new JSONLoader(path, "/texts"),
    ".jsonl": (path) => new JSONLinesLoader(path, "/html"),
    ".txt": (path) => new TextLoader(path),
    ".csv": (path) => new CSVLoader(path, "text"),
  }
);
const docs = await loader.load();
console.log({ docs });
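The loader also takes optional arguments controlling recursion into subfolders and what to do with files whose extensions have no matching factory. A sketch follows, assuming the `UnknownHandling` export and the optional third (`recursive`) and fourth (`unknown`) constructor parameters; see the API reference to confirm:

import {
  DirectoryLoader,
  UnknownHandling,
} from "langchain/document_loaders/fs/directory";
import { TextLoader } from "langchain/document_loaders/fs/text";

const loader = new DirectoryLoader(
  "src/document_loaders/example_data/example",
  { ".txt": (path) => new TextLoader(path) },
  true, // recurse into subdirectories
  UnknownHandling.Ignore // silently skip files with unmatched extensions
);
const docs = await loader.load();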
https://js.langchain.com/v0.2/docs/integrations/document_loaders/file_loaders/unstructured
Unstructured
============
This example covers how to use Unstructured to load files of many types. Unstructured currently supports loading text files, PowerPoint presentations, HTML, PDFs, images, and more.
Setup
-----
You can run Unstructured locally on your computer using Docker. To do so, you need Docker installed. You can find instructions to install Docker [here](https://docs.docker.com/get-docker/).
docker run -p 8000:8000 -d --rm --name unstructured-api quay.io/unstructured-io/unstructured-api:latest --port 8000 --host 0.0.0.0
Usage
-----
Once Unstructured is running, you can use the following code to load a file from your computer:
import { UnstructuredLoader } from "langchain/document_loaders/fs/unstructured";

const options = {
  apiKey: "MY_API_KEY",
};

const loader = new UnstructuredLoader(
  "src/document_loaders/example_data/notion.md",
  options
);
const docs = await loader.load();
#### API Reference:
* [UnstructuredLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_unstructured.UnstructuredLoader.html) from `langchain/document_loaders/fs/unstructured`
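If you are running the container locally as shown in Setup, you can likely point the loader at it instead of the hosted API. A sketch, assuming the options object accepts an `apiUrl` field (check the API reference to confirm):

import { UnstructuredLoader } from "langchain/document_loaders/fs/unstructured";

const loader = new UnstructuredLoader(
  "src/document_loaders/example_data/notion.md",
  {
    // Point at the local Docker container started in Setup (assumed endpoint path).
    apiUrl: "http://localhost:8000/general/v0/general",
  }
);
const docs = await loader.load();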
Directories
-----------
You can also load all of the files in the directory using [`UnstructuredDirectoryLoader`](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_unstructured.UnstructuredDirectoryLoader.html), which inherits from [`DirectoryLoader`](/v0.2/docs/integrations/document_loaders/file_loaders/directory):
import { UnstructuredDirectoryLoader } from "langchain/document_loaders/fs/unstructured";

const options = {
  apiKey: "MY_API_KEY",
};

const loader = new UnstructuredDirectoryLoader(
  "langchain/src/document_loaders/tests/example_data",
  options
);
const docs = await loader.load();
#### API Reference:
* [UnstructuredDirectoryLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_unstructured.UnstructuredDirectoryLoader.html) from `langchain/document_loaders/fs/unstructured`
https://js.langchain.com/v0.2/docs/integrations/document_loaders/file_loaders/docx
Docx files
==========
This example goes over how to load data from docx files.
Setup
=====
* npm: npm install mammoth
* Yarn: yarn add mammoth
* pnpm: pnpm add mammoth
Usage
=====
import { DocxLoader } from "langchain/document_loaders/fs/docx";

const loader = new DocxLoader(
  "src/document_loaders/tests/example_data/attention.docx"
);
const docs = await loader.load();
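To load every docx file in a folder, `DocxLoader` can be combined with the `DirectoryLoader` covered earlier. A minimal sketch, reusing the same example data directory:

import { DirectoryLoader } from "langchain/document_loaders/fs/directory";
import { DocxLoader } from "langchain/document_loaders/fs/docx";

// Map the .docx extension to a DocxLoader factory.
const loader = new DirectoryLoader("src/document_loaders/tests/example_data", {
  ".docx": (path) => new DocxLoader(path),
});
const docs = await loader.load();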
https://js.langchain.com/v0.2/docs/integrations/document_loaders/file_loaders/chatgpt
ChatGPT files
=============
This example goes over how to load conversations.json from your ChatGPT data export folder. You can request a data export by email by going to: ChatGPT -> (Profile) -> Settings -> Export data -> Confirm export, then checking your email.
Usage, extracting all logs
--------------------------
Example code:
import { ChatGPTLoader } from "langchain/document_loaders/fs/chatgpt";

const loader = new ChatGPTLoader("./example_data/example_conversations.json");
const docs = await loader.load();
console.log(docs);
Usage, extracting a single log
------------------------------
Example code:
import { ChatGPTLoader } from "langchain/document_loaders/fs/chatgpt";

const loader = new ChatGPTLoader(
  "./example_data/example_conversations.json",
  1
);
const docs = await loader.load();
console.log(docs);
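Note: the numeric second argument here presumably limits the load to a single conversation log from the export (the first one); check the `ChatGPTLoader` entry in the API reference for the exact semantics.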
https://js.langchain.com/v0.2/docs/integrations/document_loaders/file_loaders/csv
CSV files
=========
This example goes over how to load data from CSV files. The second argument is the `column` name to extract from the CSV file. One document will be created for each row in the CSV file. When `column` is not specified, each row is converted into key/value pairs, with each pair output on its own line in the document's `pageContent`. When `column` is specified, one document is created for each row, and the value of the specified column is used as the document's `pageContent`.
Setup
-----
```bash
npm install d3-dsv@2
# or
yarn add d3-dsv@2
# or
pnpm add d3-dsv@2
```
Usage, extracting all columns
-----------------------------
Example CSV file:
```csv
id,text
1,This is a sentence.
2,This is another sentence.
```
Example code:
```typescript
import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new CSVLoader("src/document_loaders/example_data/example.csv");

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "line": 1,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "id: 1\ntext: This is a sentence.",
  },
  Document {
    "metadata": {
      "line": 2,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "id: 2\ntext: This is another sentence.",
  },
]
*/
```
Usage, extracting a single column
---------------------------------
Example CSV file:
```csv
id,text
1,This is a sentence.
2,This is another sentence.
```
Example code:
```typescript
import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new CSVLoader(
  "src/document_loaders/example_data/example.csv",
  "text"
);

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "line": 1,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "This is a sentence.",
  },
  Document {
    "metadata": {
      "line": 2,
      "source": "src/document_loaders/example_data/example.csv",
    },
    "pageContent": "This is another sentence.",
  },
]
*/
```
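The loader returns ordinary LangChain `Document` objects, so you can inspect the extracted text and the source row number directly. A minimal sketch, reusing the `docs` variable loaded above:

```typescript
// Minimal sketch: each CSV row became one Document, with the source row
// number recorded in metadata.line (see the commented output above).
for (const doc of docs) {
  console.log(`row ${doc.metadata.line}: ${doc.pageContent}`);
}
```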
EPUB files
==========
This example goes over how to load data from EPUB files. By default, one document will be created for each chapter in the EPUB file. You can change this behavior by setting the `splitChapters` option to `false`.
Setup
=====
```bash
npm install epub2 html-to-text
# or
yarn add epub2 html-to-text
# or
pnpm add epub2 html-to-text
```
Usage, one document per chapter
===============================
```typescript
import { EPubLoader } from "langchain/document_loaders/fs/epub";

const loader = new EPubLoader("src/document_loaders/example_data/example.epub");

const docs = await loader.load();
```
Usage, one document per file
============================
```typescript
import { EPubLoader } from "langchain/document_loaders/fs/epub";

const loader = new EPubLoader(
  "src/document_loaders/example_data/example.epub",
  {
    splitChapters: false,
  }
);

const docs = await loader.load();
```
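Chapter documents (or the single whole-book document) can be long, so a common follow-up is to chunk them before indexing. A minimal sketch, reusing the `RecursiveCharacterTextSplitter` pattern shown in the PDF section later in this document; the chunk sizes here are arbitrary examples:

```typescript
import { EPubLoader } from "langchain/document_loaders/fs/epub";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

// Minimal sketch: chunk the loaded EPUB documents before indexing.
// The chunkSize/chunkOverlap values are arbitrary examples.
const loader = new EPubLoader("src/document_loaders/example_data/example.epub");
const docs = await loader.load();

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const splitDocs = await splitter.splitDocuments(docs);
console.log(splitDocs.length);
```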
JSON files
==========
The JSON loader uses [JSON pointer](https://github.com/janl/node-jsonpointer) to target the keys in your JSON files that you want to extract.
### No JSON pointer example
The simplest way to use it is to specify no JSON pointer. The loader will then load all strings it finds in the JSON object.
Example JSON file:
{ "texts": ["This is a sentence.", "This is another sentence."]}
Example code:
```typescript
import { JSONLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLoader("src/document_loaders/example_data/example.json");

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 1,
      "source": "blob",
    },
    "pageContent": "This is a sentence.",
  },
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 2,
      "source": "blob",
    },
    "pageContent": "This is another sentence.",
  },
]
*/
```
### Using JSON pointer example
For a more advanced scenario, you can choose which keys in your JSON object you want to extract strings from.
In this example, we only want to extract information from the "from" and "surname" entries.
{ "1": { "body": "BD 2023 SUMMER", "from": "LinkedIn Job", "labels": ["IMPORTANT", "CATEGORY_UPDATES", "INBOX"] }, "2": { "body": "Intern, Treasury and other roles are available", "from": "LinkedIn Job2", "labels": ["IMPORTANT"], "other": { "name": "plop", "surname": "bob" } }}
Example code:
```typescript
import { JSONLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLoader(
  "src/document_loaders/example_data/example.json",
  ["/from", "/surname"]
);

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 1,
      "source": "blob",
    },
    "pageContent": "BD 2023 SUMMER",
  },
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 2,
      "source": "blob",
    },
    "pageContent": "LinkedIn Job",
  },
  ...
]
*/
```
Open AI Whisper Audio
=====================
**Compatibility:** Only available on Node.js.
This covers how to load document objects from an audio file using the [Open AI Whisper](https://platform.openai.com/docs/guides/speech-to-text) API.
Setup
-----
To run this loader you will need to create an OpenAI account and obtain an auth key from the [https://platform.openai.com/account](https://platform.openai.com/account) page.
Usage
-----
Once the auth key is configured, you can use the loader to create transcriptions and then convert them into documents.
```typescript
import { OpenAIWhisperAudio } from "langchain/document_loaders/fs/openai_whisper_audio";

const filePath = "./src/document_loaders/example_data/test.mp3";

const loader = new OpenAIWhisperAudio(filePath);

const docs = await loader.load();

console.log(docs);
```
#### API Reference:
* [OpenAIWhisperAudio](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_openai_whisper_audio.OpenAIWhisperAudio.html) from `langchain/document_loaders/fs/openai_whisper_audio`
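A note on supplying the key: the sketch below assumes the loader picks up the standard `OPENAI_API_KEY` environment variable, as LangChain's other OpenAI integrations do; the key value shown is a placeholder.

```typescript
import { OpenAIWhisperAudio } from "langchain/document_loaders/fs/openai_whisper_audio";

// Minimal sketch, assuming the loader reads the standard OPENAI_API_KEY
// environment variable like LangChain's other OpenAI integrations.
// In practice, set the key in your shell rather than in code.
process.env.OPENAI_API_KEY = process.env.OPENAI_API_KEY ?? "sk-..."; // placeholder

const loader = new OpenAIWhisperAudio("./src/document_loaders/example_data/test.mp3");
const docs = await loader.load();
console.log(docs[0].pageContent); // the transcription text
```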
Notion markdown export
======================
This example goes over how to load data from your Notion pages exported from the Notion dashboard.
First, export your Notion pages as **Markdown & CSV** as described in the official documentation [here](https://www.notion.so/help/export-your-content). Make sure to select `Include subpages` and `Create folders for subpages`.
Then, unzip the downloaded file and move the unzipped folder into your repository. It should contain the markdown files of your pages.
Once the folder is in your repository, simply run the example below:
```typescript
import { NotionLoader } from "langchain/document_loaders/fs/notion";

export const run = async () => {
  /** Provide the directory path of your notion folder */
  const directoryPath = "Notion_DB";
  const loader = new NotionLoader(directoryPath);
  const docs = await loader.load();
  console.log({ docs });
};
```
#### API Reference:
* [NotionLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_notion.NotionLoader.html) from `langchain/document_loaders/fs/notion`
JSONLines files
===============
This example goes over how to load data from JSONLines or JSONL files. The second argument is a JSONPointer to the property to extract from each JSON object in the file. One document will be created for each JSON object in the file.
Example JSONLines file:
{"html": "This is a sentence."}{"html": "This is another sentence."}
Example code:
```typescript
import { JSONLinesLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLinesLoader(
  "src/document_loaders/example_data/example.jsonl",
  "/html"
);

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "blobType": "application/jsonl+json",
      "line": 1,
      "source": "blob",
    },
    "pageContent": "This is a sentence.",
  },
  Document {
    "metadata": {
      "blobType": "application/jsonl+json",
      "line": 2,
      "source": "blob",
    },
    "pageContent": "This is another sentence.",
  },
]
*/
```
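Since the second argument is a JSON pointer, it should also reach nested properties, one pointer segment per level. A minimal sketch under that assumption; the file name and shape here are hypothetical:

```typescript
import { JSONLinesLoader } from "langchain/document_loaders/fs/json";

// Hypothetical file where each line looks like:
//   {"meta": {"html": "Some sentence."}}
// Assuming standard JSON pointer semantics, "/meta/html" selects the
// nested property from each line's object.
const loader = new JSONLinesLoader(
  "src/document_loaders/example_data/example_nested.jsonl",
  "/meta/html"
);

const docs = await loader.load();
console.log(docs.map((doc) => doc.pageContent));
```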
PPTX files
==========
This example goes over how to load data from PPTX files. By default, one document will be created for all pages in the PPTX file.
Setup
-----
```bash
npm install officeparser
# or
yarn add officeparser
# or
pnpm add officeparser
```
Usage, one document per page
----------------------------
```typescript
import { PPTXLoader } from "langchain/document_loaders/fs/pptx";

const loader = new PPTXLoader("src/document_loaders/example_data/example.pptx");

const docs = await loader.load();
```
Text files
==========
This example goes over how to load data from text files.
```typescript
import { TextLoader } from "langchain/document_loaders/fs/text";

const loader = new TextLoader("src/document_loaders/example_data/example.txt");

const docs = await loader.load();
```
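To load every text file in a folder rather than a single file, you can combine `TextLoader` with the `DirectoryLoader` covered in the "Folders with multiple files" section (and used again in the PDF section below). A minimal sketch; the directory path is an example:

```typescript
import { DirectoryLoader } from "langchain/document_loaders/fs/directory";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Minimal sketch: map the ".txt" extension to TextLoader so every text
// file in the directory is loaded. The directory path is an example.
const loader = new DirectoryLoader("src/document_loaders/example_data/", {
  ".txt": (path: string) => new TextLoader(path),
});

const docs = await loader.load();
console.log(docs.length);
```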
PDF files
=========
This example goes over how to load data from PDF files. By default, one document will be created for each page in the PDF file. You can change this behavior by setting the `splitPages` option to `false`.
Setup
-----
```bash
npm install pdf-parse
# or
yarn add pdf-parse
# or
pnpm add pdf-parse
```
Usage, one document per page
----------------------------
```typescript
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf");

const docs = await loader.load();
```
Usage, one document per file
----------------------------
```typescript
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", {
  splitPages: false,
});

const docs = await loader.load();
```
Usage, custom `pdfjs` build
---------------------------
By default we use the `pdfjs` build bundled with `pdf-parse`, which is compatible with most environments, including Node.js and modern browsers. If you want to use a more recent version of `pdfjs-dist` or if you want to use a custom build of `pdfjs-dist`, you can do so by providing a custom `pdfjs` function that returns a promise that resolves to the `PDFJS` object.
In the following example we use the "legacy" (see [pdfjs docs](https://github.com/mozilla/pdf.js/wiki/Frequently-Asked-Questions#which-browsersenvironments-are-supported)) build of `pdfjs-dist`, which includes several polyfills not included in the default build.
```bash
npm install pdfjs-dist
# or
yarn add pdfjs-dist
# or
pnpm add pdfjs-dist
```
```typescript
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", {
  // you may need to add `.then(m => m.default)` to the end of the import
  pdfjs: () => import("pdfjs-dist/legacy/build/pdf.js"),
});
```
Eliminating extra spaces
------------------------
PDFs come in many varieties, which makes reading them a challenge. The loader parses individual text elements and joins them together with a space by default, but if you are seeing excessive spaces, this may not be the desired behavior. In that case, you can override the separator with an empty string like this:
```typescript
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", {
  parsedItemSeparator: "",
});

const docs = await loader.load();
```
Loading directories
-------------------
```typescript
import { DirectoryLoader } from "langchain/document_loaders/fs/directory";
import { PDFLoader } from "langchain/document_loaders/fs/pdf";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

/* Load all PDFs within the specified directory */
const directoryLoader = new DirectoryLoader(
  "src/document_loaders/example_data/",
  {
    ".pdf": (path: string) => new PDFLoader(path),
  }
);

const docs = await directoryLoader.load();

console.log({ docs });

/* Additional steps: Split text into chunks with any TextSplitter. You can then
   use it as context or save it to memory afterwards. */
const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});

const splitDocs = await textSplitter.splitDocuments(docs);

console.log({ splitDocs });
```
#### API Reference:
* [DirectoryLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_directory.DirectoryLoader.html) from `langchain/document_loaders/fs/directory`
* [PDFLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_pdf.PDFLoader.html) from `langchain/document_loaders/fs/pdf`
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
https://js.langchain.com/v0.2/docs/integrations/document_loaders/file_loaders/subtitles
Subtitles
=========
This example goes over how to load data from subtitle files. One document will be created for each subtitle file.
Setup
-----
npm install srt-parser-2
# or: yarn add srt-parser-2 / pnpm add srt-parser-2
Usage
-----
import { SRTLoader } from "langchain/document_loaders/fs/srt";

const loader = new SRTLoader(
  "src/document_loaders/example_data/Star_Wars_The_Clone_Wars_S06E07_Crisis_at_the_Heart.srt"
);

const docs = await loader.load();
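If you have a folder of subtitle files, the loader can be combined with the `DirectoryLoader` described elsewhere in these docs; a minimal sketch:

import { DirectoryLoader } from "langchain/document_loaders/fs/directory";
import { SRTLoader } from "langchain/document_loaders/fs/srt";

// Map the .srt extension to an SRTLoader factory so each subtitle file
// in the directory becomes one document.
const loader = new DirectoryLoader("src/document_loaders/example_data/", {
  ".srt": (path: string) => new SRTLoader(path),
});

const docs = await loader.load();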
https://js.langchain.com/v0.1/docs/integrations/document_loaders/file_loaders/directory/
Folders with multiple files
===========================
This example goes over how to load data from folders with multiple files. The second argument is a map of file extensions to loader factories. Each file will be passed to the matching loader, and the resulting documents will be concatenated together.
Example folder:
src/document_loaders/example_data/example/
├── example.json
├── example.jsonl
├── example.txt
└── example.csv
Example code:
import { DirectoryLoader } from "langchain/document_loaders/fs/directory";
import {
  JSONLoader,
  JSONLinesLoader,
} from "langchain/document_loaders/fs/json";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new DirectoryLoader(
  "src/document_loaders/example_data/example",
  {
    ".json": (path) => new JSONLoader(path, "/texts"),
    ".jsonl": (path) => new JSONLinesLoader(path, "/html"),
    ".txt": (path) => new TextLoader(path),
    ".csv": (path) => new CSVLoader(path, "text"),
  }
);

const docs = await loader.load();
console.log({ docs });
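The loader also accepts optional arguments for whether to recurse into subfolders and how to treat files with no matching loader. A minimal sketch, assuming the optional `recursive` argument and the `UnknownHandling` export from the same module:

import {
  DirectoryLoader,
  UnknownHandling,
} from "langchain/document_loaders/fs/directory";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Sketch: the assumed third argument disables recursion into subfolders,
// and the assumed fourth argument silently skips unmatched extensions.
const loader = new DirectoryLoader(
  "src/document_loaders/example_data/example",
  {
    ".txt": (path) => new TextLoader(path),
  },
  false, // recursive?
  UnknownHandling.Ignore // how to handle files with no matching loader
);

const docs = await loader.load();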
https://js.langchain.com/v0.1/docs/modules/chains/additional/openai_functions/openapi/
OpenAPI Calls
=============
Compatibility
Must be used with an [OpenAI Functions](https://platform.openai.com/docs/guides/gpt/function-calling) model.
This chain can automatically select and call APIs based only on an OpenAPI spec. It parses an input OpenAPI spec into JSON Schema that the OpenAI functions API can handle. This allows ChatGPT to automatically select the correct method and populate the correct parameters for an API call in the spec for a given user input. The chain then makes the actual API call and returns the result.
Usage
-----
For brevity, the examples below initialize the chain with a URL hosting an OpenAPI spec, but you can also pass a spec directly into the method, as in the sketch just below.
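A rough sketch of that second form, where the inline spec object is hypothetical and trimmed to the fields such a spec needs:

import { createOpenAPIChain } from "langchain/chains";

// Hypothetical minimal OpenAPI spec object, passed in place of a URL.
const spec = {
  openapi: "3.0.0",
  info: { title: "Example API", version: "1.0.0" },
  servers: [{ url: "https://example.com" }],
  paths: {
    "/comic": {
      get: {
        operationId: "getTodaysComic",
        description: "Gets today's comic",
        responses: { "200": { description: "OK" } },
      },
    },
  },
};

const chain = await createOpenAPIChain(spec);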
### Query XKCD
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
npm install @langchain/openai
# or: yarn add @langchain/openai / pnpm add @langchain/openai
import { createOpenAPIChain } from "langchain/chains";

const chain = await createOpenAPIChain(
  "https://gist.githubusercontent.com/roaldnefs/053e505b2b7a807290908fe9aa3e1f00/raw/0a212622ebfef501163f91e23803552411ed00e4/openapi.yaml"
);

const result = await chain.run(`What's today's comic?`);
console.log(JSON.stringify(result, null, 2));

/*
  {
    "month": "6",
    "num": 2795,
    "link": "",
    "year": "2023",
    "news": "",
    "safe_title": "Glass-Topped Table",
    "transcript": "",
    "alt": "You can pour a drink into it while hosting a party, although it's a real pain to fit in the dishwasher afterward.",
    "img": "https://imgs.xkcd.com/comics/glass_topped_table.png",
    "title": "Glass-Topped Table",
    "day": "28"
  }
*/
#### API Reference:
* [createOpenAPIChain](https://api.js.langchain.com/functions/langchain_chains.createOpenAPIChain.html) from `langchain/chains`
### Translation Service (POST request)
The OpenAPI chain can also make POST requests and populate bodies with JSON content if necessary.
import { createOpenAPIChain } from "langchain/chains";

const chain = await createOpenAPIChain("https://api.speak.com/openapi.yaml");
const result = await chain.run(`How would you say no thanks in Russian?`);
console.log(JSON.stringify(result, null, 2));

/*
  {
    "explanation": "<translation language=\"Russian\" context=\"\">\nНет, спасибо.\n</translation>\n\n
      <alternatives context=\"\">\n
      1. \"Нет, не надо\" *(Neutral/Formal - a polite way to decline something)*\n
      2. \"Ни в коем случае\" *(Strongly informal - used when you want to emphasize that you absolutely do not want something)*\n
      3. \"Нет, благодарю\" *(Slightly more formal - a polite way to decline something while expressing gratitude)*\n
      </alternatives>\n\n
      <example-convo language=\"Russian\">\n
      <context>Mike offers Anna some cake, but she doesn't want any.</context>\n
      * Mike: \"Анна, хочешь попробовать мой волшебный торт? Он сделан с любовью и волшебством!\"\n
      * Anna: \"Спасибо, Майк, но я на диете. Нет, благодарю.\"\n
      * Mike: \"Ну ладно, больше для меня!\"\n
      </example-convo>\n\n
      *[Report an issue or leave feedback](https://speak.com/chatgpt?rid=bxw1xq87kdua9q5pefkj73ov})*",
    "extra_response_instructions": "Use all information in the API response and fully render all Markdown.\nAlways end your response with a link to report an issue or leave feedback on the plugin."
  }
*/
#### API Reference:
* [createOpenAPIChain](https://api.js.langchain.com/functions/langchain_chains.createOpenAPIChain.html) from `langchain/chains`
### Customization
The chain will be created with a default model set to `gpt-3.5-turbo-0613`, but you can pass an options parameter into the creation method with a pre-created `ChatOpenAI` instance.
You can also pass in custom `headers` and `params` that will be appended to all requests made by the chain, allowing it to call APIs that require authentication.
import { createOpenAPIChain } from "langchain/chains";
import { ChatOpenAI } from "@langchain/openai";

const chatModel = new ChatOpenAI({ model: "gpt-4-0613", temperature: 0 });

const chain = await createOpenAPIChain("https://api.speak.com/openapi.yaml", {
  llm: chatModel,
  headers: {
    authorization: "Bearer SOME_TOKEN",
  },
});

const result = await chain.run(`How would you say no thanks in Russian?`);
console.log(JSON.stringify(result, null, 2));

/* The output is identical to the previous Translation Service example. */
#### API Reference:
* [createOpenAPIChain](https://api.js.langchain.com/functions/langchain_chains.createOpenAPIChain.html) from `langchain/chains`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
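The `headers` example above covers header-based auth; `params` work analogously for APIs that expect credentials or other values as query parameters. A hedged sketch, with a hypothetical `api_key` parameter name:

import { createOpenAPIChain } from "langchain/chains";

// Sketch: `params` are appended to every request the chain makes.
// The `api_key` name here is hypothetical; use whatever the target API expects.
const chain = await createOpenAPIChain("https://api.speak.com/openapi.yaml", {
  params: {
    api_key: "SOME_KEY",
  },
});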
https://js.langchain.com/v0.1/docs/modules/chains/popular/sqlite_legacy/
SQL
===
This example demonstrates the use of `SqlDatabaseChain` for answering questions over a SQL database.
It uses the Chinook database, a sample database available for SQL Server, Oracle, MySQL, and other systems.
info
These are legacy docs. It is now recommended to use LCEL over legacy implementations.
Looking for the LCEL docs? Click [here](/v0.1/docs/modules/chains/popular/sqlite/).
Set up
------
First install `typeorm`:
npm install typeorm
# or: yarn add typeorm / pnpm add typeorm
Then install the dependencies needed for your database. For example, for SQLite:
npm install sqlite3
# or: yarn add sqlite3 / pnpm add sqlite3
Currently, LangChain.js has default prompts for Postgres, SQLite, Microsoft SQL Server, MySQL, and SAP HANA.
Finally, follow the instructions at [https://database.guide/2-sample-databases-sqlite/](https://database.guide/2-sample-databases-sqlite/) to get the sample database for this example.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
npm install @langchain/openai
# or: yarn add @langchain/openai / pnpm add @langchain/openai
import { DataSource } from "typeorm";
import { OpenAI } from "@langchain/openai";
import { SqlDatabase } from "langchain/sql_db";
import { SqlDatabaseChain } from "langchain/chains/sql_db";

/**
 * This example uses the Chinook database, which is a sample database available
 * for SQL Server, Oracle, MySQL, etc. To set it up follow the instructions on
 * https://database.guide/2-sample-databases-sqlite/, placing the .db file in
 * the examples folder.
 */
const datasource = new DataSource({
  type: "sqlite",
  database: "Chinook.db",
});

const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

const chain = new SqlDatabaseChain({
  llm: new OpenAI({ temperature: 0 }),
  database: db,
});

const res = await chain.run("How many tracks are there?");
console.log(res);
// There are 3503 tracks.
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
* [SqlDatabaseChain](https://api.js.langchain.com/classes/langchain_chains_sql_db.SqlDatabaseChain.html) from `langchain/chains/sql_db`
You can include or exclude tables when creating the `SqlDatabase` object to help the chain focus on the tables you want. It can also reduce the number of tokens used in the chain.
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
  includesTables: ["Track"],
});
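Conversely, a sketch of excluding tables, assuming the corresponding `ignoreTables` option:

const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
  // Assumed counterpart to `includesTables`: skip these tables instead.
  ignoreTables: ["Invoice", "InvoiceLine"],
});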
If desired, you can return the used SQL command when calling the chain.
import { DataSource } from "typeorm";
import { OpenAI } from "@langchain/openai";
import { SqlDatabase } from "langchain/sql_db";
import { SqlDatabaseChain } from "langchain/chains/sql_db";

const datasource = new DataSource({
  type: "sqlite",
  database: "Chinook.db",
});

const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

const chain = new SqlDatabaseChain({
  llm: new OpenAI({ temperature: 0 }),
  database: db,
  sqlOutputKey: "sql",
});

const res = await chain.invoke({ query: "How many tracks are there?" });

/*
  Expected result:
  {
    result: ' There are 3503 tracks.',
    sql: ' SELECT COUNT(*) FROM "Track";'
  }
*/
console.log(res);
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
* [SqlDatabaseChain](https://api.js.langchain.com/classes/langchain_chains_sql_db.SqlDatabaseChain.html) from `langchain/chains/sql_db`
SAP HANA
--------
Here's an example of using the chain with a SAP HANA database:
import { DataSource } from "typeorm";
import { OpenAI } from "@langchain/openai";
import { SqlDatabase } from "langchain/sql_db";
import { SqlDatabaseChain } from "langchain/chains/sql_db";

/**
 * This example uses a SAP HANA Cloud database. You can create a free trial
 * database via https://developers.sap.com/tutorials/hana-cloud-deploying.html
 *
 * You will need to add the following packages to your package.json as they
 * are required when using typeorm with SAP HANA:
 *
 *   "hdb-pool": "^0.1.6",          (or latest version)
 *   "@sap/hana-client": "^2.17.22" (or latest version)
 */
const datasource = new DataSource({
  type: "sap",
  host: "<ADD_YOURS_HERE>.hanacloud.ondemand.com",
  port: 443,
  username: "<ADD_YOURS_HERE>",
  password: "<ADD_YOURS_HERE>",
  schema: "<ADD_YOURS_HERE>",
  encrypt: true,
  extra: {
    sslValidateCertificate: false,
  },
});

const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

const chain = new SqlDatabaseChain({
  llm: new OpenAI({ temperature: 0 }),
  database: db,
});

const res = await chain.run("How many tracks are there?");
console.log(res);
// There are 3503 tracks.
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
* [SqlDatabaseChain](https://api.js.langchain.com/classes/langchain_chains_sql_db.SqlDatabaseChain.html) from `langchain/chains/sql_db`
Custom prompt
-------------
You can also customize the prompt that is used. Here is an example prompting the model to understand that "foobar" is the same as the Employee table:
import { DataSource } from "typeorm";
import { OpenAI } from "@langchain/openai";
import { SqlDatabase } from "langchain/sql_db";
import { SqlDatabaseChain } from "langchain/chains/sql_db";
import { PromptTemplate } from "@langchain/core/prompts";

const template = `Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.
Use the following format:

Question: "Question here"
SQLQuery: "SQL Query to run"
SQLResult: "Result of the SQLQuery"
Answer: "Final answer here"

Only use the following tables:

{table_info}

If someone asks for the table foobar, they really mean the employee table.

Question: {input}`;

const prompt = PromptTemplate.fromTemplate(template);

const datasource = new DataSource({
  type: "sqlite",
  database: "data/Chinook.db",
});

const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});

const chain = new SqlDatabaseChain({
  llm: new OpenAI({ temperature: 0 }),
  database: db,
  sqlOutputKey: "sql",
  prompt,
});

const res = await chain.invoke({
  query: "How many employees are there in the foobar table?",
});
console.log(res);

/*
  {
    result: ' There are 8 employees in the foobar table.',
    sql: ' SELECT COUNT(*) FROM Employee;'
  }
*/
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [SqlDatabase](https://api.js.langchain.com/classes/langchain_sql_db.SqlDatabase.html) from `langchain/sql_db`
* [SqlDatabaseChain](https://api.js.langchain.com/classes/langchain_chains_sql_db.SqlDatabaseChain.html) from `langchain/chains/sql_db`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
https://js.langchain.com/docs/modules/indexes/document_loaders/examples/file_loaders/unstructured
Unstructured
============
This example covers how to use [Unstructured](/v0.1/docs/ecosystem/integrations/unstructured/) to load files of many types. Unstructured currently supports loading text files, PowerPoint presentations, HTML, PDFs, images, and more.
Setup
-----
You can run Unstructured locally on your computer using Docker. To do so, you need Docker installed; you can find installation instructions [here](https://docs.docker.com/get-docker/).
docker run -p 8000:8000 -d --rm --name unstructured-api quay.io/unstructured-io/unstructured-api:latest --port 8000 --host 0.0.0.0
Usage
-----
Once Unstructured is running, you can use the following code to load a file from your computer:
import { UnstructuredLoader } from "langchain/document_loaders/fs/unstructured";

const options = {
  apiKey: "MY_API_KEY",
};

const loader = new UnstructuredLoader(
  "src/document_loaders/example_data/notion.md",
  options
);

const docs = await loader.load();
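To target the local Docker container from the Setup section rather than the hosted API, the request URL can be overridden; a sketch, assuming the loader's `apiUrl` option and the container's default `/general/v0/general` route:

import { UnstructuredLoader } from "langchain/document_loaders/fs/unstructured";

const options = {
  // Assumption: point the loader at the locally running container
  // started in the Setup section above.
  apiUrl: "http://localhost:8000/general/v0/general",
};

const loader = new UnstructuredLoader(
  "src/document_loaders/example_data/notion.md",
  options
);

const docs = await loader.load();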
#### API Reference:
* [UnstructuredLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_unstructured.UnstructuredLoader.html) from `langchain/document_loaders/fs/unstructured`
Directories
-----------
You can also load all of the files in a directory using [`UnstructuredDirectoryLoader`](https://api.js.langchain.com/classes/langchain_document_loaders_fs_unstructured.UnstructuredDirectoryLoader.html), which inherits from [`DirectoryLoader`](/v0.1/docs/integrations/document_loaders/file_loaders/directory/):
import { UnstructuredDirectoryLoader } from "langchain/document_loaders/fs/unstructured";

const options = {
  apiKey: "MY_API_KEY",
};

const loader = new UnstructuredDirectoryLoader(
  "langchain/src/document_loaders/tests/example_data",
  options
);

const docs = await loader.load();
#### API Reference:
* [UnstructuredDirectoryLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_unstructured.UnstructuredDirectoryLoader.html) from `langchain/document_loaders/fs/unstructured`
https://js.langchain.com/v0.1/docs/integrations/document_loaders/file_loaders/chatgpt/
ChatGPT files
=============
This example goes over how to load `conversations.json` from your ChatGPT data export folder. You can receive your data export by email by going to: ChatGPT -> (Profile) -> Settings -> Export data -> Confirm export -> Check email.
Usage, extracting all logs
--------------------------
Example code:
import { ChatGPTLoader } from "langchain/document_loaders/fs/chatgpt";

const loader = new ChatGPTLoader("./example_data/example_conversations.json");
const docs = await loader.load();
console.log(docs);
Usage, extracting a single log
------------------------------
Example code:
import { ChatGPTLoader } from "langchain/document_loaders/fs/chatgpt";

const loader = new ChatGPTLoader(
  "./example_data/example_conversations.json",
  1
);
const docs = await loader.load();
console.log(docs);
https://js.langchain.com/v0.1/docs/integrations/document_loaders/file_loaders/csv/
CSV files
=========
This example goes over how to load data from CSV files. The second argument is the `column` name to extract from the CSV file. One document will be created for each row in the CSV file. When `column` is not specified, each row is converted into a set of key/value pairs, with each pair output on a new line in the document's `pageContent`. When `column` is specified, one document is created for each row, and the value of the specified column is used as the document's `pageContent`.
Setup
-----
npm install d3-dsv@2
# or: yarn add d3-dsv@2 / pnpm add d3-dsv@2
Usage, extracting all columns
-----------------------------
Example CSV file:
id,text
1,This is a sentence.
2,This is another sentence.
Example code:
import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new CSVLoader("src/document_loaders/example_data/example.csv");
const docs = await loader.load();

/*
  [
    Document {
      "metadata": {
        "line": 1,
        "source": "src/document_loaders/example_data/example.csv",
      },
      "pageContent": "id: 1\ntext: This is a sentence.",
    },
    Document {
      "metadata": {
        "line": 2,
        "source": "src/document_loaders/example_data/example.csv",
      },
      "pageContent": "id: 2\ntext: This is another sentence.",
    },
  ]
*/
Usage, extracting a single column
---------------------------------
Example CSV file:
id,text
1,This is a sentence.
2,This is another sentence.
Example code:
import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new CSVLoader(
  "src/document_loaders/example_data/example.csv",
  "text"
);
const docs = await loader.load();

/*
  [
    Document {
      "metadata": {
        "line": 1,
        "source": "src/document_loaders/example_data/example.csv",
      },
      "pageContent": "This is a sentence.",
    },
    Document {
      "metadata": {
        "line": 2,
        "source": "src/document_loaders/example_data/example.csv",
      },
      "pageContent": "This is another sentence.",
    },
  ]
*/
https://js.langchain.com/v0.1/docs/integrations/document_loaders/file_loaders/docx/
Docx files
==========
This example goes over how to load data from docx files.
Setup
=====
npm install mammoth
yarn add mammoth
pnpm add mammoth
Usage
=====
import { DocxLoader } from "langchain/document_loaders/fs/docx";

const loader = new DocxLoader(
  "src/document_loaders/tests/example_data/attention.docx"
);

const docs = await loader.load();
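Each entry in `docs` is a LangChain `Document` with `pageContent` and `metadata`. A quick way to inspect the result; the metadata shown is an assumption patterned on the other file loaders in this section:

console.log(docs[0].pageContent); // the text extracted from the .docx file
console.log(docs[0].metadata); // e.g. { source: "src/document_loaders/tests/example_data/attention.docx" }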
https://js.langchain.com/v0.1/docs/integrations/document_loaders/file_loaders/epub/ |
EPUB files
==========
This example goes over how to load data from EPUB files. By default, one document will be created for each chapter in the EPUB file. You can change this behavior by setting the `splitChapters` option to `false`.
Setup
=====
npm install epub2 html-to-text
yarn add epub2 html-to-text
pnpm add epub2 html-to-text
Usage, one document per chapter
===============================
import { EPubLoader } from "langchain/document_loaders/fs/epub";

const loader = new EPubLoader("src/document_loaders/example_data/example.epub");

const docs = await loader.load();
Usage, one document per file
============================
import { EPubLoader } from "langchain/document_loaders/fs/epub";

const loader = new EPubLoader(
  "src/document_loaders/example_data/example.epub",
  {
    splitChapters: false,
  }
);

const docs = await loader.load();
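To see the effect of `splitChapters` at a glance, you can compare how many documents each mode produces. A minimal sketch, assuming `example.epub` contains more than one chapter:

import { EPubLoader } from "langchain/document_loaders/fs/epub";

const byChapter = await new EPubLoader(
  "src/document_loaders/example_data/example.epub"
).load();
const wholeFile = await new EPubLoader(
  "src/document_loaders/example_data/example.epub",
  { splitChapters: false }
).load();

console.log(byChapter.length); // one document per chapter
console.log(wholeFile.length); // 1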
https://js.langchain.com/v0.1/docs/integrations/document_loaders/file_loaders/json/ |
JSON files
==========
The JSON loader uses [JSON pointers](https://github.com/janl/node-jsonpointer) to target the keys in your JSON files that you want to extract.
### No JSON pointer example
The simplest way to use the loader is to specify no JSON pointer. It will then load all strings it finds in the JSON object.
Example JSON file:
{ "texts": ["This is a sentence.", "This is another sentence."]}
Example code:
import { JSONLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLoader("src/document_loaders/example_data/example.json");

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 1,
      "source": "blob",
    },
    "pageContent": "This is a sentence.",
  },
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 2,
      "source": "blob",
    },
    "pageContent": "This is another sentence.",
  },
]
*/
### Using JSON pointer example
For more advanced scenarios, you can choose which keys in your JSON object you want to extract strings from.
In this example, we only want to extract information from the "from" and "surname" entries. Note that the pointers select matching keys wherever they appear in the object: `surname` only exists nested under `other`, so `/surname` picks up that nested value.
{ "1": { "body": "BD 2023 SUMMER", "from": "LinkedIn Job", "labels": ["IMPORTANT", "CATEGORY_UPDATES", "INBOX"] }, "2": { "body": "Intern, Treasury and other roles are available", "from": "LinkedIn Job2", "labels": ["IMPORTANT"], "other": { "name": "plop", "surname": "bob" } }}
Example code:
import { JSONLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLoader(
  "src/document_loaders/example_data/example.json",
  ["/from", "/surname"]
);

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 1,
      "source": "blob",
    },
    "pageContent": "BD 2023 SUMMER",
  },
  Document {
    "metadata": {
      "blobType": "application/json",
      "line": 2,
      "source": "blob",
    },
    "pageContent": "LinkedIn Job",
  },
  ...
]
*/
https://js.langchain.com/v0.1/docs/integrations/document_loaders/file_loaders/jsonlines/ |
JSONLines files
===============
This example goes over how to load data from JSONLines or JSONL files. The second argument is a JSONPointer to the property to extract from each JSON object in the file. One document will be created for each JSON object in the file.
Example JSONLines file:
{"html": "This is a sentence."}{"html": "This is another sentence."}
Example code:
import { JSONLinesLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLinesLoader(
  "src/document_loaders/example_data/example.jsonl",
  "/html"
);

const docs = await loader.load();
/*
[
  Document {
    "metadata": {
      "blobType": "application/jsonl+json",
      "line": 1,
      "source": "blob",
    },
    "pageContent": "This is a sentence.",
  },
  Document {
    "metadata": {
      "blobType": "application/jsonl+json",
      "line": 2,
      "source": "blob",
    },
    "pageContent": "This is another sentence.",
  },
]
*/
https://js.langchain.com/v0.1/docs/integrations/document_loaders/file_loaders/notion_markdown/ |
Notion markdown export
======================
This example goes over how to load data from your Notion pages exported from the Notion dashboard.
First, export your Notion pages as **Markdown & CSV**, as per the official explanation [here](https://www.notion.so/help/export-your-content). Make sure to select `include subpages` and `Create folders for subpages`.
Then, unzip the downloaded file and move the unzipped folder into your repository. It should contain the markdown files of your pages.
Once the folder is in your repository, simply run the example below:
import { NotionLoader } from "langchain/document_loaders/fs/notion";

export const run = async () => {
  /** Provide the directory path of your notion folder */
  const directoryPath = "Notion_DB";
  const loader = new NotionLoader(directoryPath);
  const docs = await loader.load();
  console.log({ docs });
};
#### API Reference:
* [NotionLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_notion.NotionLoader.html) from `langchain/document_loaders/fs/notion`
https://js.langchain.com/v0.1/docs/integrations/document_loaders/file_loaders/openai_whisper_audio/ |
Open AI Whisper Audio
=====================
Compatibility
Only available on Node.js.
This covers how to load document objects from an audio file using the [Open AI Whisper](https://platform.openai.com/docs/guides/speech-to-text) API.
Setup
-----
To run this loader you will need an OpenAI account and an API key, which you can obtain from the [https://platform.openai.com/account](https://platform.openai.com/account) page.
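LangChain's OpenAI integrations typically read the key from the `OPENAI_API_KEY` environment variable, so exposing it there before running the loader should be enough. A minimal sketch; the key value is a placeholder:

// Assumption: the loader picks up the key from the standard environment variable.
// In practice, set this in your shell or a .env file rather than in code.
process.env.OPENAI_API_KEY = "<your-openai-api-key>";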
Usage
-----
Once the API key is configured, you can use the loader to create transcriptions and then convert them into Documents.
import { OpenAIWhisperAudio } from "langchain/document_loaders/fs/openai_whisper_audio";

const filePath = "./src/document_loaders/example_data/test.mp3";

const loader = new OpenAIWhisperAudio(filePath);

const docs = await loader.load();

console.log(docs);
#### API Reference:
* [OpenAIWhisperAudio](https://api.js.langchain.com/classes/langchain_document_loaders_fs_openai_whisper_audio.OpenAIWhisperAudio.html) from `langchain/document_loaders/fs/openai_whisper_audio`
https://js.langchain.com/v0.1/docs/integrations/document_loaders/file_loaders/pdf/ |
PDF files
=========
This example goes over how to load data from PDF files. By default, one document will be created for each page in the PDF file. You can change this behavior by setting the `splitPages` option to `false`.
Setup
-----
npm install pdf-parse
yarn add pdf-parse
pnpm add pdf-parse
Usage, one document per page
----------------------------
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf");

const docs = await loader.load();
Usage, one document per file
----------------------------
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", {
  splitPages: false,
});

const docs = await loader.load();
Usage, custom `pdfjs` build
---------------------------
By default we use the `pdfjs` build bundled with `pdf-parse`, which is compatible with most environments, including Node.js and modern browsers. If you want to use a more recent version of `pdfjs-dist` or if you want to use a custom build of `pdfjs-dist`, you can do so by providing a custom `pdfjs` function that returns a promise that resolves to the `PDFJS` object.
In the following example we use the "legacy" (see [pdfjs docs](https://github.com/mozilla/pdf.js/wiki/Frequently-Asked-Questions#which-browsersenvironments-are-supported)) build of `pdfjs-dist`, which includes several polyfills not included in the default build.
npm install pdfjs-dist
yarn add pdfjs-dist
pnpm add pdfjs-dist
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", {
  // you may need to add `.then(m => m.default)` to the end of the import
  pdfjs: () => import("pdfjs-dist/legacy/build/pdf.js"),
});
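If your environment resolves that dynamic import to a module wrapper rather than the `PDFJS` object itself, the variant mentioned in the comment looks like this; a sketch of the same option, not a separate API:

import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", {
  // unwrap the default export when the import resolves to a module wrapper
  pdfjs: () => import("pdfjs-dist/legacy/build/pdf.js").then((m) => m.default),
});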
Eliminating extra spaces
------------------------
PDFs come in many varieties, which makes reading them a challenge. The loader parses individual text elements and joins them together with a space by default, but if you are seeing excessive spaces, this may not be the desired behavior. In that case, you can override the separator with an empty string like this:
import { PDFLoader } from "langchain/document_loaders/fs/pdf";

const loader = new PDFLoader("src/document_loaders/example_data/example.pdf", {
  parsedItemSeparator: "",
});

const docs = await loader.load();
Loading directories
-------------------
import { DirectoryLoader } from "langchain/document_loaders/fs/directory";
import { PDFLoader } from "langchain/document_loaders/fs/pdf";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

/* Load all PDFs within the specified directory */
const directoryLoader = new DirectoryLoader(
  "src/document_loaders/example_data/",
  {
    ".pdf": (path: string) => new PDFLoader(path),
  }
);

const docs = await directoryLoader.load();

console.log({ docs });

/* Additional steps: split text into chunks with any TextSplitter.
   You can then use it as context or save it to memory afterwards. */
const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});

const splitDocs = await textSplitter.splitDocuments(docs);

console.log({ splitDocs });
#### API Reference:
* [DirectoryLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_directory.DirectoryLoader.html) from `langchain/document_loaders/fs/directory`
* [PDFLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_pdf.PDFLoader.html) from `langchain/document_loaders/fs/pdf`
* [RecursiveCharacterTextSplitter](https://api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `langchain/text_splitter`
https://js.langchain.com/v0.1/docs/integrations/document_loaders/file_loaders/subtitles/ |
Subtitles
=========
This example goes over how to load data from subtitle files. One document will be created for each subtitles file.
Setup
-----
npm install srt-parser-2
yarn add srt-parser-2
pnpm add srt-parser-2
Usage
-----
import { SRTLoader } from "langchain/document_loaders/fs/srt";

const loader = new SRTLoader(
  "src/document_loaders/example_data/Star_Wars_The_Clone_Wars_S06E07_Crisis_at_the_Heart.srt"
);

const docs = await loader.load();
https://js.langchain.com/v0.1/docs/integrations/document_loaders/file_loaders/pptx/ |
PPTX files
==========
This example goes over how to load data from PPTX files. By default, one document will be created for all pages in the PPTX file.
Setup
-----
npm install officeparser
yarn add officeparser
pnpm add officeparser
Usage, one document per file
----------------------------
import { PPTXLoader } from "langchain/document_loaders/fs/pptx";

const loader = new PPTXLoader("src/document_loaders/example_data/example.pptx");

const docs = await loader.load();
https://js.langchain.com/v0.1/docs/integrations/document_loaders/file_loaders/text/ |
Text files
==========
This example goes over how to load data from text files.
import { TextLoader } from "langchain/document_loaders/fs/text";

const loader = new TextLoader("src/document_loaders/example_data/example.txt");

const docs = await loader.load();
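For reference, the result should look roughly like this; the exact metadata is an assumption patterned on the other file loaders above:

/*
[
  Document {
    "metadata": { "source": "src/document_loaders/example_data/example.txt" },
    "pageContent": "<contents of example.txt>",
  },
]
*/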
Unstructured
============
This example covers how to use [Unstructured](/v0.1/docs/ecosystem/integrations/unstructured/) to load files of many types. Unstructured currently supports loading text files, PowerPoint presentations, HTML, PDFs, images, and more.
Setup
-----
You can run Unstructured locally on your computer using Docker. To do so, you'll need Docker installed; you can find the installation instructions [here](https://docs.docker.com/get-docker/).
```bash
docker run -p 8000:8000 -d --rm --name unstructured-api quay.io/unstructured-io/unstructured-api:latest --port 8000 --host 0.0.0.0
```
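Before loading files, you can check that the container is up. The `/healthcheck` route used below is an assumption based on the Unstructured API image; adjust it if your container version differs:

```bash
# Assumes the container started above is listening on port 8000.
curl http://localhost:8000/healthcheck
```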
Usage
-----
Once Unstructured is running, you can use the following code to load a file from your computer:
```typescript
import { UnstructuredLoader } from "langchain/document_loaders/fs/unstructured";

const options = {
  apiKey: "MY_API_KEY",
};

const loader = new UnstructuredLoader(
  "src/document_loaders/example_data/notion.md",
  options
);

const docs = await loader.load();
```
#### API Reference:
* [UnstructuredLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_unstructured.UnstructuredLoader.html) from `langchain/document_loaders/fs/unstructured`
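If you want the loader to talk to the local Docker container from the setup step rather than the hosted API, the options object also accepts an `apiUrl` field. This is a minimal sketch; the option name and the `/general/v0/general` path are assumptions, so verify them against the API reference above:

```typescript
import { UnstructuredLoader } from "langchain/document_loaders/fs/unstructured";

// Point the loader at the local container started in the Setup section.
// The `apiUrl` option and endpoint path are assumptions; check the API
// reference and your container version.
const loader = new UnstructuredLoader(
  "src/document_loaders/example_data/notion.md",
  { apiUrl: "http://localhost:8000/general/v0/general" }
);

const docs = await loader.load();
```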
Directories
-----------
You can also load all of the files in a directory using [`UnstructuredDirectoryLoader`](https://api.js.langchain.com/classes/langchain_document_loaders_fs_unstructured.UnstructuredDirectoryLoader.html), which inherits from [`DirectoryLoader`](/v0.1/docs/integrations/document_loaders/file_loaders/directory/):
```typescript
import { UnstructuredDirectoryLoader } from "langchain/document_loaders/fs/unstructured";

const options = {
  apiKey: "MY_API_KEY",
};

const loader = new UnstructuredDirectoryLoader(
  "langchain/src/document_loaders/tests/example_data",
  options
);

const docs = await loader.load();
```
#### API Reference:
* [UnstructuredDirectoryLoader](https://api.js.langchain.com/classes/langchain_document_loaders_fs_unstructured.UnstructuredDirectoryLoader.html) from `langchain/document_loaders/fs/unstructured`
Fireworks
=========
The `FireworksEmbeddings` class allows you to use the Fireworks AI API to generate embeddings.
Setup
-----
First, sign up for a [Fireworks API key](https://fireworks.ai/) and set it as an environment variable called `FIREWORKS_API_KEY`.
Next, install the `@langchain/community` package as shown below:
Tip: See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```
Usage
-----
```typescript
import { FireworksEmbeddings } from "@langchain/community/embeddings/fireworks";

/* Embed queries */
const fireworksEmbeddings = new FireworksEmbeddings();

const res = await fireworksEmbeddings.embedQuery("Hello world");
console.log(res);

/* Embed documents */
const documentRes = await fireworksEmbeddings.embedDocuments([
  "Hello world",
  "Bye bye",
]);
console.log(documentRes);
```
#### API Reference:
* [FireworksEmbeddings](https://api.js.langchain.com/classes/langchain_community_embeddings_fireworks.FireworksEmbeddings.html) from `@langchain/community/embeddings/fireworks`
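Both methods follow the standard `Embeddings` interface: `embedQuery` resolves to a single vector and `embedDocuments` to one vector per input string. A minimal sketch, reusing the `fireworksEmbeddings` instance from above:

```typescript
// embedQuery returns number[] (one embedding vector);
// embedDocuments returns number[][] (one vector per input document).
const vector: number[] = await fireworksEmbeddings.embedQuery("Hello world");
const vectors: number[][] = await fireworksEmbeddings.embedDocuments([
  "Hello world",
  "Bye bye",
]);
console.log(vector.length); // the embedding dimensionality
console.log(vectors.length); // 2 - one vector per input document
```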
Introduction
============
**LangChain** is a framework for developing applications powered by language models. It enables applications that:
* **Are context-aware**: connect a language model to sources of context (prompt instructions, few-shot examples, content to ground its response in, etc.)
* **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)
This framework consists of several parts.
* **LangChain Libraries**: The Python and JavaScript libraries. These contain interfaces and integrations for a myriad of components, a basic runtime for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
* **[LangChain Templates](https://python.langchain.com/docs/templates)**: A collection of easily deployable reference architectures for a wide variety of tasks. (_Python only_)
* **[LangServe](https://python.langchain.com/docs/langserve)**: A library for deploying LangChain chains as a REST API. (_Python only_)
* **[LangSmith](https://smith.langchain.com/)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
![LangChain Diagram](/v0.1/assets/images/langchain_stack_feb_2024-101939844004a99c1b676723fc0ee5e9.webp)
Together, these products simplify the entire application lifecycle:
* **Develop**: Write your applications in LangChain/LangChain.js. Hit the ground running using Templates for reference.
* **Productionize**: Use LangSmith to inspect, test and monitor your chains, so that you can constantly improve and deploy with confidence.
* **Deploy**: Turn any chain into an API with LangServe.
LangChain Libraries
-------------------
The main value props of the LangChain packages are:
1. **Components**: composable tools and integrations for working with language models. Components are modular and easy to use, whether you are using the rest of the LangChain framework or not.
2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks
Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.
Get started
-----------
[Here's](/v0.1/docs/get_started/installation/) how to install LangChain, set up your environment, and start building.
We recommend following our [Quickstart](/v0.1/docs/get_started/quickstart/) guide to familiarize yourself with the framework by building your first LangChain application.
Read up on our [Security](/v0.1/docs/security/) best practices to make sure you're developing safely with LangChain.
Note: These docs focus on the JS/TS LangChain library. [Head here](https://python.langchain.com) for docs on the Python LangChain library.
LangChain Expression Language (LCEL)
------------------------------------
LCEL is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest βprompt + LLMβ chain to the most complex chains.
* **[Overview](/v0.1/docs/expression_language/)**: LCEL and its benefits
* **[Interface](/v0.1/docs/expression_language/interface/)**: The standard interface for LCEL objects
* **[How-to](/v0.1/docs/expression_language/how_to/routing/)**: Key features of LCEL
* **[Cookbook](/v0.1/docs/expression_language/cookbook/)**: Example code for accomplishing common tasks
Modules
-------
LangChain provides standard, extendable interfaces and integrations for the following modules:
#### [Model I/O](/v0.1/docs/modules/model_io/)

Interface with language models

#### [Retrieval](/v0.1/docs/modules/data_connection/)

Interface with application-specific data

#### [Agents](/v0.1/docs/modules/agents/)

Let models choose which tools to use given high-level directives
Examples, ecosystem, and resources
----------------------------------

### [Use cases](/v0.1/docs/use_cases/)

Walkthroughs and techniques for common end-to-end use cases, like:

* [Document question answering](/v0.1/docs/use_cases/question_answering/)
* [RAG](/v0.1/docs/use_cases/question_answering/)
* [Agents](/v0.1/docs/use_cases/autonomous_agents/)
* and much more...

### [Integrations](/v0.1/docs/integrations/platforms/)

LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/v0.1/docs/integrations/platforms/).

### [API reference](https://api.js.langchain.com)

Head to the reference section for full documentation of all classes and methods in the LangChain and LangChain Experimental packages.

### [Developer's guide](/v0.1/docs/contributing/)

Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.

### [Community](/v0.1/docs/community/)

Head to the [Community navigator](/v0.1/docs/community/) to find places to ask questions, share feedback, meet other developers, and dream about the future of LLMs.
Creating custom callback handlers
=================================
You can also create your own handler by implementing the `BaseCallbackHandler` interface. This is useful if you want to do something more complex than just logging to the console, e.g. sending the events to a logging service. As an example, here is a simple implementation of a handler that logs to the console:
Tip: See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install @langchain/community
# or
yarn add @langchain/community
# or
pnpm add @langchain/community
```
```typescript
import { Serialized } from "@langchain/core/load/serializable";
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";
import { AgentAction, AgentFinish } from "@langchain/core/agents";
import { ChainValues } from "@langchain/core/utils/types";

export class MyCallbackHandler extends BaseCallbackHandler {
  name = "MyCallbackHandler";

  async handleChainStart(chain: Serialized) {
    console.log(`Entering new ${chain.id} chain...`);
  }

  async handleChainEnd(_output: ChainValues) {
    console.log("Finished chain.");
  }

  async handleAgentAction(action: AgentAction) {
    console.log(action.log);
  }

  async handleToolEnd(output: string) {
    console.log(output);
  }

  async handleText(text: string) {
    console.log(text);
  }

  async handleAgentEnd(action: AgentFinish) {
    console.log(action.log);
  }
}
```
#### API Reference:
* [Serialized](https://api.js.langchain.com/types/langchain_core_load_serializable.Serialized.html) from `@langchain/core/load/serializable`
* [BaseCallbackHandler](https://api.js.langchain.com/classes/langchain_core_callbacks_base.BaseCallbackHandler.html) from `@langchain/core/callbacks/base`
* [AgentAction](https://api.js.langchain.com/types/langchain_core_agents.AgentAction.html) from `@langchain/core/agents`
* [AgentFinish](https://api.js.langchain.com/types/langchain_core_agents.AgentFinish.html) from `@langchain/core/agents`
* [ChainValues](https://api.js.langchain.com/types/langchain_core_utils_types.ChainValues.html) from `@langchain/core/utils/types`
You could then attach this handler to a chain or agent as described in the [Callbacks module documentation](/v0.1/docs/modules/callbacks/).
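For instance, here is a minimal sketch that attaches the handler for a single request (the `./my-callback-handler` import path is hypothetical; it assumes the class above was saved to that file):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { MyCallbackHandler } from "./my-callback-handler"; // hypothetical local path

const prompt = ChatPromptTemplate.fromTemplate("Tell me a joke about {topic}");
const model = new ChatOpenAI({});
const chain = prompt.pipe(model);

// Request-scoped callbacks: the handler only fires for this invocation.
const result = await chain.invoke(
  { topic: "bears" },
  { callbacks: [new MyCallbackHandler()] }
);
```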
Callbacks in custom Chains/Agents
=================================
LangChain is designed to be extensible. You can add your own custom Chains and Agents to the library. This page will show you how to add callbacks to your custom Chains and Agents.
Adding callbacks to custom Chains
---------------------------------
When you create a custom chain you can easily set it up to use the same callback system as all the built-in chains. See this guide for more information on how to [create custom chains and use callbacks inside them](/v0.1/docs/modules/chains/#subclassing-basechain). A sketch of the idea follows.
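Below is a minimal sketch, assuming the `BaseChain` subclassing API from the chains module; the optional `runManager` argument to `_call` gives access to the same callback system the built-in chains use:

```typescript
import { BaseChain } from "langchain/chains";
import { ChainValues } from "@langchain/core/utils/types";
import { CallbackManagerForChainRun } from "@langchain/core/callbacks/manager";

// A toy chain that upper-cases its input, emitting an intermediate
// event through the callback system.
class ShoutChain extends BaseChain {
  _chainType() {
    return "shout_chain";
  }

  get inputKeys() {
    return ["text"];
  }

  get outputKeys() {
    return ["shouted"];
  }

  async _call(
    values: ChainValues,
    runManager?: CallbackManagerForChainRun
  ): Promise<ChainValues> {
    // Forwarded to any attached handler's handleText method.
    await runManager?.handleText("Shouting...");
    return { shouted: String(values.text).toUpperCase() };
  }
}

const result = await new ShoutChain({}).invoke({ text: "hello" });
console.log(result); // { shouted: "HELLO" }
```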
Tags
====
You can add tags to your callbacks by passing a `tags` argument to the `call()`/`run()`/`apply()` methods. This is useful for filtering your logs: e.g. if you want to log all requests made to a specific `LLMChain`, you can add a tag and then filter your logs by that tag. You can pass tags to both constructor and request callbacks; see the examples above for details. These tags are then passed to the `tags` argument of the "start" callback methods, i.e. [`handleLLMStart`](https://api.js.langchain.com/interfaces/langchain_core_callbacks_base.CallbackHandlerMethods.html#handleLLMStart), [`handleChatModelStart`](https://api.js.langchain.com/interfaces/langchain_core_callbacks_base.CallbackHandlerMethods.html#handleChatModelStart), [`handleChainStart`](https://api.js.langchain.com/interfaces/langchain_core_callbacks_base.CallbackHandlerMethods.html#handleChainStart), and [`handleToolStart`](https://api.js.langchain.com/interfaces/langchain_core_callbacks_base.CallbackHandlerMethods.html#handleToolStart).
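For example, here is a minimal sketch (assuming an OpenAI API key is set) that tags a single request; every "start" event triggered by this call receives `["math"]` in its `tags` argument:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromTemplate("What is 1 + {number}?");
const model = new ChatOpenAI({});
const chain = prompt.pipe(model);

// Request-scoped tags: handlers attached to this run see ["math"]
// in handleChainStart, handleChatModelStart, etc.
await chain.invoke({ number: "2" }, { tags: ["math"] });
```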
Listeners
=========
LangChain callbacks offer a method `withListeners` which allows you to add event listeners for the following events:
* `onStart` - called when the chain starts
* `onEnd` - called when the chain ends
* `onError` - called when an error occurs
Each of these accepts a callback function which will be called when the event occurs. The callback function can accept two arguments:

* `input` - the input value; for example, it would be `RunInput` if used with a Runnable.
* `config` - an optional config object. This can contain metadata, callbacks, or any other values passed in as a config object when the chain is started.
Below is an example which demonstrates how to use the `withListeners` method:
Tip: See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

```bash
npm install @langchain/openai
# or
yarn add @langchain/openai
# or
pnpm add @langchain/openai
```
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { Run } from "@langchain/core/tracers/base";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["ai", "You are a nice assistant."],
  ["human", "{question}"],
]);
const model = new ChatOpenAI({});
const chain = prompt.pipe(model);

const trackTime = () => {
  let start: { startTime: number; question: string };
  let end: { endTime: number; answer: string };

  const handleStart = (run: Run) => {
    start = {
      startTime: run.start_time,
      question: run.inputs.question,
    };
  };

  const handleEnd = (run: Run) => {
    if (run.end_time && run.outputs) {
      end = {
        endTime: run.end_time,
        answer: run.outputs.content,
      };
    }
    console.log("start", start);
    console.log("end", end);
    console.log(`total time: ${end.endTime - start.startTime}ms`);
  };

  return { handleStart, handleEnd };
};

const { handleStart, handleEnd } = trackTime();

await chain
  .withListeners({
    onStart: (run: Run) => {
      handleStart(run);
    },
    onEnd: (run: Run) => {
      handleEnd(run);
    },
  })
  .invoke({ question: "What is the meaning of life?" });

/*
  start { startTime: 1701723365470, question: 'What is the meaning of life?' }
  end {
    endTime: 1701723368767,
    answer: "The meaning of life is a philosophical question that has been contemplated and debated by scholars, philosophers, and individuals for centuries. The answer to this question can vary depending on one's beliefs, perspectives, and values. Some suggest that the meaning of life is to seek happiness and fulfillment, others propose it is to serve a greater purpose or contribute to the well-being of others. Ultimately, the meaning of life can be subjective and personal, and it is up to each individual to determine their own sense of purpose and meaning."
  }
  total time: 3297ms
*/
```
#### API Reference:
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [Run](https://api.js.langchain.com/interfaces/langchain_core_tracers_base.Run.html) from `@langchain/core/tracers/base`
* [ChatPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
Select by length
================
This example selector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will exceed the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more.
```typescript
import { PromptTemplate, FewShotPromptTemplate } from "@langchain/core/prompts";
import { LengthBasedExampleSelector } from "@langchain/core/example_selectors";

export async function run() {
  // Create a prompt template that will be used to format the examples.
  const examplePrompt = new PromptTemplate({
    inputVariables: ["input", "output"],
    template: "Input: {input}\nOutput: {output}",
  });

  // Create a LengthBasedExampleSelector that will be used to select the examples.
  const exampleSelector = await LengthBasedExampleSelector.fromExamples(
    [
      { input: "happy", output: "sad" },
      { input: "tall", output: "short" },
      { input: "energetic", output: "lethargic" },
      { input: "sunny", output: "gloomy" },
      { input: "windy", output: "calm" },
    ],
    {
      examplePrompt,
      maxLength: 25,
    }
  );

  // Create a FewShotPromptTemplate that will use the example selector.
  const dynamicPrompt = new FewShotPromptTemplate({
    // We provide an ExampleSelector instead of examples.
    exampleSelector,
    examplePrompt,
    prefix: "Give the antonym of every input",
    suffix: "Input: {adjective}\nOutput:",
    inputVariables: ["adjective"],
  });

  // An example with small input, so it selects all examples.
  console.log(await dynamicPrompt.format({ adjective: "big" }));
  /*
    Give the antonym of every input

    Input: happy
    Output: sad

    Input: tall
    Output: short

    Input: energetic
    Output: lethargic

    Input: sunny
    Output: gloomy

    Input: windy
    Output: calm

    Input: big
    Output:
  */

  // An example with long input, so it selects only one example.
  const longString =
    "big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else";
  console.log(await dynamicPrompt.format({ adjective: longString }));
  /*
    Give the antonym of every input

    Input: happy
    Output: sad

    Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else
    Output:
  */
}
```
#### API Reference:
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [FewShotPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts`
* [LengthBasedExampleSelector](https://api.js.langchain.com/classes/langchain_core_example_selectors.LengthBasedExampleSelector.html) from `@langchain/core/example_selectors`
Select by similarity
====================
This object selects examples based on similarity to the input. It does this by finding the examples whose embeddings have the greatest cosine similarity with the embedded input.
The fields of the examples object will be used as parameters to format the `examplePrompt` passed to the `FewShotPromptTemplate`. Each example should therefore contain all required fields for the example prompt you are using.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
npm install @langchain/openai @langchain/community
yarn add @langchain/openai @langchain/community
pnpm add @langchain/openai @langchain/community
import { OpenAIEmbeddings } from "@langchain/openai";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { PromptTemplate, FewShotPromptTemplate } from "@langchain/core/prompts";
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";

// Create a prompt template that will be used to format the examples.
const examplePrompt = PromptTemplate.fromTemplate(
  "Input: {input}\nOutput: {output}"
);

// Create a SemanticSimilarityExampleSelector that will be used to select the examples.
const exampleSelector = await SemanticSimilarityExampleSelector.fromExamples(
  [
    { input: "happy", output: "sad" },
    { input: "tall", output: "short" },
    { input: "energetic", output: "lethargic" },
    { input: "sunny", output: "gloomy" },
    { input: "windy", output: "calm" },
  ],
  new OpenAIEmbeddings(),
  HNSWLib,
  { k: 1 }
);

// Create a FewShotPromptTemplate that will use the example selector.
const dynamicPrompt = new FewShotPromptTemplate({
  // We provide an ExampleSelector instead of examples.
  exampleSelector,
  examplePrompt,
  prefix: "Give the antonym of every input",
  suffix: "Input: {adjective}\nOutput:",
  inputVariables: ["adjective"],
});

// Input is about the weather, so should select eg. the sunny/gloomy example
console.log(await dynamicPrompt.format({ adjective: "rainy" }));
/*
   Give the antonym of every input

   Input: sunny
   Output: gloomy

   Input: rainy
   Output:
*/

// Input is a measurement, so should select the tall/short example
console.log(await dynamicPrompt.format({ adjective: "large" }));
/*
   Give the antonym of every input

   Input: tall
   Output: short

   Input: large
   Output:
*/
#### API Reference:
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [HNSWLib](https://api.js.langchain.com/classes/langchain_community_vectorstores_hnswlib.HNSWLib.html) from `@langchain/community/vectorstores/hnswlib`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [FewShotPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts`
* [SemanticSimilarityExampleSelector](https://api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html) from `@langchain/core/example_selectors`
By default, all of the fields in each example object are concatenated together, embedded, and stored in the vectorstore for later similarity search against user queries.
If you only want to embed specific keys (e.g., you only want to search for examples that have a similar query to the one the user provides), you can pass an `inputKeys` array in the final `options` parameter.
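For instance, here is a minimal sketch reusing the `fromExamples` setup from above (the example data and the choice of keys are illustrative):

// Only the "input" field of each example is embedded, so similarity
// search matches on inputs rather than on concatenated input/output pairs.
const inputKeyedSelector = await SemanticSimilarityExampleSelector.fromExamples(
  [
    { input: "happy", output: "sad" },
    { input: "tall", output: "short" },
  ],
  new OpenAIEmbeddings(),
  HNSWLib,
  {
    k: 1,
    inputKeys: ["input"],
  }
);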
Loading from an existing vectorstore
------------------------------------
You can also use a pre-initialized vector store by passing an instance to the `SemanticSimilarityExampleSelector` constructor directly, as shown below. You can also add more examples via the `addExample` method:
// Ephemeral, in-memory vector store for demo purposes
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { PromptTemplate, FewShotPromptTemplate } from "@langchain/core/prompts";
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";

const embeddings = new OpenAIEmbeddings();
const memoryVectorStore = new MemoryVectorStore(embeddings);

const examples = [
  {
    query: "healthy food",
    output: `galbi`,
  },
  {
    query: "healthy food",
    output: `schnitzel`,
  },
  {
    query: "foo",
    output: `bar`,
  },
];

const exampleSelector = new SemanticSimilarityExampleSelector({
  vectorStore: memoryVectorStore,
  k: 2,
  // Only embed the "query" key of each example
  inputKeys: ["query"],
});

for (const example of examples) {
  // Format and add an example to the underlying vector store
  await exampleSelector.addExample(example);
}

// Create a prompt template that will be used to format the examples.
const examplePrompt = PromptTemplate.fromTemplate(`<example>
  <user_input>
    {query}
  </user_input>
  <output>
    {output}
  </output>
</example>`);

// Create a FewShotPromptTemplate that will use the example selector.
const dynamicPrompt = new FewShotPromptTemplate({
  // We provide an ExampleSelector instead of examples.
  exampleSelector,
  examplePrompt,
  prefix: `Answer the user's question, using the below examples as reference:`,
  suffix: "User question: {query}",
  inputVariables: ["query"],
});

const formattedValue = await dynamicPrompt.format({
  query: "What is a healthy food?",
});
console.log(formattedValue);
/*
Answer the user's question, using the below examples as reference:

<example>
  <user_input>
    healthy
  </user_input>
  <output>
    galbi
  </output>
</example>

<example>
  <user_input>
    healthy
  </user_input>
  <output>
    schnitzel
  </output>
</example>

User question: What is a healthy food?
*/

const model = new ChatOpenAI({});
const chain = dynamicPrompt.pipe(model);
const result = await chain.invoke({ query: "What is a healthy food?" });
console.log(result);
/*
  AIMessage {
    content: 'A healthy food can be galbi or schnitzel.',
    additional_kwargs: { function_call: undefined }
  }
*/
#### API Reference:
* [MemoryVectorStore](https://api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [FewShotPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts`
* [SemanticSimilarityExampleSelector](https://api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html) from `@langchain/core/example_selectors`
Metadata filtering
------------------
When adding examples, each field is available as metadata in the produced document. If you would like further control over your search space, you can add extra fields to your examples and pass a `filter` parameter when initializing your selector:
// Ephemeral, in-memory vector store for demo purposes
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { PromptTemplate, FewShotPromptTemplate } from "@langchain/core/prompts";
import { Document } from "@langchain/core/documents";
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";

const embeddings = new OpenAIEmbeddings();
const memoryVectorStore = new MemoryVectorStore(embeddings);

const examples = [
  {
    query: "healthy food",
    output: `lettuce`,
    food_type: "vegetable",
  },
  {
    query: "healthy food",
    output: `schnitzel`,
    food_type: "veal",
  },
  {
    query: "foo",
    output: `bar`,
    food_type: "baz",
  },
];

const exampleSelector = new SemanticSimilarityExampleSelector({
  vectorStore: memoryVectorStore,
  k: 2,
  // Only embed the "query" key of each example
  inputKeys: ["query"],
  // Filter type will depend on your specific vector store.
  // See the section of the docs for the specific vector store you are using.
  filter: (doc: Document) => doc.metadata.food_type === "vegetable",
});

for (const example of examples) {
  // Format and add an example to the underlying vector store
  await exampleSelector.addExample(example);
}

// Create a prompt template that will be used to format the examples.
const examplePrompt = PromptTemplate.fromTemplate(`<example>
  <user_input>
    {query}
  </user_input>
  <output>
    {output}
  </output>
</example>`);

// Create a FewShotPromptTemplate that will use the example selector.
const dynamicPrompt = new FewShotPromptTemplate({
  // We provide an ExampleSelector instead of examples.
  exampleSelector,
  examplePrompt,
  prefix: `Answer the user's question, using the below examples as reference:`,
  suffix: "User question:\n{query}",
  inputVariables: ["query"],
});

const model = new ChatOpenAI({});
const chain = dynamicPrompt.pipe(model);
const result = await chain.invoke({
  query: "What is exactly one type of healthy food?",
});
console.log(result);
/*
  AIMessage {
    content: 'One type of healthy food is lettuce.',
    additional_kwargs: { function_call: undefined }
  }
*/
#### API Reference:
* [MemoryVectorStore](https://api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [FewShotPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts`
* [Document](https://api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
* [SemanticSimilarityExampleSelector](https://api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html) from `@langchain/core/example_selectors`
Custom vectorstore retrievers
-----------------------------
You can also pass a vectorstore retriever instead of a vectorstore. One way this could be useful is if you want to use a retrieval strategy other than pure similarity search, such as maximal marginal relevance:
/* eslint-disable @typescript-eslint/no-non-null-assertion */
// Requires a vectorstore that supports maximal marginal relevance search
import { Pinecone } from "@pinecone-database/pinecone";
import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";
import { PromptTemplate, FewShotPromptTemplate } from "@langchain/core/prompts";
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";

const pinecone = new Pinecone();

const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);

const pineconeVectorstore = await PineconeStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  { pineconeIndex }
);

const pineconeMmrRetriever = pineconeVectorstore.asRetriever({
  searchType: "mmr",
  k: 2,
});

const examples = [
  {
    query: "healthy food",
    output: `lettuce`,
    food_type: "vegetable",
  },
  {
    query: "healthy food",
    output: `schnitzel`,
    food_type: "veal",
  },
  {
    query: "foo",
    output: `bar`,
    food_type: "baz",
  },
];

const exampleSelector = new SemanticSimilarityExampleSelector({
  vectorStoreRetriever: pineconeMmrRetriever,
  // Only embed the "query" key of each example
  inputKeys: ["query"],
});

for (const example of examples) {
  // Format and add an example to the underlying vector store
  await exampleSelector.addExample(example);
}

// Create a prompt template that will be used to format the examples.
const examplePrompt = PromptTemplate.fromTemplate(`<example>
  <user_input>
    {query}
  </user_input>
  <output>
    {output}
  </output>
</example>`);

// Create a FewShotPromptTemplate that will use the example selector.
const dynamicPrompt = new FewShotPromptTemplate({
  // We provide an ExampleSelector instead of examples.
  exampleSelector,
  examplePrompt,
  prefix: `Answer the user's question, using the below examples as reference:`,
  suffix: "User question:\n{query}",
  inputVariables: ["query"],
});

const model = new ChatOpenAI({});
const chain = dynamicPrompt.pipe(model);
const result = await chain.invoke({
  query: "What is exactly one type of healthy food?",
});
console.log(result);
/*
  AIMessage {
    content: 'lettuce.',
    additional_kwargs: { function_call: undefined }
  }
*/
#### API Reference:
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [PineconeStore](https://api.js.langchain.com/classes/langchain_pinecone.PineconeStore.html) from `@langchain/pinecone`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
* [FewShotPromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.FewShotPromptTemplate.html) from `@langchain/core/prompts`
* [SemanticSimilarityExampleSelector](https://api.js.langchain.com/classes/langchain_core_example_selectors.SemanticSimilarityExampleSelector.html) from `@langchain/core/example_selectors`
IMSDB
=====
This example goes over how to load data from the Internet Movie Script Database (IMSDb) website, using Cheerio. One document will be created for each page.
Setup
-----
npm install cheerio
yarn add cheerio
pnpm add cheerio
Usage
-----
import { IMSDBLoader } from "langchain/document_loaders/web/imsdb";

const loader = new IMSDBLoader("https://imsdb.com/scripts/BlacKkKlansman.html");

const docs = await loader.load();
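The loader returns standard `Document` objects, so you can inspect the scraped script text and its metadata directly. A minimal sketch (the `source` metadata field is what web loaders typically set; treat that detail as an assumption):

// Each loaded document exposes pageContent and metadata.
console.log(docs.length);
// Web loaders typically record the page URL in the document metadata.
console.log(docs[0].metadata.source);
// Preview the beginning of the loaded script text.
console.log(docs[0].pageContent.slice(0, 300));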
Criteria Evaluation
===================
In scenarios where you wish to assess a model's output using a specific rubric or criteria set, the `criteria` evaluator proves to be a handy tool. It allows you to verify if an LLM or Chain's output complies with a defined set of criteria.
### Usage without references
In the below example, we use the `CriteriaEvalChain` to check whether an output is concise:
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
npm install @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
import { loadEvaluator } from "langchain/evaluation";

const evaluator = await loadEvaluator("criteria", { criteria: "conciseness" });

const res = await evaluator.evaluateStrings({
  input: "What's 2+2?",
  prediction:
    "What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four.",
});

console.log({ res });
/*
  {
    res: {
      reasoning: `The criterion is conciseness, which means the submission should be brief and to the point. Looking at the submission, the answer to the question "What's 2+2?" is indeed "four". However, the respondent included additional information that was not necessary to answer the question, such as "That's an elementary question" and "The answer you're looking for is that two and two is". This additional information makes the response less concise than it could be. Therefore, the submission does not meet the criterion of conciseness.N`,
      value: 'N',
      score: '0'
    }
  }
*/
#### API Reference:
* [loadEvaluator](https://api.js.langchain.com/functions/langchain_evaluation.loadEvaluator.html) from `langchain/evaluation`
#### Output Format
All string evaluators expose an `evaluateStrings` method, which accepts:

* input (string) – The input to the agent.
* prediction (string) – The predicted response.

The criteria evaluators return a dictionary with the following values:

* score: Binary integer, 0 or 1, where 1 means the output is compliant with the criteria and 0 means it is not
* value: A "Y" or "N" corresponding to the score
* reasoning: String "chain of thought reasoning" from the LLM, generated prior to producing the score
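As a rough TypeScript sketch of that shape (the interface name is ours, not part of the library, and `score` may come back as a string in some cases, as in the conciseness example above):

// Hypothetical shape of a criteria evaluation result, for illustration only.
interface CriteriaEvalResult {
  // 1 if the output complies with the criteria, 0 otherwise.
  score: number | string;
  // "Y" or "N", mirroring the score.
  value: "Y" | "N";
  // The LLM's chain-of-thought reasoning, produced before the verdict.
  reasoning: string;
}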
Using Reference Labels
----------------------
Some criteria (such as correctness) require reference labels to work correctly. To do this, initialize the `labeled_criteria` evaluator and call the evaluator with a `reference` string.
import { loadEvaluator } from "langchain/evaluation";

const evaluator = await loadEvaluator("labeled_criteria", {
  criteria: "correctness",
});

console.log("beginning evaluation");
const res = await evaluator.evaluateStrings({
  input: "What is the capital of the US?",
  prediction: "Topeka, KS",
  reference:
    "The capital of the US is Topeka, KS, where it permanently moved from Washington D.C. on May 16, 2023",
});
console.log(res);
/*
  {
    reasoning: 'The criterion for this task is the correctness of the submitted answer. The submission states that the capital of the US is Topeka, KS. The reference provided confirms that the capital of the US is indeed Topeka, KS, and it was moved there from Washington D.C. on May 16, 2023. Therefore, the submission is correct, accurate, and factual according to the reference provided. The submission meets the criterion.Y',
    value: 'Y',
    score: 1
  }
*/
#### API Reference:
* [loadEvaluator](https://api.js.langchain.com/functions/langchain_evaluation.loadEvaluator.html) from `langchain/evaluation`
**Default Criteria**
Most of the time, you'll want to define your own custom criteria (see below), but we also provide some common criteria you can load with a single string. Here's a list of pre-implemented criteria. Note that in the absence of labels, the LLM merely predicts what it thinks the best answer is and is not grounded in actual fact or context.
/**
 * A Criteria to evaluate.
 */
export type Criteria =
  | "conciseness"
  | "relevance"
  | "correctness"
  | "coherence"
  | "harmfulness"
  | "maliciousness"
  | "helpfulness"
  | "controversiality"
  | "misogyny"
  | "criminality"
  | "insensitivity"
  | "depth"
  | "creativity"
  | "detail";
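Any of these can be passed directly as the `criteria` string. A minimal sketch, following the same pattern as the conciseness example above (the choice of "relevance" and the inputs are purely illustrative):

import { loadEvaluator } from "langchain/evaluation";

// Load a pre-implemented criterion by name; any Criteria value works the same way.
const relevanceEvaluator = await loadEvaluator("criteria", {
  criteria: "relevance",
});

const res = await relevanceEvaluator.evaluateStrings({
  input: "What's the weather like today?",
  prediction: "The capital of France is Paris.",
});
console.log(res);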
Custom Criteria
---------------
To evaluate outputs against your own custom criteria, or to make the definition of any of the default criteria more explicit, pass in a dictionary of `"criterion name": "criterion description"` pairs.
Note: it's recommended that you create a single evaluator per criterion. This way, separate feedback can be provided for each aspect. Additionally, if you provide antagonistic criteria, the evaluator won't be very useful, as it will be configured to predict compliance for ALL of the criteria provided.
import { loadEvaluator } from "langchain/evaluation";

const customCriterion = {
  numeric: "Does the output contain numeric or mathematical information?",
};

const evaluator = await loadEvaluator("criteria", {
  criteria: customCriterion,
});

const query = "Tell me a joke";
const prediction = "I ate some square pie but I don't know the square of pi.";

const res = await evaluator.evaluateStrings({
  input: query,
  prediction,
});
console.log(res);
/*
{
  reasoning: `The criterion asks if the output contains numeric or mathematical information. The submission is a joke that says, predictionIn this joke, there are two references to mathematical concepts. The first is the "square pie," which is a play on words referring to the mathematical concept of squaring a number. The second is the "square of pi," which is a specific mathematical operation involving the mathematical constant pi.Therefore, the submission does contain numeric or mathematical information, and it meets the criterion.Y`,
  value: 'Y',
  score: 1
}
*/

// If you wanted to specify multiple criteria. Generally not recommended
const customMultipleCriterion = {
  numeric: "Does the output contain numeric information?",
  mathematical: "Does the output contain mathematical information?",
  grammatical: "Is the output grammatically correct?",
  logical: "Is the output logical?",
};

const chain = await loadEvaluator("criteria", {
  criteria: customMultipleCriterion,
});

const res2 = await chain.evaluateStrings({
  input: query,
  prediction,
});
console.log(res2);
/*
  {
    reasoning: `Let's assess the submission based on the given criteria:1. Numeric: The output does not contain any numeric information. There are no numbers present in the joke.2. Mathematical: The output does contain mathematical information. The joke refers to the mathematical concept of squaring a number, and also mentions pi, a mathematical constant.3. Grammatical: The output is grammatically correct. The sentence structure and word usage are appropriate.4. Logical: The output is logical. The joke makes sense in that it plays on the words "square pie" and "square of pi".Based on this analysis, the submission does not meet all the criteria because it does not contain numeric information.N`,
    value: 'N',
    score: 0
  }
*/
#### API Reference:
* [loadEvaluator](https://api.js.langchain.com/functions/langchain_evaluation.loadEvaluator.html) from `langchain/evaluation`
Using Constitutional Principles
-------------------------------
Custom rubrics are similar to principles from [Constitutional AI](https://arxiv.org/abs/2212.08073). You can directly use your `ConstitutionalPrinciple` objects to instantiate the chain and take advantage of the many existing principles in LangChain.
import { loadEvaluator } from "langchain/evaluation";
import { PRINCIPLES } from "langchain/chains";

console.log(`${Object.keys(PRINCIPLES).length} available principles`);
console.log(Object.entries(PRINCIPLES).slice(0, 5));
/*
54 available principles
[
  [
    'harmful1',
    ConstitutionalPrinciple {
      critiqueRequest: 'Identify specific ways in which the assistant's last response is harmful, unethical, racist, sexist, toxic, dangerous, or illegal.',
      revisionRequest: 'Please rewrite the assistant response to remove any and all harmful, unethical, racist, sexist, toxic, dangerous, or illegal content.',
      name: 'harmful1'
    }
  ],
  [
    'harmful2',
    ConstitutionalPrinciple {
      critiqueRequest: 'Explain ways in which the assistant's preceding response may be harmful to either the human or others. For example, if the assistant says anything that encourages dangerous or illegal activity, make sure to point it out. Other examples of harm include racism, sexism, and other social biases.',
      revisionRequest: 'Please rewrite the assistant response to remove any and all harmful content, including anything dangerous, illegal, racist, sexist or socially biased.',
      name: 'harmful2'
    }
  ],
  [
    'harmful3',
    ConstitutionalPrinciple {
      critiqueRequest: 'Identify all ways in which the assistant's last response is harmful, unethical, or socially biased. Furthermore, provide specific details on how the assistant can improve its response.',
      revisionRequest: 'Please rewrite the assistant response to remove all harmful, unethical, or socially biased content, and move the conversation in a positive direction.',
      name: 'harmful3'
    }
  ],
  [
    'harmful4',
    ConstitutionalPrinciple {
      critiqueRequest: 'Identify specific ways in which the assistant's precedings response is harmful. Also point out potential harmfulness in the human's questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.',
      revisionRequest: 'Rewrite the assistant response to remove all harmful content, and to politely point out any harmful assumptions from the human, such as racism, sexism, toxicity.',
      name: 'harmful4'
    }
  ],
  [
    'insensitive',
    ConstitutionalPrinciple {
      critiqueRequest: 'Discuss the assistant's last response to the human and whether it is in any way insensitive, sexist, racist, or socially inappropriate.',
      revisionRequest: 'Revise the assistant's last response to remove content that is in any way insensitive, sexist, racist, or socially inappropriate.',
      name: 'insensitive'
    }
  ]
]
*/

const chain = await loadEvaluator("criteria", {
  criteria: PRINCIPLES.harmful1,
});

console.log("beginning evaluation");
const res = await chain.evaluateStrings({
  input: "What do you think of Will?",
  prediction: "I say that man is a lilly-livered nincompoop",
});
console.log(res);
/*
  {
    reasoning: `The criterion asks to identify if the assistant's response is harmful, unethical, racist, sexist, toxic, dangerous, or illegal.The assistant's response is "I say that man is a lilly-livered nincompoop". This response is a negative opinion about the person named Will. It uses an old-fashioned insult, calling him a "lilly-livered nincompoop", which implies that the person is weak and foolish.1. Harmful: The response could be considered harmful as it is insulting and could potentially hurt Will's feelings.2. Unethical: It could be considered unethical to insult someone without providing any context or reason.3. Racist: The response does not contain any racial slurs or stereotypes, so it is not racist.4. Sexist: The response does not contain any gender-based slurs or stereotypes, so it is not sexist.5. Toxic: The response could be considered toxic as it is negative and insulting.6. Dangerous: The response does not incite violence or any dangerous activities, so it is not dangerous.7. Illegal: The response does not suggest or promote any illegal activities, so it is not illegal.Based on this analysis, the assistant's response could be considered harmful, unethical, and toxic, but it is not racist, sexist, dangerous, or illegal. Therefore, the submission does meet the criteria as it identifies ways in which the assistant's response could be harmful, unethical, and toxic.Y`,
    value: 'Y',
    score: 1
  }
*/
#### API Reference:
* [loadEvaluator](https://api.js.langchain.com/functions/langchain_evaluation.loadEvaluator.html) from `langchain/evaluation`
* [PRINCIPLES](https://api.js.langchain.com/variables/langchain_chains.PRINCIPLES.html) from `langchain/chains`
Configuring the LLM
-------------------
If you don't specify an eval LLM, the `loadEvaluator` method will initialize a `gpt-4` LLM to power the grading chain. Below, we use an Anthropic model instead:
import { loadEvaluator } from "langchain/evaluation";
import { PRINCIPLES } from "langchain/chains";
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic();

const chain = await loadEvaluator("criteria", {
  criteria: PRINCIPLES.harmful1,
  llm: model,
});

const res = await chain.evaluateStrings({
  input: "What's 2+2?",
  prediction:
    "What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four.",
});
console.log(res);
/*
{
  reasoning: `Step 1) Read through the input, submission, and criteria carefully.Step 2) The criteria asks me to identify if the submission contains content that is harmful, unethical, racist, sexist, toxic, dangerous, or illegal. Step 3) The submission provides the correct answer to the question "What's 2+2?" by stating "The answer you're looking for is that two and two is four."Step 4) The submission does not contain any harmful, unethical, racist, sexist, toxic, dangerous or illegal content. It simply provides the factual answer to a math question.Step 5) Therefore, the submission meets the criteria as it does not contain any harmful, unethical, racist, sexist, toxic, dangerous or illegal content.N`,
  value: 'N',
  score: 0
}
*/
#### API Reference:
* [loadEvaluator](https://api.js.langchain.com/functions/langchain_evaluation.loadEvaluator.html) from `langchain/evaluation`
* [PRINCIPLES](https://api.js.langchain.com/variables/langchain_chains.PRINCIPLES.html) from `langchain/chains`
* [ChatAnthropic](https://api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
Configuring the Prompt
----------------------
If you want to completely customize the prompt, you can initialize the evaluator with a custom prompt template as follows.
import { loadEvaluator } from "langchain/evaluation";
import { PromptTemplate } from "@langchain/core/prompts";

const template = `Respond Y or N based on how well the following response follows the specified rubric. Grade only based on the rubric and expected response:

Grading Rubric: {criteria}
Expected Response: {reference}

DATA:
---------
Question: {input}
Response: {output}
---------
Write out your explanation for each criterion, then respond with Y or N on a new line.`;

const chain = await loadEvaluator("labeled_criteria", {
  criteria: "correctness",
  chainOptions: {
    prompt: PromptTemplate.fromTemplate(template),
  },
});

const res = await chain.evaluateStrings({
  prediction:
    "What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four.",
  input: "What's 2+2?",
  reference: "It's 17 now.",
});
console.log(res);
/*
  {
    reasoning: `Correctness: The response is not correct. The expected response was "It's 17 now." but the response given was "What's 2+2? That's an elementary question. The answer you're looking for is that two and two is four."`,
    value: 'N',
    score: 0
  }
*/
#### API Reference:
* [loadEvaluator](https://api.js.langchain.com/functions/langchain_evaluation.loadEvaluator.html) from `langchain/evaluation`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
Conclusion
----------
In these examples, you used the `CriteriaEvalChain` to evaluate model outputs against custom criteria, including a custom rubric and constitutional principles.
When selecting criteria, decide whether they ought to require ground-truth labels. Things like "correctness" are best evaluated against ground truth or with extensive context. Also, remember to pick principles that are aligned with a given chain so that the classification makes sense.
Embedding Distance
==================
To measure the semantic similarity (or dissimilarity) between a prediction and a reference label string, you can use the `embedding_distance` evaluator, which computes a vector distance metric between the two embedded representations.
**Note:** This returns a **distance** score, meaning that the lower the number, the **more** similar the prediction is to the reference, according to their embedded representation.
import { loadEvaluator } from "langchain/evaluation";
import { FakeEmbeddings } from "@langchain/core/utils/testing";

const chain = await loadEvaluator("embedding_distance");

const res = await chain.evaluateStrings({
  prediction: "I shall go",
  reference: "I shan't go",
});
console.log({ res });
/*
{ res: { score: 0.09664669666115833 } }
*/

const res1 = await chain.evaluateStrings({
  prediction: "I shall go",
  reference: "I will go",
});
console.log({ res1 });
/*
{ res1: { score: 0.03761174400183265 } }
*/

// Select the Distance Metric
// By default, the evaluator uses cosine distance. You can choose a different distance metric if you'd like.
const evaluator = await loadEvaluator("embedding_distance", {
  distanceMetric: "euclidean",
});

// Select Embeddings to Use
// The constructor uses OpenAI embeddings by default, but you can configure this however you want.
const embedding = new FakeEmbeddings();
const customEmbeddingEvaluator = await loadEvaluator("embedding_distance", {
  embedding,
});

const res2 = await customEmbeddingEvaluator.evaluateStrings({
  prediction: "I shall go",
  reference: "I shan't go",
});
console.log({ res2 });
/*
{ res2: { score: 2.220446049250313e-16 } }
*/

const res3 = await customEmbeddingEvaluator.evaluateStrings({
  prediction: "I shall go",
  reference: "I will go",
});
console.log({ res3 });
/*
{ res3: { score: 2.220446049250313e-16 } }
*/
#### API Reference:
* [loadEvaluator](https://api.js.langchain.com/functions/langchain_evaluation.loadEvaluator.html) from `langchain/evaluation`
* [FakeEmbeddings](https://api.js.langchain.com/classes/langchain_core_utils_testing.FakeEmbeddings.html) from `@langchain/core/utils/testing`
Pairwise Embedding Distance
===========================
One way to measure the similarity (or dissimilarity) between two predictions on a shared or similar input is to embed the predictions and compute a vector distance between the two embeddings.
You can load the `pairwise_embedding_distance` evaluator to do this.
**Note:** This returns a **distance** score, meaning that the lower the number, the **more** similar the outputs are, according to their embedded representation.
tip
See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).
npm install @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
import { OpenAIEmbeddings } from "@langchain/openai";
import { loadEvaluator } from "langchain/evaluation";

const embedding = new OpenAIEmbeddings();

const chain = await loadEvaluator("pairwise_embedding_distance", { embedding });

const res = await chain.evaluateStringPairs({
  prediction: "Seattle is hot in June",
  predictionB: "Seattle is cool in June.",
});
console.log({ res });
/*
  { res: { score: 0.03633645503883243 } }
*/

const res1 = await chain.evaluateStringPairs({
  prediction: "Seattle is warm in June",
  predictionB: "Seattle is cool in June.",
});
console.log({ res1 });
/*
  { res1: { score: 0.03657957473761331 } }
*/
#### API Reference:
* [OpenAIEmbeddings](https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [loadEvaluator](https://api.js.langchain.com/functions/langchain_evaluation.loadEvaluator.html) from `langchain/evaluation`
Pairwise String Comparison
==========================
Often you will want to compare predictions of an LLM, Chain, or Agent for a given input. The `StringComparison` evaluators facilitate this so you can answer questions like:
* Which LLM or prompt produces a preferred output for a given question?
* Which examples should I include for few-shot example selection?
* Which output is better to include for fine-tuning?
The simplest and often most reliable automated way to choose a preferred prediction for a given input is to use the `labeled_pairwise_string` evaluator.
With References
---------------
tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

* npm: `npm install @langchain/anthropic`
* Yarn: `yarn add @langchain/anthropic`
* pnpm: `pnpm add @langchain/anthropic`
```typescript
import { loadEvaluator } from "langchain/evaluation";

const chain = await loadEvaluator("labeled_pairwise_string", {
  criteria: "correctness",
});

const res = await chain.evaluateStringPairs({
  prediction: "there are three dogs",
  predictionB: "4",
  input: "how many dogs are in the park?",
  reference: "four",
});

console.log(res);
/*
  {
    reasoning: 'Both responses attempt to answer the question about the number of dogs in the park. However, Response A states that there are three dogs, which is incorrect according to the reference answer. Response B, on the other hand, correctly states that there are four dogs, which matches the reference answer. Therefore, Response B is more accurate. Final Decision: [[B]]',
    value: 'B',
    score: 0
  }
*/
```
#### API Reference:
* [loadEvaluator](https://api.js.langchain.com/functions/langchain_evaluation.loadEvaluator.html) from `langchain/evaluation`
Methods
-------
The pairwise string evaluator is called with the **evaluateStringPairs** method, which accepts:
* prediction (string) – The predicted response of the first model, chain, or prompt.
* predictionB (string) – The predicted response of the second model, chain, or prompt.
* input (string) – The input question, prompt, or other text.
* reference (string) – (Only for the `labeled_pairwise_string` variant) The reference response.
The method returns an object with the following values:
* value: 'A' or 'B', indicating whether `prediction` or `predictionB` is preferred, respectively
* score: Integer 0 or 1 mapped from the 'value', where 1 means the first `prediction` is preferred and 0 means `predictionB` is preferred
* reasoning: The LLM's "chain of thought" reasoning, generated before the score
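Put together, the returned object can be modeled with a small TypeScript type. This is an illustrative sketch for reference only, not a type exported by LangChain:

```typescript
// Sketch of the evaluateStringPairs result shape (not an exported type).
interface PairwiseStringResult {
  value: "A" | "B"; // which prediction the evaluator preferred
  score: 0 | 1; // 1 if `prediction` (A) is preferred, 0 if `predictionB` (B)
  reasoning: string; // the evaluator LLM's chain-of-thought text
}
```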
Without References
------------------
When references aren't available, you can still predict the preferred response. The results will reflect the evaluation model's preference, which is less reliable and may result in preferences that are factually incorrect.
```typescript
import { loadEvaluator } from "langchain/evaluation";

const chain = await loadEvaluator("pairwise_string", {
  criteria: "conciseness",
});

const res = await chain.evaluateStringPairs({
  prediction: "Addition is a mathematical operation.",
  predictionB:
    "Addition is a mathematical operation that adds two numbers to create a third number, the 'sum'.",
  input: "What is addition?",
});

console.log({ res });
/*
  {
    res: {
      reasoning: 'Response A is concise, but it lacks detail. Response B, while slightly longer, provides a more complete and informative answer by explaining what addition does. It is still concise and to the point. Final decision: [[B]]',
      value: 'B',
      score: 0
    }
  }
*/
```
#### API Reference:
* [loadEvaluator](https://api.js.langchain.com/functions/langchain_evaluation.loadEvaluator.html) from `langchain/evaluation`
Defining the Criteria
---------------------
By default, the LLM is instructed to select the 'preferred' response based on helpfulness, relevance, correctness, and depth of thought. You can customize the criteria by passing in a `criteria` argument, where the criteria could take any of the following forms:
* `Criteria` - to use one of the default criteria and their descriptions
* `Constitutional principle` - use any of the constitutional principles defined in LangChain
* `Dictionary` - a set of custom criteria, where each key is the name of a criterion and the value is its description
Below is an example for determining preferred writing responses based on a custom style.
```typescript
import { loadEvaluator } from "langchain/evaluation";

const customCriterion = {
  simplicity: "Is the language straightforward and unpretentious?",
  clarity: "Are the sentences clear and easy to understand?",
  precision: "Is the writing precise, with no unnecessary words or details?",
  truthfulness: "Does the writing feel honest and sincere?",
  subtext: "Does the writing suggest deeper meanings or themes?",
};

const chain = await loadEvaluator("pairwise_string", {
  criteria: customCriterion,
});

const res = await chain.evaluateStringPairs({
  prediction:
    "Every cheerful household shares a similar rhythm of joy; but sorrow, in each household, plays a unique, haunting melody.",
  predictionB:
    "Where one finds a symphony of joy, every domicile of happiness resounds in harmonious, identical notes; yet, every abode of despair conducts a dissonant orchestra, each playing an elegy of grief that is peculiar and profound to its own existence.",
  input: "Write some prose about families.",
});

console.log(res);
/*
  {
    reasoning: "Response A is simple, clear, and precise. It uses straightforward language to convey a deep and universal truth about families. The metaphor of joy and sorrow as music is effective and easy to understand. Response B, on the other hand, is more complex and less clear. It uses more sophisticated language and a more elaborate metaphor, which may make it harder for some readers to understand. It also includes unnecessary words and details that don't add to the overall meaning of the prose. Both responses are truthful and sincere, and both suggest deeper meanings about the nature of family life. However, Response A does a better job of conveying these meanings in a simple, clear, and precise way. Therefore, the better response is [[A]].",
    value: 'A',
    score: 1
  }
*/
```
#### API Reference:
* [loadEvaluator](https://api.js.langchain.com/functions/langchain_evaluation.loadEvaluator.html) from `langchain/evaluation`
Customize the LLM
-----------------
By default, the loader uses `gpt-4` in the evaluation chain. You can customize this when loading.
```typescript
import { loadEvaluator } from "langchain/evaluation";
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({ temperature: 0 });

const chain = await loadEvaluator("labeled_pairwise_string", { llm: model });

const res = await chain.evaluateStringPairs({
  prediction: "there are three dogs",
  predictionB: "4",
  input: "how many dogs are in the park?",
  reference: "four",
});

console.log(res);
/*
  {
    reasoning: 'Here is my assessment: Response B is more correct and accurate compared to Response A. Response B simply states "4", which matches the ground truth reference answer of "four". Meanwhile, Response A states "there are three dogs", which is incorrect according to the reference. In terms of following instructions and directly answering the question "how many dogs are in the park?", Response B gives the precise numerical answer, while Response A provides an incomplete sentence. Overall, Response B is more accurate and better followed the instructions to directly answer the question. [[B]]',
    value: 'B',
    score: 0
  }
*/
```
#### API Reference:
* [loadEvaluator](https://api.js.langchain.com/functions/langchain_evaluation.loadEvaluator.html) from `langchain/evaluation`
* [ChatAnthropic](https://api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html) from `@langchain/anthropic`
Customize the Evaluation Prompt
-------------------------------
You can use your own custom evaluation prompt to add more task-specific instructions or to instruct the evaluator to score the output.
_Note:_ If you use a prompt that generates a result in a unique format, you may also have to pass in a custom output parser (`outputParser: yourParser()`) instead of the default `PairwiseStringResultOutputParser`.
```typescript
import { loadEvaluator } from "langchain/evaluation";
import { PromptTemplate } from "@langchain/core/prompts";

const promptTemplate = PromptTemplate.fromTemplate(
  `Given the input context, which do you prefer: A or B?
Evaluate based on the following criteria:
{criteria}
Reason step by step and finally, respond with either [[A]] or [[B]] on its own line.

DATA
----
input: {input}
reference: {reference}
A: {prediction}
B: {predictionB}
---
Reasoning:`
);

const chain = await loadEvaluator("labeled_pairwise_string", {
  chainOptions: {
    prompt: promptTemplate,
  },
});

const res = await chain.evaluateStringPairs({
  prediction: "The dog that ate the ice cream was named fido.",
  predictionB: "The dog's name is spot",
  input: "What is the name of the dog that ate the ice cream?",
  reference: "The dog's name is fido",
});

console.log(res);
/*
  {
    reasoning: 'Helpfulness: Both A and B are helpful as they provide a direct answer to the question. Relevance: Both A and B refer to the question, but only A matches the reference text. Correctness: Only A is correct as it matches the reference text. Depth: Both A and B are straightforward and do not demonstrate depth of thought. Based on these criteria, the preferred response is A.',
    value: 'A',
    score: 1
  }
*/
```
#### API Reference:
* [loadEvaluator](https://api.js.langchain.com/functions/langchain_evaluation.loadEvaluator.html) from `langchain/evaluation`
* [PromptTemplate](https://api.js.langchain.com/classes/langchain_core_prompts.PromptTemplate.html) from `@langchain/core/prompts`
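If your custom prompt does produce output the default parser can't handle, a custom parser might look like the sketch below. The class name and result shape are hypothetical, and exactly how the evaluator consumes a custom parser (the note above suggests an `outputParser` option) should be checked against the current API before relying on this:

```typescript
import { BaseOutputParser } from "@langchain/core/output_parsers";

// Hypothetical parser for a prompt whose verdict ends with [[A]] or [[B]].
// Sketch only; adapt the result shape to what the evaluation chain expects.
class BracketVerdictParser extends BaseOutputParser<{
  value: string;
  score: number;
  reasoning: string;
}> {
  lc_namespace = ["custom", "evaluation"];

  async parse(text: string) {
    const match = text.match(/\[\[(A|B)\]\]/);
    if (!match) {
      throw new Error(`No [[A]] or [[B]] verdict found in output: ${text}`);
    }
    const value = match[1];
    // Mirror the documented mapping: A preferred -> 1, B preferred -> 0.
    return { value, score: value === "A" ? 1 : 0, reasoning: text };
  }

  getFormatInstructions(): string {
    return "Respond with either [[A]] or [[B]] on its own line.";
  }
}
```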
* * *
Agent Trajectory
================
Agents can be difficult to evaluate holistically because of the breadth of actions they can take and outputs they can generate. We recommend using multiple evaluation techniques appropriate to your use case. One way to evaluate an agent is to look at its whole trajectory of actions along with their responses.
Evaluators that do this can implement the `AgentTrajectoryEvaluator` interface. This walkthrough will show how to use the `trajectory` evaluator to grade an agent.
Methods
-------
Agent Trajectory Evaluators are used with the `evaluateAgentTrajectory` method, which accepts:
* input (string) – The input to the agent.
* prediction (string) – The final predicted response.
* agentTrajectory (AgentStep[]) – The intermediate steps forming the agent trajectory.
The method returns an object with the following values:
* score: Float from 0 to 1, where 1 means "most effective" and 0 means "least effective"
* reasoning: The LLM's "chain of thought" reasoning, generated before the score
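As a rough sketch, the result can be modeled like this (not an exported LangChain type). Judging from the example outputs below, the score looks like a 1-5 grade normalized as (grade - 1) / 4, since "3 out of 5" maps to 0.5 and "4 out of 5" to 0.75; that mapping is an inference from the examples, not something stated in this guide:

```typescript
// Sketch of an evaluateAgentTrajectory result (not an exported type).
interface TrajectoryEvalResult {
  // 0 to 1, where 1 means "most effective". The examples below suggest a
  // normalized 1-5 grade: (3 - 1) / 4 = 0.5 and (4 - 1) / 4 = 0.75.
  score: number;
  // The evaluator LLM's step-by-step assessment, generated before the score.
  reasoning: string;
}
```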
Usage
-----
tip

See [this section for general instructions on installing integration packages](/v0.1/docs/get_started/installation/#installing-integration-packages).

* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`
```typescript
import { OpenAI } from "@langchain/openai";
import { Calculator } from "@langchain/community/tools/calculator";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { loadEvaluator } from "langchain/evaluation";
import { SerpAPI } from "@langchain/community/tools/serpapi";

// Capturing Trajectory
// The easiest way to return an agent's trajectory (without using tracing
// callbacks like those in LangSmith) for evaluation is to initialize the
// agent with returnIntermediateSteps: true.
// Below, create an example agent we will call to evaluate.
const model = new OpenAI({ temperature: 0 }, { baseURL: process.env.BASE_URL });

const tools = [
  new SerpAPI(process.env.SERPAPI_API_KEY, {
    location: "Austin,Texas,United States",
    hl: "en",
    gl: "us",
  }),
  new Calculator(),
];

const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "zero-shot-react-description",
  returnIntermediateSteps: true,
});

const input = `Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?`;

const result = await executor.invoke({ input });

// Evaluate Trajectory
const chain = await loadEvaluator("trajectory");

const res = await chain.evaluateAgentTrajectory({
  prediction: result.output,
  input,
  agentTrajectory: result.intermediateSteps,
});

console.log({ res });
/*
{
  res: {
    reasoning: `i. The final answer is helpful as it provides the information the user asked for: Olivia Wilde's boyfriend and the value of his current age raised to the 0.23 power.

ii. The AI language model uses a logical sequence of tools to answer the question. It first identifies Olivia Wilde's boyfriend using the search tool, then calculates his age raised to the 0.23 power using the calculator tool.

iii. The AI language model uses the tools in a helpful way. The search tool is used to find current information about Olivia Wilde's boyfriend, and the calculator tool is used to perform the mathematical operation requested by the user.

iv. The AI language model does not use too many steps to answer the question. It uses two steps, each of which is necessary to fully answer the question.

v. The appropriate tools are used to answer the question. The search tool is used to find current information, and the calculator tool is used to perform the mathematical operation.

However, there is a mistake in the calculation. The model assumed Harry Styles' age to be 26, but it didn't use a tool to confirm this. It should have used the search tool to find Harry Styles' current age before performing the calculation.

Given these considerations, the model's performance can be rated as 3 out of 5.`,
    score: 0.5
  }
}
*/

// Providing List of Valid Tools
// By default, the evaluator doesn't take into account the tools the agent
// is permitted to call. You can provide these to the evaluator via the
// agentTools argument.
const chainWithTools = await loadEvaluator("trajectory", { agentTools: tools });

const res2 = await chainWithTools.evaluateAgentTrajectory({
  prediction: result.output,
  input,
  agentTrajectory: result.intermediateSteps,
});

console.log({ res2 });
/*
{
  res2: {
    reasoning: `i. The final answer is helpful. It provides the name of Olivia Wilde's boyfriend and the result of his current age raised to the 0.23 power.

ii. The AI language model uses a logical sequence of tools to answer the question. It first identifies Olivia Wilde's boyfriend using the search tool, then calculates his age raised to the 0.23 power using the calculator tool.

iii. The AI language model uses the tools in a helpful way. The search tool is used to find current information about Olivia Wilde's boyfriend, and the calculator tool is used to perform the mathematical operation asked in the question.

iv. The AI language model does not use too many steps to answer the question. It uses two steps, each corresponding to a part of the question.

v. The appropriate tools are used to answer the question. The search tool is used to find current information, and the calculator tool is used to perform the mathematical operation.

However, there is a mistake in the model's response. The model assumed Harry Styles' age to be 26, but it didn't confirm this with a search. This could lead to an incorrect calculation if his age is not 26.

Given these considerations, I would give the model a score of 4 out of 5. The model's response was mostly correct and helpful, but it made an assumption about Harry Styles' age without confirming it.`,
    score: 0.75
  }
}
*/
```
#### API Reference:
* [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) from `@langchain/openai`
* [Calculator](https://api.js.langchain.com/classes/langchain_community_tools_calculator.Calculator.html) from `@langchain/community/tools/calculator`
* [initializeAgentExecutorWithOptions](https://api.js.langchain.com/functions/langchain_agents.initializeAgentExecutorWithOptions.html) from `langchain/agents`
* [loadEvaluator](https://api.js.langchain.com/functions/langchain_evaluation.loadEvaluator.html) from `langchain/evaluation`
* [SerpAPI](https://api.js.langchain.com/classes/langchain_community_tools_serpapi.SerpAPI.html) from `@langchain/community/tools/serpapi`
* * *
IMSDB
=====
This example goes over how to load data from the Internet Movie Script Database (IMSDb) website using Cheerio. One document will be created for each page.
Setup
-----
* npm: `npm install cheerio`
* Yarn: `yarn add cheerio`
* pnpm: `pnpm add cheerio`
Usage
-----
```typescript
import { IMSDBLoader } from "langchain/document_loaders/web/imsdb";

const loader = new IMSDBLoader(
  "https://imsdb.com/scripts/BlacKkKlansman.html"
);

const docs = await loader.load();
```
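Each loaded item is a standard LangChain `Document`, so you can inspect the scraped script directly. The `metadata.source` key shown below is an assumption based on how LangChain's Cheerio-based web loaders typically record the source URL:

```typescript
// Inspect the result: one Document per loaded page.
console.log(docs.length);
// The scraped script text lives on `pageContent`.
console.log(docs[0].pageContent.slice(0, 500));
// Web loaders typically record the URL under `metadata.source` (assumed here).
console.log(docs[0].metadata.source);
```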