Unnamed: 0 | link | text
---|---|---|
795 | https://python.langchain.com/docs/integrations/callbacks/labelstudio | ComponentsCallbacksLabel StudioOn this pageLabel StudioLabel Studio is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.In this guide, you will learn how to connect a LangChain pipeline to Label Studio to:Aggregate all input prompts, conversations, and responses in a single LabelStudio project. This consolidates all the data in one place for easier labeling and analysis.Refine prompts and responses to create a dataset for supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) scenarios. The labeled data can be used to further train the LLM to improve its performance.Evaluate model responses through human feedback. LabelStudio provides an interface for humans to review and provide feedback on model responses, allowing evaluation and iteration.Installation and setupFirst install latest versions of Label Studio and Label Studio API client:pip install -U label-studio label-studio-sdk openaiNext, run label-studio on the command line to start the local LabelStudio instance at http://localhost:8080. See the Label Studio installation guide for more options.You'll need a token to make API calls.Open your LabelStudio instance in your browser, go to Account & Settings > Access Token and copy the key.Set environment variables with your LabelStudio URL, API key and OpenAI API key:import osos.environ['LABEL_STUDIO_URL'] = '<YOUR-LABEL-STUDIO-URL>' # e.g. http://localhost:8080os.environ['LABEL_STUDIO_API_KEY'] = '<YOUR-LABEL-STUDIO-API-KEY>'os.environ['OPENAI_API_KEY'] = '<YOUR-OPENAI-API-KEY>'Collecting LLMs prompts and responsesThe data used for labeling is stored in projects within Label Studio. 
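The environment-variable setup described above, collected into a runnable snippet for readability (the URL and key values are placeholders you must replace with your own):

```python
import os

# Placeholder values -- replace with your own Label Studio URL and keys.
os.environ["LABEL_STUDIO_URL"] = "http://localhost:8080"
os.environ["LABEL_STUDIO_API_KEY"] = "<YOUR-LABEL-STUDIO-API-KEY>"
os.environ["OPENAI_API_KEY"] = "<YOUR-OPENAI-API-KEY>"
```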
Every project is identified by an XML configuration that details the specifications for input and output data. Create a project that takes human input in text format and outputs an editable LLM response in a text area:<View><Style> .prompt-box { background-color: white; border-radius: 10px; box-shadow: 0px 4px 6px rgba(0, 0, 0, 0.1); padding: 20px; }</Style><View className="root"> <View className="prompt-box"> <Text name="prompt" value="$prompt"/> </View> <TextArea name="response" toName="prompt" maxSubmissions="1" editable="true" required="true"/></View><Header value="Rate the response:"/><Rating name="rating" toName="prompt"/></View>To create a project in Label Studio, click on the "Create" button. Enter a name for your project in the "Project Name" field, such as My Project.Navigate to Labeling Setup > Custom Template and paste the XML configuration provided above.You can collect input LLM prompts and output responses in a LabelStudio project, connecting it via LabelStudioCallbackHandler:from langchain.llms import OpenAIfrom langchain.callbacks import LabelStudioCallbackHandlerllm = OpenAI( temperature=0, callbacks=[ LabelStudioCallbackHandler( project_name="My Project" )])print(llm("Tell me a joke"))In the Label Studio, open My Project. You will see the prompts, responses, and metadata like the model name. 
Collecting Chat model DialoguesYou can also track and display full chat dialogues in LabelStudio, with the ability to rate and modify the last response:Open Label Studio and click on the "Create" button.Enter a name for your project in the "Project Name" field, such as New Project with Chat.Navigate to Labeling Setup > Custom Template and paste the following XML configuration:<View><View className="root"> <Paragraphs name="dialogue" value="$prompt" layout="dialogue" textKey="content" nameKey="role" granularity="sentence"/> <Header value="Final response:"/> <TextArea name="response" toName="dialogue" maxSubmissions="1" editable="true" required="true"/></View><Header value="Rate the response:"/><Rating name="rating" toName="dialogue"/></View>from langchain.chat_models import ChatOpenAIfrom langchain.schema import HumanMessage, SystemMessagefrom langchain.callbacks import LabelStudioCallbackHandlerchat_llm = ChatOpenAI(callbacks=[ LabelStudioCallbackHandler( mode="chat", project_name="New Project with Chat", )])llm_results = chat_llm([ SystemMessage(content="Always use a lot of emojis"), HumanMessage(content="Tell me a joke")])In Label Studio, open "New Project with Chat". Click on a created task to view dialog history and edit/annotate responses.Custom Labeling ConfigurationYou can modify the default labeling configuration in LabelStudio to add more target labels like response sentiment, relevance, and many other types annotator's feedback.New labeling configuration can be added from UI: go to Settings > Labeling Interface and set up a custom configuration with additional tags like Choices for sentiment or Rating for relevance. 
Keep in mind that TextArea tag should be presented in any configuration to display the LLM responses.Alternatively, you can specify the labeling configuration on the initial call before project creation:ls = LabelStudioCallbackHandler(project_config='''<View><Text name="prompt" value="$prompt"/><TextArea name="response" toName="prompt"/><TextArea name="user_feedback" toName="prompt"/><Rating name="rating" toName="prompt"/><Choices name="sentiment" toName="prompt"> <Choice value="Positive"/> <Choice value="Negative"/></Choices></View>''')Note that if the project doesn't exist, it will be created with the specified labeling configuration.Other parametersThe LabelStudioCallbackHandler accepts several optional parameters:api_key - Label Studio API key. Overrides environmental variable LABEL_STUDIO_API_KEY.url - Label Studio URL. Overrides LABEL_STUDIO_URL, default http://localhost:8080.project_id - Existing Label Studio project ID. Overrides LABEL_STUDIO_PROJECT_ID. Stores data in this project.project_name - Project name if project ID not specified. Creates a new project. Default is "LangChain-%Y-%m-%d" formatted with the current date.project_config - custom labeling configurationmode: use this shortcut to create target configuration from scratch:"prompt" - Single prompt, single response. Default."chat" - Multi-turn chat mode.PreviousInfinoNextLLMonitorInstallation and setupCollecting LLMs prompts and responsesCollecting Chat model DialoguesCustom Labeling ConfigurationOther parameters |
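Since the default project_name is "LangChain-%Y-%m-%d" formatted with the current date, you can reproduce the name the handler would pick by default. A small sketch using only the standard library:

```python
from datetime import datetime

# Reproduce the handler's documented default project name,
# "LangChain-%Y-%m-%d" formatted with today's date.
default_project_name = datetime.now().strftime("LangChain-%Y-%m-%d")
print(default_project_name)
```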
796 | https://python.langchain.com/docs/integrations/callbacks/llmonitor | ComponentsCallbacksLLMonitorOn this pageLLMonitorLLMonitor is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.SetupCreate an account on llmonitor.com, then copy your new app's tracking id.Once you have it, set it as an environment variable by running:export LLMONITOR_APP_ID="..."If you'd prefer not to set an environment variable, you can pass the key directly when initializing the callback handler:from langchain.callbacks import LLMonitorCallbackHandlerhandler = LLMonitorCallbackHandler(app_id="...")Usage with LLM/Chat modelsfrom langchain.llms import OpenAIfrom langchain.chat_models import ChatOpenAIfrom langchain.callbacks import LLMonitorCallbackHandlerhandler = LLMonitorCallbackHandler()llm = OpenAI( callbacks=[handler],)chat = ChatOpenAI(callbacks=[handler])llm("Tell me a joke")Usage with chains and agentsMake sure to pass the callback handler to the run method so that all related chains and llm calls are correctly tracked.It is also recommended to pass agent_name in the metadata to be able to distinguish between agents in the dashboard.Example:from langchain.chat_models import ChatOpenAIfrom langchain.schema import SystemMessage, HumanMessagefrom langchain.agents import OpenAIFunctionsAgent, AgentExecutor, toolfrom langchain.callbacks import LLMonitorCallbackHandlerllm = ChatOpenAI(temperature=0)handler = LLMonitorCallbackHandler()@tooldef get_word_length(word: str) -> int: """Returns the length of a word.""" return len(word)tools = [get_word_length]prompt = OpenAIFunctionsAgent.create_prompt( system_message=SystemMessage( content="You are very powerful assistant, but bad at calculating lengths of words." 
))agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt, verbose=True)agent_executor = AgentExecutor( agent=agent, tools=tools, verbose=True, metadata={"agent_name": "WordCount"} # <- recommended, assign a custom name)agent_executor.run("how many letters in the word educa?", callbacks=[handler])Another example:from langchain.agents import load_tools, initialize_agent, AgentTypefrom langchain.llms import OpenAIfrom langchain.callbacks import LLMonitorCallbackHandlerhandler = LLMonitorCallbackHandler()llm = OpenAI(temperature=0)tools = load_tools(["serpapi", "llm-math"], llm=llm)agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, metadata={ "agent_name": "GirlfriendAgeFinder" }) # <- recommended, assign a custom nameagent.run( "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?", callbacks=[handler],)User TrackingUser tracking allows you to identify your users, track their costs, conversations, and more.from langchain.callbacks.llmonitor_callback import LLMonitorCallbackHandler, identifywith identify("user-123"): llm("Tell me a joke")with identify("user-456", user_props={"email": "user456@test.com"}): agent.run("Who is Leo DiCaprio's girlfriend?")SupportFor any questions or issues with the integration, you can reach out to the LLMonitor team on Discord or via email.PreviousLabel StudioNextPromptLayerSetupUsage with LLM/Chat modelsUsage with chains and agentsUser TrackingSupport |
798 | https://python.langchain.com/docs/integrations/callbacks/promptlayer | ComponentsCallbacksPromptLayerOn this pagePromptLayerPromptLayer is an LLM observability platform that lets you visualize requests, version prompts, and track usage. In this guide we will go over how to set up the PromptLayerCallbackHandler. While PromptLayer does have LLMs that integrate directly with LangChain (e.g. PromptLayerOpenAI), this callback is the recommended way to integrate PromptLayer with LangChain.See our docs for more information.Installation and Setuppip install promptlayer --upgradeGetting API CredentialsIf you do not have a PromptLayer account, create one on promptlayer.com. Then get an API key by clicking on the settings cog in the navbar and
set it as an environment variable called PROMPTLAYER_API_KEYUsageGetting started with PromptLayerCallbackHandler is fairly simple; it takes two optional arguments:pl_tags - an optional list of strings that will be tracked as tags on PromptLayer.pl_id_callback - an optional function that will take promptlayer_request_id as an argument. This ID can be used with all of PromptLayer's tracking features to track metadata, scores, and prompt usage.Simple OpenAI ExampleIn this simple example we use PromptLayerCallbackHandler with ChatOpenAI. We add a PromptLayer tag named chatopenaiimport promptlayer # Don't forget this 🍰from langchain.callbacks import PromptLayerCallbackHandlerfrom langchain.chat_models import ChatOpenAIfrom langchain.schema import ( HumanMessage,)chat_llm = ChatOpenAI( temperature=0, callbacks=[PromptLayerCallbackHandler(pl_tags=["chatopenai"])],)llm_results = chat_llm( [ HumanMessage(content="What comes after 1,2,3 ?"), HumanMessage(content="Tell me another joke?"), ])print(llm_results)GPT4All Exampleimport promptlayer # Don't forget this 🍰from langchain.callbacks import PromptLayerCallbackHandlerfrom langchain.llms import GPT4Allmodel = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)response = model( "Once upon a time, ", callbacks=[PromptLayerCallbackHandler(pl_tags=["langchain", "gpt4all"])],)Full Featured ExampleIn this example we unlock more of the power of PromptLayer.PromptLayer allows you to visually create, version, and track prompt templates. Using the Prompt Registry, we can programmatically fetch the prompt template called example.We also define a pl_id_callback function which takes in the promptlayer_request_id, logs a score and metadata, and links the prompt template used. 
Read more about tracking on our docs.import promptlayer # Don't forget this 🍰from langchain.callbacks import PromptLayerCallbackHandlerfrom langchain.llms import OpenAIdef pl_id_callback(promptlayer_request_id): print("prompt layer id ", promptlayer_request_id) promptlayer.track.score( request_id=promptlayer_request_id, score=100 ) # score is an integer 0-100 promptlayer.track.metadata( request_id=promptlayer_request_id, metadata={"foo": "bar"} ) # metadata is a dictionary of key value pairs that is tracked on PromptLayer promptlayer.track.prompt( request_id=promptlayer_request_id, prompt_name="example", prompt_input_variables={"product": "toasters"}, version=1, ) # link the request to a prompt templateopenai_llm = OpenAI( model_name="text-davinci-002", callbacks=[PromptLayerCallbackHandler(pl_id_callback=pl_id_callback)],)example_prompt = promptlayer.prompts.get("example", version=1, langchain=True)openai_llm(example_prompt.format(product="toasters"))That is all it takes! After setup all your requests will show up on the PromptLayer dashboard.
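Since promptlayer.track.score expects an integer from 0 to 100, it can help to guard the value before sending it. A hypothetical helper — not part of the promptlayer SDK:

```python
def clamp_score(score: float) -> int:
    """Clamp a raw score into PromptLayer's expected 0-100 integer range."""
    return max(0, min(100, int(round(score))))
```

You could then call `promptlayer.track.score(request_id=..., score=clamp_score(raw))` without worrying about out-of-range values.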
This callback also works with any LLM implemented on LangChain.PreviousLLMonitorNextSageMaker TrackingInstallation and SetupGetting API CredentialsUsageSimple OpenAI ExampleGPT4All ExampleFull Featured Example |
798 | https://python.langchain.com/docs/integrations/callbacks/sagemaker_tracking | ComponentsCallbacksSageMaker TrackingOn this pageSageMaker TrackingThis notebook shows how LangChain Callback can be used to log and track prompts and other LLM hyperparameters into SageMaker Experiments. Here, we use different scenarios to showcase the capability:Scenario 1: Single LLM - A case where a single LLM model is used to generate output based on a given prompt.Scenario 2: Sequential Chain - A case where a sequential chain of two LLM models is used.Scenario 3: Agent with Tools (Chain of Thought) - A case where multiple tools (search and math) are used in addition to an LLM.Amazon SageMaker is a fully managed service that is used to quickly and easily build, train and deploy machine learning (ML) models. Amazon SageMaker Experiments is a capability of Amazon SageMaker that lets you organize, track, compare and evaluate ML experiments and model versions.In this notebook, we will create a single experiment to log the prompts from each scenario.Installation and Setuppip install sagemakerpip install openaipip install google-search-resultsFirst, setup the required API keysOpenAI: https://platform.openai.com/account/api-keys (For OpenAI LLM model)Google SERP API: https://serpapi.com/manage-api-key (For Google Search Tool)import os## Add your API keys belowos.environ["OPENAI_API_KEY"] = "<ADD-KEY-HERE>"os.environ["SERPAPI_API_KEY"] = "<ADD-KEY-HERE>"from langchain.llms import OpenAIfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChain, SimpleSequentialChainfrom langchain.agents import initialize_agent, load_toolsfrom langchain.agents import Toolfrom langchain.callbacks import SageMakerCallbackHandlerfrom sagemaker.analytics import ExperimentAnalyticsfrom sagemaker.session import Sessionfrom sagemaker.experiments.run import RunLLM Prompt Tracking#LLM HyperparametersHPARAMS = { "temperature": 0.1, "model_name": "text-davinci-003",}#Bucket used to save 
prompt logs (use `None` for the default bucket, or set another bucket name)BUCKET_NAME = None#Experiment nameEXPERIMENT_NAME = "langchain-sagemaker-tracker"#Create SageMaker Session with the given bucketsession = Session(default_bucket=BUCKET_NAME)Scenario 1 - LLMRUN_NAME = "run-scenario-1"PROMPT_TEMPLATE = "tell me a joke about {topic}"INPUT_VARIABLES = {"topic": "fish"}with Run(experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session) as run: # Create SageMaker Callback sagemaker_callback = SageMakerCallbackHandler(run) # Define LLM model with callback llm = OpenAI(callbacks=[sagemaker_callback], **HPARAMS) # Create prompt template prompt = PromptTemplate.from_template(template=PROMPT_TEMPLATE) # Create LLM Chain chain = LLMChain(llm=llm, prompt=prompt, callbacks=[sagemaker_callback]) # Run chain chain.run(**INPUT_VARIABLES) # Reset the callback sagemaker_callback.flush_tracker()Scenario 2 - Sequential ChainRUN_NAME = "run-scenario-2"PROMPT_TEMPLATE_1 = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.Title: {title}Playwright: This is a synopsis for the above play:"""PROMPT_TEMPLATE_2 = """You are a play critic from the New York Times. 
Given the synopsis of play, it is your job to write a review for that play.Play Synopsis: {synopsis}Review from a New York Times play critic of the above play:"""INPUT_VARIABLES = { "input": "documentary about good video games that push the boundary of game design"}with Run(experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session) as run: # Create SageMaker Callback sagemaker_callback = SageMakerCallbackHandler(run) # Create prompt templates for the chain prompt_template1 = PromptTemplate.from_template(template=PROMPT_TEMPLATE_1) prompt_template2 = PromptTemplate.from_template(template=PROMPT_TEMPLATE_2) # Define LLM model with callback llm = OpenAI(callbacks=[sagemaker_callback], **HPARAMS) # Create chain1 chain1 = LLMChain(llm=llm, prompt=prompt_template1, callbacks=[sagemaker_callback]) # Create chain2 chain2 = LLMChain(llm=llm, prompt=prompt_template2, callbacks=[sagemaker_callback]) # Create Sequential chain overall_chain = SimpleSequentialChain(chains=[chain1, chain2], callbacks=[sagemaker_callback]) # Run overall sequential chain overall_chain.run(**INPUT_VARIABLES) # Reset the callback sagemaker_callback.flush_tracker()Scenario 3 - Agent with ToolsRUN_NAME = "run-scenario-3"PROMPT_TEMPLATE = "Who is the oldest person alive? 
And what is their current age raised to the power of 1.51?"with Run(experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session) as run: # Create SageMaker Callback sagemaker_callback = SageMakerCallbackHandler(run) # Define LLM model with callback llm = OpenAI(callbacks=[sagemaker_callback], **HPARAMS) # Define tools tools = load_tools(["serpapi", "llm-math"], llm=llm, callbacks=[sagemaker_callback]) # Initialize agent with all the tools agent = initialize_agent(tools, llm, agent="zero-shot-react-description", callbacks=[sagemaker_callback]) # Run agent agent.run(input=PROMPT_TEMPLATE) # Reset the callback sagemaker_callback.flush_tracker()Load Log DataOnce the prompts are logged, we can easily load and convert them to Pandas DataFrame as follows.#Loadlogs = ExperimentAnalytics(experiment_name=EXPERIMENT_NAME)#Convert as pandas dataframedf = logs.dataframe(force_refresh=True)print(df.shape)df.head()As can be seen above, there are three runs (rows) in the experiment corresponding to each scenario. Each run logs the prompts and related LLM settings/hyperparameters as json and are saved in s3 bucket. Feel free to load and explore the log data from each json path.PreviousPromptLayerNextStreamlitInstallation and SetupLLM Prompt TrackingScenario 1 - LLMScenario 2 - Sequential ChainScenario 3 - Agent with ToolsLoad Log Data |
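As noted above, each run's prompts and LLM settings are saved as JSON in the S3 bucket, so once downloaded they can be inspected with the standard library. The record shape below is hypothetical (the actual SageMaker Experiments schema may differ):

```python
import json

# Hypothetical example of one logged record; field names are illustrative,
# not the real SageMaker Experiments schema.
raw = ('{"run_name": "run-scenario-1", '
       '"prompts": ["tell me a joke about fish"], '
       '"hparams": {"temperature": 0.1, "model_name": "text-davinci-003"}}')
record = json.loads(raw)
print(record["run_name"], record["hparams"]["temperature"])
```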
799 | https://python.langchain.com/docs/integrations/callbacks/streamlit | ComponentsCallbacksStreamlitOn this pageStreamlitStreamlit is a faster way to build and share data apps.
Streamlit turns data scripts into shareable web apps in minutes. All in pure Python. No front‑end experience required.
See more examples at streamlit.io/generative-ai.In this guide we will demonstrate how to use StreamlitCallbackHandler to display the thoughts and actions of an agent in an
interactive Streamlit app. Try it out with the running app below using the MRKL agent:Installation and Setuppip install langchain streamlitYou can run streamlit hello to load a sample app and validate your install succeeded. See full instructions in Streamlit's
Getting started documentation.Display thoughts and actionsTo create a StreamlitCallbackHandler, you just need to provide a parent container to render the output.from langchain.callbacks import StreamlitCallbackHandlerimport streamlit as stst_callback = StreamlitCallbackHandler(st.container())Additional keyword arguments to customize the display behavior are described in the
API reference.Scenario 1: Using an Agent with ToolsThe primary supported use case today is visualizing the actions of an Agent with Tools (or Agent Executor). You can create an
agent in your Streamlit app and simply pass the StreamlitCallbackHandler to agent.run() in order to visualize the
thoughts and actions live in your app.from langchain.llms import OpenAIfrom langchain.agents import AgentType, initialize_agent, load_toolsfrom langchain.callbacks import StreamlitCallbackHandlerimport streamlit as stllm = OpenAI(temperature=0, streaming=True)tools = load_tools(["ddg-search"])agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)if prompt := st.chat_input(): st.chat_message("user").write(prompt) with st.chat_message("assistant"): st_callback = StreamlitCallbackHandler(st.container()) response = agent.run(prompt, callbacks=[st_callback]) st.write(response)Note: You will need to set OPENAI_API_KEY for the above app code to run successfully.
The easiest way to do this is via Streamlit secrets.toml,
or any other local ENV management tool.Additional scenariosCurrently StreamlitCallbackHandler is geared towards use with a LangChain Agent Executor. Support for additional agent types,
direct use with Chains, etc. will be added in the future.You may also be interested in using
StreamlitChatMessageHistory for LangChain.PreviousSageMaker TrackingNextTrubricsInstallation and SetupDisplay thoughts and actionsScenario 1: Using an Agent with ToolsAdditional scenarios |
800 | https://python.langchain.com/docs/integrations/callbacks/trubrics | ComponentsCallbacksTrubricsOn this pageTrubricsTrubrics is an LLM user analytics platform that lets you collect, analyse and manage user
prompts & feedback on AI models. In this guide we will go over how to set up the TrubricsCallbackHandler. Check out our repo for more information on Trubrics.Installation and Setuppip install trubricsGetting Trubrics CredentialsIf you do not have a Trubrics account, create one here. In this tutorial, we will use the default project that is built upon account creation.Now set your credentials as environment variables:import osos.environ["TRUBRICS_EMAIL"] = "***@***"os.environ["TRUBRICS_PASSWORD"] = "***"UsageThe TrubricsCallbackHandler can receive various optional arguments. See here for kwargs that can be passed to Trubrics prompts.class TrubricsCallbackHandler(BaseCallbackHandler): """ Callback handler for Trubrics. Args: project: a trubrics project, default project is "default" email: a trubrics account email, can equally be set in env variables password: a trubrics account password, can equally be set in env variables **kwargs: all other kwargs are parsed and set to trubrics prompt variables, or added to the `metadata` dict """ExamplesHere are two examples of how to use the TrubricsCallbackHandler with LangChain LLMs or Chat Models. We will use OpenAI models, so set your OPENAI_API_KEY here:os.environ["OPENAI_API_KEY"] = "sk-***"1. 
With an LLMfrom langchain.llms import OpenAIfrom langchain.callbacks import TrubricsCallbackHandlerllm = OpenAI(callbacks=[TrubricsCallbackHandler()]) 2023-09-26 11:30:02.149 | INFO | trubrics.platform.auth:get_trubrics_auth_token:61 - User jeff.kayne@trubrics.com has been authenticated.res = llm.generate(["Tell me a joke", "Write me a poem"]) 2023-09-26 11:30:07.760 | INFO | trubrics.platform:log_prompt:102 - User prompt saved to Trubrics. 2023-09-26 11:30:08.042 | INFO | trubrics.platform:log_prompt:102 - User prompt saved to Trubrics.print("--> GPT's joke: ", res.generations[0][0].text)print()print("--> GPT's poem: ", res.generations[1][0].text) --> GPT's joke: Q: What did the fish say when it hit the wall? A: Dam! --> GPT's poem: A Poem of Reflection I stand here in the night, The stars above me filling my sight. I feel such a deep connection, To the world and all its perfection. A moment of clarity, The calmness in the air so serene. My mind is filled with peace, And I am released. The past and the present, My thoughts create a pleasant sentiment. My heart is full of joy, My soul soars like a toy. I reflect on my life, And the choices I have made. My struggles and my strife, The lessons I have paid. The future is a mystery, But I am ready to take the leap. I am ready to take the lead, And to create my own destiny.2. 
With a chat modelfrom langchain.chat_models import ChatOpenAIfrom langchain.prompts import PromptTemplatefrom langchain.schema import HumanMessage, SystemMessagefrom langchain.callbacks import TrubricsCallbackHandlerchat_llm = ChatOpenAI( callbacks=[ TrubricsCallbackHandler( project="default", tags=["chat model"], user_id="user-id-1234", some_metadata={"hello": [1, 2]} ) ])chat_res = chat_llm( [ SystemMessage(content="Every answer of yours must be about OpenAI."), HumanMessage(content="Tell me a joke"), ]) 2023-09-26 11:30:10.550 | INFO | trubrics.platform:log_prompt:102 - User prompt saved to Trubrics.print(chat_res.content) Why did the OpenAI computer go to the party? Because it wanted to meet its AI friends and have a byte of fun!PreviousStreamlitNextChat loadersInstallation and SetupGetting Trubrics CredentialsUsageExamples1. With an LLM2. With a chat model |
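The TrubricsCallbackHandler docstring says extra kwargs are parsed into Trubrics prompt variables or added to the metadata dict. A minimal sketch of such a split — the set of recognized prompt fields here is illustrative, not the real Trubrics list:

```python
# Illustrative set of recognized prompt fields; the real Trubrics list differs.
KNOWN_FIELDS = {"tags", "user_id", "session_id"}

def split_kwargs(**kwargs):
    # Hypothetical sketch: known kwargs become prompt variables,
    # everything else lands in the metadata dict.
    prompt_vars = {k: v for k, v in kwargs.items() if k in KNOWN_FIELDS}
    metadata = {k: v for k, v in kwargs.items() if k not in KNOWN_FIELDS}
    return prompt_vars, metadata
```

With the handler arguments from the chat-model example above, `tags` and `user_id` would be routed to prompt variables while `some_metadata` falls through to metadata.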
801 | https://python.langchain.com/docs/integrations/chat_loaders | ComponentsChat loadersChat loaders📄️ DiscordThis notebook shows how to create your own chat loader that works on copy-pasted messages (from dms) to a list of LangChain messages.📄️ Facebook MessengerThis notebook shows how to load data from Facebook in a format you can finetune on. The overall steps are:📄️ GMailThis loader goes over how to load data from GMail. There are many ways you could want to load data from GMail. This loader is currently fairly opinionated in how it does so: it first looks for all messages that you have sent. It then looks for messages where you are responding to a previous email. It then fetches that previous email, and creates a training example of that email, followed by your email.📄️ iMessageThis notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages.📄️ Fine-Tuning on LangSmith Chat DatasetsThis notebook demonstrates an easy way to load a LangSmith chat dataset and fine-tune a model on that data.📄️ Fine-Tuning on LangSmith LLM RunsThis notebook demonstrates how to directly load data from LangSmith's LLM runs and fine-tune a model on that data.📄️ SlackThis notebook shows how to use the Slack chat loader. This class helps map exported Slack conversations to LangChain chat messages.📄️ TelegramThis notebook shows how to use the Telegram chat loader. This class helps map exported Telegram conversations to LangChain chat messages.📄️ Twitter (via Apify)This notebook shows how to load chat messages from Twitter to finetune on. We do this by utilizing Apify.📄️ WeChatThere is not yet a straightforward way to export personal WeChat messages. 
However, if you need no more than a few hundred messages for model fine-tuning or few-shot examples, this notebook shows how to create your own chat loader that works on copy-pasted WeChat messages to a list of LangChain messages.📄️ WhatsAppThis notebook shows how to use the WhatsApp chat loader. This class helps map exported WhatsApp conversations to LangChain chat messages.PreviousTrubricsNextDiscord |
802 | https://python.langchain.com/docs/integrations/chat_loaders/discord | ComponentsChat loadersDiscordOn this pageDiscordThis notebook shows how to create your own chat loader that works on copy-pasted messages (from dms) to a list of LangChain messages.The process has four steps:Create the chat .txt file by copying chats from the Discord app and pasting them in a file on your local computerCopy the chat loader definition from below to a local file.Initialize the DiscordChatLoader with the file path pointing to the text file.Call loader.load() (or loader.lazy_load()) to perform the conversion.1. Create message dumpCurrently (2023/08/23) this loader only supports .txt files in the format generated by copying messages in the app to your clipboard and pasting in a file. Below is an example.discord_chats.txttalkingtower — 08/15/2023 11:10 AMLove music! Do you like jazz?reporterbob — 08/15/2023 9:27 PMYes! Jazz is fantastic. Ever heard this one?WebsiteListen to classic jazz track...talkingtower — Yesterday at 5:03 AMIndeed! Great choice. 🎷reporterbob — Yesterday at 5:23 AMThanks! How about some virtual sightseeing?WebsiteVirtual tour of famous landmarks...talkingtower — Today at 2:38 PMSounds fun! Let's explore.reporterbob — Today at 2:56 PMEnjoy the tour! See you around.talkingtower — Today at 3:00 PMThank you! Goodbye! 👋reporterbob — Today at 3:02 PMFarewell! Happy exploring.2. Define chat loaderLangChain does not currently ship a loader for this format, so we define our own:import loggingimport refrom typing import Iterator, Listfrom langchain.schema import BaseMessage, HumanMessagefrom langchain.chat_loaders import base as chat_loaderslogger = logging.getLogger()class DiscordChatLoader(chat_loaders.BaseChatLoader): def __init__(self, path: str): """ Initialize the Discord chat loader. Args: path: Path to the exported Discord chat text file. """ self.path = path self._message_line_regex = re.compile( r"(.+?) — (\w{3,9} \d{1,2}(?:st|nd|rd|th)?(?:, \d{4})? 
\d{1,2}:\d{2} (?:AM|PM)|Today at \d{1,2}:\d{2} (?:AM|PM)|Yesterday at \d{1,2}:\d{2} (?:AM|PM))", # noqa flags=re.DOTALL, ) def _load_single_chat_session_from_txt( self, file_path: str ) -> chat_loaders.ChatSession: """ Load a single chat session from a text file. Args: file_path: Path to the text file containing the chat messages. Returns: A `ChatSession` object containing the loaded chat messages. """ with open(file_path, "r", encoding="utf-8") as file: lines = file.readlines() results: List[BaseMessage] = [] current_sender = None current_timestamp = None current_content = [] for line in lines: if re.match( r".+? — (\d{2}/\d{2}/\d{4} \d{1,2}:\d{2} (?:AM|PM)|Today at \d{1,2}:\d{2} (?:AM|PM)|Yesterday at \d{1,2}:\d{2} (?:AM|PM))", # noqa line, ): if current_sender and current_content: results.append( HumanMessage( content="".join(current_content).strip(), additional_kwargs={ "sender": current_sender, "events": [{"message_time": current_timestamp}], }, ) ) current_sender, current_timestamp = line.split(" — ")[:2] current_content = [ line[len(current_sender) + len(current_timestamp) + 4 :].strip() ] elif re.match(r"\[\d{1,2}:\d{2} (?:AM|PM)\]", line.strip()): results.append( HumanMessage( content="".join(current_content).strip(), additional_kwargs={ "sender": current_sender, "events": [{"message_time": current_timestamp}], }, ) ) current_timestamp = line.strip()[1:-1] current_content = [] else: current_content.append("\n" + line.strip()) if current_sender and current_content: results.append( HumanMessage( content="".join(current_content).strip(), additional_kwargs={ "sender": current_sender, "events": [{"message_time": current_timestamp}], }, ) ) return chat_loaders.ChatSession(messages=results) def lazy_load(self) -> Iterator[chat_loaders.ChatSession]: """ Lazy load the messages from the chat file and yield them in the required format. Yields: A `ChatSession` object containing the loaded chat messages. """ yield self._load_single_chat_session_from_txt(self.path)2. 
Create loaderWe will point to the file we just wrote to disk.loader = DiscordChatLoader( path="./discord_chats.txt",)3. Load MessagesAssuming the format is correct, the loader will convert the chats to langchain messages.from typing import Listfrom langchain.chat_loaders.base import ChatSessionfrom langchain.chat_loaders.utils import ( map_ai_messages, merge_chat_runs,)raw_messages = loader.lazy_load()# Merge consecutive messages from the same sender into a single messagemerged_messages = merge_chat_runs(raw_messages)# Convert messages from "talkingtower" to AI messagesmessages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="talkingtower"))messages [{'messages': [AIMessage(content='Love music! Do you like jazz?', additional_kwargs={'sender': 'talkingtower', 'events': [{'message_time': '08/15/2023 11:10 AM\n'}]}, example=False), HumanMessage(content='Yes! Jazz is fantastic. Ever heard this one?\nWebsite\nListen to classic jazz track...', additional_kwargs={'sender': 'reporterbob', 'events': [{'message_time': '08/15/2023 9:27 PM\n'}]}, example=False), AIMessage(content='Indeed! Great choice. 🎷', additional_kwargs={'sender': 'talkingtower', 'events': [{'message_time': 'Yesterday at 5:03 AM\n'}]}, example=False), HumanMessage(content='Thanks! How about some virtual sightseeing?\nWebsite\nVirtual tour of famous landmarks...', additional_kwargs={'sender': 'reporterbob', 'events': [{'message_time': 'Yesterday at 5:23 AM\n'}]}, example=False), AIMessage(content="Sounds fun! Let's explore.", additional_kwargs={'sender': 'talkingtower', 'events': [{'message_time': 'Today at 2:38 PM\n'}]}, example=False), HumanMessage(content='Enjoy the tour! See you around.', additional_kwargs={'sender': 'reporterbob', 'events': [{'message_time': 'Today at 2:56 PM\n'}]}, example=False), AIMessage(content='Thank you! Goodbye! 👋', additional_kwargs={'sender': 'talkingtower', 'events': [{'message_time': 'Today at 3:00 PM\n'}]}, example=False), HumanMessage(content='Farewell! 
Happy exploring.', additional_kwargs={'sender': 'reporterbob', 'events': [{'message_time': 'Today at 3:02 PM\n'}]}, example=False)]}]Next StepsYou can then use these messages how you see fit, such as finetuning a model, few-shot example selection, or directly make predictions for the next message from langchain.chat_models import ChatOpenAIllm = ChatOpenAI()for chunk in llm.stream(messages[0]['messages']): print(chunk.content, end="", flush=True) Thank you! Have a wonderful day! 🌟PreviousChat loadersNextFacebook Messenger1. Creat message dump2. Define chat loader2. Create loader3. Load MessagesNext Steps |
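The merge_chat_runs step used above — collapsing consecutive messages from the same sender into one message — can be sketched as a standalone function. This is an illustrative version operating on plain dicts, not LangChain's actual implementation:

```python
from typing import Dict, List


def merge_runs(messages: List[Dict]) -> List[Dict]:
    """Combine consecutive messages from the same sender into one message,
    joining their contents with newlines. The input list is not mutated."""
    merged: List[Dict] = []
    for msg in messages:
        if merged and merged[-1]["sender"] == msg["sender"]:
            merged[-1]["content"] += "\n" + msg["content"]
        else:
            merged.append(dict(msg))  # copy so the caller's dicts stay intact
    return merged


chat = [
    {"sender": "talkingtower", "content": "Love music!"},
    {"sender": "talkingtower", "content": "Do you like jazz?"},
    {"sender": "reporterbob", "content": "Yes! Jazz is fantastic."},
]
merged = merge_runs(chat)  # talkingtower's two messages collapse into one
```

Merging runs this way gives the alternating human/AI turn structure that chat fine-tuning formats expect.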
803 | https://python.langchain.com/docs/integrations/chat_loaders/facebook | ComponentsChat loadersFacebook MessengerOn this pageFacebook MessengerThis notebook shows how to load data from Facebook in a format you can finetune on. The overall steps are:Download your messenger data to disk.Create the Chat Loader and call loader.load() (or loader.lazy_load()) to perform the conversion.Optionally use merge_chat_runs to combine message from the same sender in sequence, and/or map_ai_messages to convert messages from the specified sender to the "AIMessage" class. Once you've done this, call convert_messages_for_finetuning to prepare your data for fine-tuning.Once this has been done, you can fine-tune your model. To do so you would complete the following steps:Upload your messages to OpenAI and run a fine-tuning job.Use the resulting model in your LangChain app!Let's begin.1. Download DataTo download your own messenger data, following instructions here. IMPORTANT - make sure to download them in JSON format (not HTML).We are hosting an example dump at this google drive link that we will use in this walkthrough.# This uses some example dataimport requestsimport zipfiledef download_and_unzip(url: str, output_path: str = 'file.zip') -> None: file_id = url.split('/')[-2] download_url = f'https://drive.google.com/uc?export=download&id={file_id}' response = requests.get(download_url) if response.status_code != 200: print('Failed to download the file.') return with open(output_path, 'wb') as file: file.write(response.content) print(f'File {output_path} downloaded.') with zipfile.ZipFile(output_path, 'r') as zip_ref: zip_ref.extractall() print(f'File {output_path} has been unzipped.')# URL of the file to downloadurl = 'https://drive.google.com/file/d/1rh1s1o2i7B-Sk1v9o8KNgivLVGwJ-osV/view?usp=sharing'# Download and unzipdownload_and_unzip(url) File file.zip downloaded. File file.zip has been unzipped.2. 
Create Chat LoaderWe have 2 different FacebookMessengerChatLoader classes, one for an entire directory of chats, and one to load individual files. Wedirectory_path = "./hogwarts"from langchain.chat_loaders.facebook_messenger import ( SingleFileFacebookMessengerChatLoader, FolderFacebookMessengerChatLoader,)loader = SingleFileFacebookMessengerChatLoader( path="./hogwarts/inbox/HermioneGranger/messages_Hermione_Granger.json",)chat_session = loader.load()[0]chat_session["messages"][:3] [HumanMessage(content="Hi Hermione! How's your summer going so far?", additional_kwargs={'sender': 'Harry Potter'}, example=False), HumanMessage(content="Harry! Lovely to hear from you. My summer is going well, though I do miss everyone. I'm spending most of my time going through my books and researching fascinating new topics. How about you?", additional_kwargs={'sender': 'Hermione Granger'}, example=False), HumanMessage(content="I miss you all too. The Dursleys are being their usual unpleasant selves but I'm getting by. At least I can practice some spells in my room without them knowing. Let me know if you find anything good in your researching!", additional_kwargs={'sender': 'Harry Potter'}, example=False)]loader = FolderFacebookMessengerChatLoader( path="./hogwarts",)chat_sessions = loader.load()len(chat_sessions) 93. Prepare for fine-tuningCalling load() returns all the chat messages we could extract as human messages. When conversing with chat bots, conversations typically follow a more strict alternating dialogue pattern relative to real conversations. You can choose to merge message "runs" (consecutive messages from the same sender) and select a sender to represent the "AI". 
The fine-tuned LLM will learn to generate these AI messages.from langchain.chat_loaders.utils import ( merge_chat_runs, map_ai_messages,)merged_sessions = merge_chat_runs(chat_sessions)alternating_sessions = list(map_ai_messages(merged_sessions, "Harry Potter"))# Now all of Harry Potter's messages will take the AI message class# which maps to the 'assistant' role in OpenAI's training formatalternating_sessions[0]['messages'][:3] [AIMessage(content="Professor Snape, I was hoping I could speak with you for a moment about something that's been concerning me lately.", additional_kwargs={'sender': 'Harry Potter'}, example=False), HumanMessage(content="What is it, Potter? I'm quite busy at the moment.", additional_kwargs={'sender': 'Severus Snape'}, example=False), AIMessage(content="I apologize for the interruption, sir. I'll be brief. I've noticed some strange activity around the school grounds at night. I saw a cloaked figure lurking near the Forbidden Forest last night. I'm worried someone may be plotting something sinister.", additional_kwargs={'sender': 'Harry Potter'}, example=False)]Now we can convert to OpenAI format dictionariesfrom langchain.adapters.openai import convert_messages_for_finetuningtraining_data = convert_messages_for_finetuning(alternating_sessions)print(f"Prepared {len(training_data)} dialogues for training") Prepared 9 dialogues for trainingtraining_data[0][:3] [{'role': 'assistant', 'content': "Professor Snape, I was hoping I could speak with you for a moment about something that's been concerning me lately."}, {'role': 'user', 'content': "What is it, Potter? I'm quite busy at the moment."}, {'role': 'assistant', 'content': "I apologize for the interruption, sir. I'll be brief. I've noticed some strange activity around the school grounds at night. I saw a cloaked figure lurking near the Forbidden Forest last night. 
I'm worried someone may be plotting something sinister."}]OpenAI currently requires at least 10 training examples for a fine-tuning job, though they recommend between 50-100 for most tasks. Since we only have 9 chat sessions, we can subdivide them (optionally with some overlap) so that each training example comprises a portion of a whole conversation.Facebook chat sessions (1 per person) often span multiple days and conversations,
so the long-range dependencies may not be that important to model anyhow.# Our chat is alternating, we will make each datapoint a group of 8 messages,# with 2 messages overlappingchunk_size = 8overlap = 2training_examples = [ conversation_messages[i: i + chunk_size] for conversation_messages in training_data for i in range( 0, len(conversation_messages) - chunk_size + 1, chunk_size - overlap)]len(training_examples) 1004. Fine-tune the modelIt's time to fine-tune the model. Make sure you have openai installed
and have set your OPENAI_API_KEY appropriately# %pip install -U openai --quietimport jsonfrom io import BytesIOimport timeimport openai# We will write the jsonl file in memorymy_file = BytesIO()for m in training_examples: my_file.write((json.dumps({"messages": m}) + "\n").encode('utf-8'))my_file.seek(0)training_file = openai.File.create( file=my_file, purpose='fine-tune')# OpenAI audits each training file for compliance reasons.# This may take a few minutesstatus = openai.File.retrieve(training_file.id).statusstart_time = time.time()while status != "processed": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) status = openai.File.retrieve(training_file.id).statusprint(f"File {training_file.id} ready after {time.time() - start_time:.2f} seconds.") File file-zCyNBeg4snpbBL7VkvsuhCz8 ready after 30.55 seconds.With the file ready, it's time to kick off a training job.job = openai.FineTuningJob.create( training_file=training_file.id, model="gpt-3.5-turbo",)Grab a cup of tea while your model is being prepared. This may take some time!status = openai.FineTuningJob.retrieve(job.id).statusstart_time = time.time()while status != "succeeded": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) job = openai.FineTuningJob.retrieve(job.id) status = job.status Status=[running]... 908.87sprint(job.fine_tuned_model) ft:gpt-3.5-turbo-0613:personal::7rDwkaOq5. 
Use in LangChainYou can use the resulting model ID directly in the ChatOpenAI model class.from langchain.chat_models import ChatOpenAImodel = ChatOpenAI( model=job.fine_tuned_model, temperature=1,)from langchain.prompts import ChatPromptTemplatefrom langchain.schema.output_parser import StrOutputParserprompt = ChatPromptTemplate.from_messages( [ ("human", "{input}"), ])chain = prompt | model | StrOutputParser()for tok in chain.stream({"input": "What classes are you taking?"}): print(tok, end="", flush=True) The usual - Potions, Transfiguration, Defense Against the Dark Arts. What about you?PreviousDiscordNextFacebook Messenger1. Download Data2. Create Chat Loader3. Prepare for fine-tuning4. Fine-tune the model5. Use in LangChain
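The overlapping-window chunking used in step 3 of the Facebook Messenger walkthrough above can be isolated into a small helper. This is an illustrative sketch of the list comprehension in the docs, with the chunk size and overlap as parameters:

```python
from typing import List, Sequence


def chunk_with_overlap(messages: Sequence, chunk_size: int = 8, overlap: int = 2) -> List[list]:
    """Split a conversation into windows of `chunk_size` messages,
    where adjacent windows share `overlap` messages."""
    step = chunk_size - overlap  # how far each window advances
    return [
        list(messages[i : i + chunk_size])
        for i in range(0, len(messages) - chunk_size + 1, step)
    ]


# A 14-message conversation yields two windows that share 2 messages.
chunks = chunk_with_overlap(list(range(14)))
```

Windows shorter than `chunk_size` at the tail are dropped, matching the docs' comprehension; loosen the `range` bound if you want a final partial window.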
804 | https://python.langchain.com/docs/integrations/chat_loaders/gmail | ComponentsChat loadersGMailGMailThis loader goes over how to load data from GMail. There are many ways you could want to load data from GMail. This loader is currently fairly opinionated in how to do so. It first looks for all messages that you have sent. It then looks for messages where you are responding to a previous email. It then fetches that previous email, and creates a training example of that email, followed by your email.Note that there are clear limitations here. For example, all examples created are only looking at the previous email for context.To use:Set up a Google Developer Account: Go to the Google Developer Console, create a project, and enable the Gmail API for that project. This will give you a credentials.json file that you'll need later.Install the Google Client Library: Run the following command to install the Google Client Library:pip install --upgrade google-auth google-auth-oauthlib google-auth-httplib2 google-api-python-clientimport os.pathimport base64import jsonimport reimport timefrom google.auth.transport.requests import Requestfrom google.oauth2.credentials import Credentialsfrom google_auth_oauthlib.flow import InstalledAppFlowfrom googleapiclient.discovery import buildimport loggingimport requestsSCOPES = ['https://www.googleapis.com/auth/gmail.readonly']creds = None# The file token.json stores the user's access and refresh tokens, and is# created automatically when the authorization flow completes for the first# time.if os.path.exists('email_token.json'): creds = Credentials.from_authorized_user_file('email_token.json', SCOPES)# If there are no (valid) credentials available, let the user log in.if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( # your creds file here. 
Please create json file as here https://cloud.google.com/docs/authentication/getting-started 'creds.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('email_token.json', 'w') as token: token.write(creds.to_json())from langchain.chat_loaders.gmail import GMailLoaderloader = GMailLoader(creds=creds, n=3)data = loader.load()# Sometimes there can be errors which we silently ignorelen(data) 2from langchain.chat_loaders.utils import ( map_ai_messages,)# This makes messages sent by hchase@langchain.com the AI Messages# This means you will train an LLM to predict as if it's responding as hchasetraining_data = list(map_ai_messages(data, sender="Harrison Chase <hchase@langchain.com>"))PreviousFacebook MessengerNextiMessage |
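The pairing behavior the GMail loader describes above — the previous email as context, your reply as the target — corresponds to a simple chat-format record. The helper below is a hypothetical sketch of that shape, not the loader's actual internals:

```python
from typing import Dict, List


def email_pair_to_example(previous_email: str, your_reply: str) -> List[Dict[str, str]]:
    """Turn an (incoming email, your reply) pair into OpenAI chat-format turns:
    the incoming email becomes the 'user' turn, your reply the 'assistant' turn."""
    return [
        {"role": "user", "content": previous_email},
        {"role": "assistant", "content": your_reply},
    ]


example = email_pair_to_example(
    "Are you free to chat about the launch on Friday?",
    "Friday works - how about 2pm?",
)
```

This also makes the stated limitation concrete: each example carries exactly one email of context, so a model fine-tuned on it never sees earlier turns of the thread.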
805 | https://python.langchain.com/docs/integrations/chat_loaders/imessage | ComponentsChat loadersiMessageOn this pageiMessageThis notebook shows how to use the iMessage chat loader. This class helps convert iMessage conversations to LangChain chat messages.On MacOS, iMessage stores conversations in a sqlite database at ~/Library/Messages/chat.db (at least for macOS Ventura 13.4).
The IMessageChatLoader loads from this database file. Create the IMessageChatLoader with the file path pointed to the chat.db database you'd like to process.Call loader.load() (or loader.lazy_load()) to perform the conversion. Optionally use merge_chat_runs to combine messages from the same sender in sequence, and/or map_ai_messages to convert messages from the specified sender to the "AIMessage" class.1. Access Chat DBIt's likely that your terminal is denied access to ~/Library/Messages. To use this class, you can copy the DB to an accessible directory (e.g., Documents) and load from there. Alternatively (and not recommended), you can grant full disk access for your terminal emulator in System Settings > Security and Privacy > Full Disk Access.We have created an example database you can use at this linked drive file.# This uses some example dataimport requestsdef download_drive_file(url: str, output_path: str = 'chat.db') -> None: file_id = url.split('/')[-2] download_url = f'https://drive.google.com/uc?export=download&id={file_id}' response = requests.get(download_url) if response.status_code != 200: print('Failed to download the file.') return with open(output_path, 'wb') as file: file.write(response.content) print(f'File {output_path} downloaded.')url = 'https://drive.google.com/file/d/1NebNKqTA2NXApCmeH6mu0unJD2tANZzo/view?usp=sharing'# Download file to chat.dbdownload_drive_file(url) File chat.db downloaded.2. Create the Chat LoaderProvide the loader with the file path to the chat.db file. You can optionally specify the user id that maps to an ai message as well as configure whether to merge message runs.from langchain.chat_loaders.imessage import IMessageChatLoaderloader = IMessageChatLoader( path="./chat.db",)3. Load messagesThe load() (or lazy_load) methods return a list of "ChatSessions" that currently just contain a list of messages per loaded conversation. All messages are mapped to "HumanMessage" objects to start. 
You can optionally choose to merge message "runs" (consecutive messages from the same sender) and select a sender to represent the "AI". The fine-tuned LLM will learn to generate these AI messages.from typing import Listfrom langchain.chat_loaders.base import ChatSessionfrom langchain.chat_loaders.utils import ( map_ai_messages, merge_chat_runs,)raw_messages = loader.lazy_load()# Merge consecutive messages from the same sender into a single messagemerged_messages = merge_chat_runs(raw_messages)# Convert messages from "Tortoise" to AI messages. Do you have a guess who these conversations are between?chat_sessions: List[ChatSession] = list(map_ai_messages(merged_messages, sender="Tortoise"))# Now all of the Tortoise's messages will take the AI message class# which maps to the 'assistant' role in OpenAI's training formatchat_sessions[0]['messages'][:3] [AIMessage(content="Slow and steady, that's my motto.", additional_kwargs={'message_time': 1693182723, 'sender': 'Tortoise'}, example=False), HumanMessage(content='Speed is key!', additional_kwargs={'message_time': 1693182753, 'sender': 'Hare'}, example=False), AIMessage(content='A balanced approach is more reliable.', additional_kwargs={'message_time': 1693182783, 'sender': 'Tortoise'}, example=False)]3. Prepare for fine-tuningNow it's time to convert our chat messages to OpenAI dictionaries. We can use the convert_messages_for_finetuning utility to do so.from langchain.adapters.openai import convert_messages_for_finetuningtraining_data = convert_messages_for_finetuning(chat_sessions)print(f"Prepared {len(training_data)} dialogues for training") Prepared 10 dialogues for training4. Fine-tune the modelIt's time to fine-tune the model. Make sure you have openai installed
and have set your OPENAI_API_KEY appropriately# %pip install -U openai --quietimport jsonfrom io import BytesIOimport timeimport openai# We will write the jsonl file in memorymy_file = BytesIO()for m in training_data: my_file.write((json.dumps({"messages": m}) + "\n").encode('utf-8'))my_file.seek(0)training_file = openai.File.create( file=my_file, purpose='fine-tune')# OpenAI audits each training file for compliance reasons.# This may take a few minutesstatus = openai.File.retrieve(training_file.id).statusstart_time = time.time()while status != "processed": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) status = openai.File.retrieve(training_file.id).statusprint(f"File {training_file.id} ready after {time.time() - start_time:.2f} seconds.") File file-zHIgf4r8LltZG3RFpkGd4Sjf ready after 10.19 seconds.With the file ready, it's time to kick off a training job.job = openai.FineTuningJob.create( training_file=training_file.id, model="gpt-3.5-turbo",)Grab a cup of tea while your model is being prepared. This may take some time!status = openai.FineTuningJob.retrieve(job.id).statusstart_time = time.time()while status != "succeeded": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) job = openai.FineTuningJob.retrieve(job.id) status = job.status Status=[running]... 524.95sprint(job.fine_tuned_model) ft:gpt-3.5-turbo-0613:personal::7sKoRdlz5. 
Use in LangChainYou can use the resulting model ID directly in the ChatOpenAI model class.from langchain.chat_models import ChatOpenAImodel = ChatOpenAI( model=job.fine_tuned_model, temperature=1,)from langchain.prompts import ChatPromptTemplatefrom langchain.schema.output_parser import StrOutputParserprompt = ChatPromptTemplate.from_messages( [ ("system", "You are speaking to hare."), ("human", "{input}"), ])chain = prompt | model | StrOutputParser()for tok in chain.stream({"input": "What's the golden thread?"}): print(tok, end="", flush=True) A symbol of interconnectedness.PreviousGMailNextFine-Tuning on LangSmith Chat Datasets1. Access Chat DB2. Create the Chat Loader3. Load messages3. Prepare for fine-tuning4. Fine-tune the model5. Use in LangChain
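If you want to see what the iMessage loader is reading, you can inspect chat.db directly with Python's standard sqlite3 module. The table and column names below (message.text, message.is_from_me, handle.id, the handle_id join) are assumptions based on the macOS chat.db layout and may differ across macOS versions; adjust the query to your schema:

```python
import sqlite3
from typing import List, Tuple


def peek_messages(db_path: str, limit: int = 5) -> List[Tuple]:
    """Return the first few (sender, text, is_from_me) rows from an
    iMessage-style chat.db. Schema names are assumed, not guaranteed."""
    con = sqlite3.connect(db_path)
    try:
        return con.execute(
            "SELECT handle.id, message.text, message.is_from_me "
            "FROM message LEFT JOIN handle ON message.handle_id = handle.ROWID "
            "ORDER BY message.date LIMIT ?",
            (limit,),
        ).fetchall()
    finally:
        con.close()
```

Remember the access caveat from step 1: run this against a copy of the database in an accessible directory, not the original under ~/Library/Messages.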
806 | https://python.langchain.com/docs/integrations/chat_loaders/langsmith_dataset | ComponentsChat loadersFine-Tuning on LangSmith Chat DatasetsOn this pageFine-Tuning on LangSmith Chat DatasetsThis notebook demonstrates an easy way to load a LangSmith chat dataset and fine-tune a model on that data.
The process is simple and comprises 3 steps.Create the chat dataset.Use the LangSmithDatasetChatLoader to load examples.Fine-tune your model.Then you can use the fine-tuned model in your LangChain app.Before diving in, let's install our prerequisites.PrerequisitesEnsure you've installed langchain >= 0.0.311 and have configured your environment with your LangSmith API key.%pip install -U langchain openaiimport osimport uuiduid = uuid.uuid4().hex[:6]os.environ["LANGCHAIN_TRACING_V2"] = "true"os.environ["LANGCHAIN_API_KEY"] = "YOUR API KEY"1. Select datasetThis notebook fine-tunes a model directly on a chat dataset rather than on individually selected runs. You will often curate these datasets from traced runs. You can learn more about LangSmith datasets in the docs.For the sake of this tutorial, we will upload an existing dataset here that you can use.from langsmith.client import Clientclient = Client()import requestsurl = "https://raw.githubusercontent.com/langchain-ai/langchain/master/docs/docs/integrations/chat_loaders/example_data/langsmith_chat_dataset.json"response = requests.get(url)response.raise_for_status()data = response.json()dataset_name = f"Extraction Fine-tuning Dataset {uid}"ds = client.create_dataset(dataset_name=dataset_name, data_type="chat")_ = client.create_examples( inputs = [e['inputs'] for e in data], outputs = [e['outputs'] for e in data], dataset_id=ds.id,)2. Prepare DataNow we can create an instance of LangSmithDatasetChatLoader and load the chat sessions using its lazy_load() method.from langchain.chat_loaders.langsmith import LangSmithDatasetChatLoaderloader = LangSmithDatasetChatLoader(dataset_name=dataset_name)chat_sessions = loader.lazy_load()With the chat sessions loaded, convert them into a format suitable for fine-tuning.from langchain.adapters.openai import convert_messages_for_finetuningtraining_data = convert_messages_for_finetuning(chat_sessions)3. 
Fine-tune the ModelNow, initiate the fine-tuning process using the OpenAI library.import openaiimport timeimport jsonfrom io import BytesIOmy_file = BytesIO()for dialog in training_data: my_file.write((json.dumps({"messages": dialog}) + "\n").encode('utf-8'))my_file.seek(0)training_file = openai.File.create( file=my_file, purpose='fine-tune')job = openai.FineTuningJob.create( training_file=training_file.id, model="gpt-3.5-turbo",)# Wait for the fine-tuning to complete (this may take some time)status = openai.FineTuningJob.retrieve(job.id).statusstart_time = time.time()while status != "succeeded": print(f"Status=[{status}]... {time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) status = openai.FineTuningJob.retrieve(job.id).status# Now your model is fine-tuned! Status=[running]... 302.42s. 143.85s4. Use in LangChainAfter fine-tuning, use the resulting model ID with the ChatOpenAI model class in your LangChain app.# Get the fine-tuned model IDjob = openai.FineTuningJob.retrieve(job.id)model_id = job.fine_tuned_model# Use the fine-tuned model in LangChainmodel = ChatOpenAI( model=model_id, temperature=1,)model.invoke("There were three ravens sat on a tree.")Now you have successfully fine-tuned a model using data from LangSmith LLM runs!PreviousiMessageNextFine-Tuning on LangSmith LLM RunsPrerequisites1. Select dataset2. Prepare Data3. Fine-tune the Model4. Use in LangChain |
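The walkthrough above writes each dialogue as one JSONL line of the form {"messages": [{"role": ..., "content": ...}, ...]}. Before uploading, you can sanity-check lines against that shape. This is an illustrative checker for plain text turns only (it would reject records with function-call turns, which the chat fine-tuning format also permits):

```python
import json


def is_valid_finetune_line(line: str) -> bool:
    """Check one JSONL line has the {"messages": [...]} chat format
    with recognized roles and string content."""
    try:
        record = json.loads(line)
    except json.JSONDecodeError:
        return False
    messages = record.get("messages")
    if not isinstance(messages, list) or not messages:
        return False
    return all(
        isinstance(m, dict)
        and m.get("role") in {"system", "user", "assistant"}
        and isinstance(m.get("content"), str)
        for m in messages
    )


good = json.dumps({"messages": [{"role": "user", "content": "hi"}]})
bad = json.dumps({"messages": [{"role": "narrator", "content": "hi"}]})
```

Catching a malformed line locally is much cheaper than waiting for the upload audit loop above to fail.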
807 | https://python.langchain.com/docs/integrations/chat_loaders/langsmith_llm_runs | ComponentsChat loadersFine-Tuning on LangSmith LLM RunsOn this pageFine-Tuning on LangSmith LLM RunsThis notebook demonstrates how to directly load data from LangSmith's LLM runs and fine-tune a model on that data.
The process is simple and comprises 3 steps.Select the LLM runs to train on.Use the LangSmithRunChatLoader to load runs as chat sessions.Fine-tune your model.Then you can use the fine-tuned model in your LangChain app.Before diving in, let's install our prerequisites.PrerequisitesEnsure you've installed langchain >= 0.0.311 and have configured your environment with your LangSmith API key.%pip install -U langchain openaiimport osimport uuiduid = uuid.uuid4().hex[:6]project_name = f"Run Fine-tuning Walkthrough {uid}"os.environ["LANGCHAIN_TRACING_V2"] = "true"os.environ["LANGCHAIN_API_KEY"] = "YOUR API KEY"os.environ["LANGCHAIN_PROJECT"] = project_name1. Select RunsThe first step is selecting which runs to fine-tune on. A common case would be to select LLM runs within
traces that have received positive user feedback. You can find examples of this in the LangSmith Cookbook and in the docs.For the sake of this tutorial, we will generate some runs for you to use here. Let's try fine-tuning a
simple function-calling chain.from langchain.pydantic_v1 import BaseModel, Fieldfrom enum import Enumclass Operation(Enum): add = "+" subtract = "-" multiply = "*" divide = "/"class Calculator(BaseModel): """A calculator function""" num1: float num2: float operation: Operation = Field(..., description="+,-,*,/") def calculate(self): if self.operation == Operation.add: return self.num1 + self.num2 elif self.operation == Operation.subtract: return self.num1 - self.num2 elif self.operation == Operation.multiply: return self.num1 * self.num2 elif self.operation == Operation.divide: if self.num2 != 0: return self.num1 / self.num2 else: return "Cannot divide by zero"from langchain.utils.openai_functions import convert_pydantic_to_openai_functionfrom langchain.pydantic_v1 import BaseModelfrom pprint import pprintopenai_function_def = convert_pydantic_to_openai_function(Calculator)pprint(openai_function_def) {'description': 'A calculator function', 'name': 'Calculator', 'parameters': {'description': 'A calculator function', 'properties': {'num1': {'title': 'Num1', 'type': 'number'}, 'num2': {'title': 'Num2', 'type': 'number'}, 'operation': {'allOf': [{'description': 'An ' 'enumeration.', 'enum': ['+', '-', '*', '/'], 'title': 'Operation'}], 'description': '+,-,*,/'}}, 'required': ['num1', 'num2', 'operation'], 'title': 'Calculator', 'type': 'object'}}from langchain.prompts import ChatPromptTemplatefrom langchain.chat_models import ChatOpenAIfrom langchain.output_parsers.openai_functions import PydanticOutputFunctionsParserprompt = ChatPromptTemplate.from_messages( [ ("system", "You are an accounting assistant."), ("user", "{input}"), ])chain = ( prompt | ChatOpenAI().bind(functions=[openai_function_def]) | PydanticOutputFunctionsParser(pydantic_schema=Calculator) | (lambda x: x.calculate()))math_questions = [ "What's 45/9?", "What's 81/9?", "What's 72/8?", "What's 56/7?", "What's 36/6?", "What's 64/8?", "What's 12*6?", "What's 8*8?", "What's 10*10?", "What's 11*11?", 
"What's 13*13?", "What's 45+30?", "What's 72+28?", "What's 56+44?", "What's 63+37?", "What's 70-35?", "What's 60-30?", "What's 50-25?", "What's 40-20?", "What's 30-15?"]results = chain.batch([{"input": q} for q in math_questions], return_exceptions=True) Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised ServiceUnavailableError: The server is overloaded or not ready yet..Load runs that did not errorNow we can select the successful runs to fine-tune on.from langsmith.client import Clientclient = Client()successful_traces = { run.trace_id for run in client.list_runs( project_name=project_name, execution_order=1, error=False, )} llm_runs = [ run for run in client.list_runs( project_name=project_name, run_type="llm", ) if run.trace_id in successful_traces]2. Prepare dataNow we can create an instance of LangSmithRunChatLoader and load the chat sessions using its lazy_load() method.from langchain.chat_loaders.langsmith import LangSmithRunChatLoaderloader = LangSmithRunChatLoader(runs=llm_runs)chat_sessions = loader.lazy_load()With the chat sessions loaded, convert them into a format suitable for fine-tuning.from langchain.adapters.openai import convert_messages_for_finetuningtraining_data = convert_messages_for_finetuning(chat_sessions)3. Fine-tune the modelNow, initiate the fine-tuning process using the OpenAI library.import openaiimport timeimport jsonfrom io import BytesIOmy_file = BytesIO()for dialog in training_data: my_file.write((json.dumps({"messages": dialog}) + "\n").encode('utf-8'))my_file.seek(0)training_file = openai.File.create( file=my_file, purpose='fine-tune')job = openai.FineTuningJob.create( training_file=training_file.id, model="gpt-3.5-turbo",)# Wait for the fine-tuning to complete (this may take some time)status = openai.FineTuningJob.retrieve(job.id).statusstart_time = time.time()while status != "succeeded": print(f"Status=[{status}]... 
{time.time() - start_time:.2f}s", end="\r", flush=True) time.sleep(5) status = openai.FineTuningJob.retrieve(job.id).status# Now your model is fine-tuned! Status=[running]... 346.26s. 31.70s4. Use in LangChainAfter fine-tuning, use the resulting model ID with the ChatOpenAI model class in your LangChain app.# Get the fine-tuned model IDjob = openai.FineTuningJob.retrieve(job.id)model_id = job.fine_tuned_model# Use the fine-tuned model in LangChainmodel = ChatOpenAI( model=model_id, temperature=1,)(prompt | model).invoke({"input": "What's 56/7?"}) AIMessage(content='{\n "num1": 56,\n "num2": 7,\n "operation": "/"\n}')Now you have successfully fine-tuned a model using data from LangSmith LLM runs!PreviousFine-Tuning on LangSmith Chat DatasetsNextSlackPrerequisites1. Select Runs2. Prepare data3. Fine-tune the model4. Use in LangChain |
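The Calculator.calculate method in the walkthrough above is a chain of if/elif branches over the operation symbol. An equivalent operator-table version (an illustrative refactor using the standard operator module, not part of the docs' code) makes the dispatch data-driven:

```python
import operator
from typing import Union

# Map each operation symbol to its stdlib operator function.
_OPS = {
    "+": operator.add,
    "-": operator.sub,
    "*": operator.mul,
    "/": operator.truediv,
}


def calculate(num1: float, num2: float, op: str) -> Union[float, str]:
    """Dispatch on the operation symbol, guarding division by zero
    the same way Calculator.calculate does."""
    if op == "/" and num2 == 0:
        return "Cannot divide by zero"
    return _OPS[op](num1, num2)


result = calculate(56, 7, "/")  # 8.0
```

A table like this is easier to extend (add a symbol, add one entry) than a growing if/elif ladder, though the pydantic Enum version in the docs gives you schema validation for free.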
808 | https://python.langchain.com/docs/integrations/chat_loaders/slack | ComponentsChat loadersSlackOn this pageSlackThis notebook shows how to use the Slack chat loader. This class helps map exported Slack conversations to LangChain chat messages.The process has three steps:Export the desired conversation thread by following the instructions here.Create the SlackChatLoader with the file path pointed to the json file or directory of JSON filesCall loader.load() (or loader.lazy_load()) to perform the conversion. Optionally use merge_chat_runs to combine messages from the same sender in sequence, and/or map_ai_messages to convert messages from the specified sender to the "AIMessage" class.1. Create message dumpCurrently (2023/08/23) this loader best supports a zip directory of files in the format generated by exporting a direct message conversation from Slack. Follow up-to-date instructions from Slack on how to do so.We have an example in the LangChain repo.import requestspermalink = "https://raw.githubusercontent.com/langchain-ai/langchain/342087bdfa3ac31d622385d0f2d09cf5e06c8db3/libs/langchain/tests/integration_tests/examples/slack_export.zip"response = requests.get(permalink)with open("slack_dump.zip", "wb") as f: f.write(response.content)2. Create the Chat LoaderProvide the loader with the file path to the zip directory. You can optionally specify the user id that maps to an ai message as well as configure whether to merge message runs.from langchain.chat_loaders.slack import SlackChatLoaderloader = SlackChatLoader( path="slack_dump.zip",)3. 
Load messagesThe load() (or lazy_load) methods return a list of "ChatSessions" that currently just contain a list of messages per loaded conversation.from typing import Listfrom langchain.chat_loaders.base import ChatSessionfrom langchain.chat_loaders.utils import ( map_ai_messages, merge_chat_runs,)raw_messages = loader.lazy_load()# Merge consecutive messages from the same sender into a single messagemerged_messages = merge_chat_runs(raw_messages)# Convert messages from "U0500003428" to AI messagesmessages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="U0500003428"))Next StepsYou can then use these messages how you see fit, such as finetuning a model, few-shot example selection, or directly make predictions for the next message. from langchain.chat_models import ChatOpenAIllm = ChatOpenAI()for chunk in llm.stream(messages[1]['messages']): print(chunk.content, end="", flush=True) Hi, I hope you're doing well. I wanted to reach out and ask if you'd be available to meet up for coffee sometime next week. I'd love to catch up and hear about what's been going on in your life. Let me know if you're interested and we can find a time that works for both of us. Looking forward to hearing from you! Best, [Your Name]PreviousFine-Tuning on LangSmith LLM RunsNextTelegram1. Creat message dump2. Create the Chat Loader3. Load messagesNext Steps |
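A Slack export arrives as a zip of JSON files; before handing it to the loader you can peek at what is inside with the standard library. This is a small illustrative helper, not part of the loader:

```python
import zipfile
from typing import List


def list_export_json(zip_path: str) -> List[str]:
    """List the JSON entries inside a Slack export zip, sorted by name."""
    with zipfile.ZipFile(zip_path) as zf:
        return sorted(n for n in zf.namelist() if n.endswith(".json"))
```

Checking the entry names first makes it obvious whether you exported the conversation you intended before running the full conversion.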
809 | https://python.langchain.com/docs/integrations/chat_loaders/telegram | ComponentsChat loadersTelegramOn this pageTelegramThis notebook shows how to use the Telegram chat loader. This class helps map exported Telegram conversations to LangChain chat messages.The process has three steps:Export the chat history as a machine-readable JSON file from the Telegram Desktop appCreate the TelegramChatLoader with the file path pointed to the JSON file or directory of JSON filesCall loader.load() (or loader.lazy_load()) to perform the conversion. Optionally use merge_chat_runs to combine messages from the same sender in sequence, and/or map_ai_messages to convert messages from the specified sender to the "AIMessage" class.1. Create message dumpCurrently (2023/08/23) this loader best supports JSON files in the format generated by exporting your chat history from the Telegram Desktop app.Important: There are 'lite' versions of Telegram, such as "Telegram for MacOS", that lack the export functionality. Please make sure you use the correct app to export the file.To make the export:Download and open Telegram DesktopSelect a conversationNavigate to the conversation settings (currently the three dots in the top right corner)Click "Export Chat History"Unselect photos and other media. 
Select "Machine-readable JSON" format to export.An example is below: telegram_conversation.json{ "name": "Jiminy", "type": "personal_chat", "id": 5965280513, "messages": [ { "id": 1, "type": "message", "date": "2023-08-23T13:11:23", "date_unixtime": "1692821483", "from": "Jiminy Cricket", "from_id": "user123450513", "text": "You better trust your conscience", "text_entities": [ { "type": "plain", "text": "You better trust your conscience" } ] }, { "id": 2, "type": "message", "date": "2023-08-23T13:13:20", "date_unixtime": "1692821600", "from": "Batman & Robin", "from_id": "user6565661032", "text": "What did you just say?", "text_entities": [ { "type": "plain", "text": "What did you just say?" } ] } ]}2. Create the Chat LoaderAll that's required is the file path. You can optionally specify the user name that maps to an AI message, as well as configure whether to merge message runs.from langchain.chat_loaders.telegram import TelegramChatLoaderloader = TelegramChatLoader( path="./telegram_conversation.json", )3.
Load messagesThe load() (or lazy_load) methods return a list of "ChatSessions" that currently just contain a list of messages per loaded conversation.from typing import Listfrom langchain.chat_loaders.base import ChatSessionfrom langchain.chat_loaders.utils import ( map_ai_messages, merge_chat_runs,)raw_messages = loader.lazy_load()# Merge consecutive messages from the same sender into a single messagemerged_messages = merge_chat_runs(raw_messages)# Convert messages from "Jiminy Cricket" to AI messagesmessages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="Jiminy Cricket"))Next StepsYou can then use these messages how you see fit, such as fine-tuning a model, few-shot example selection, or directly making predictions for the next message. from langchain.chat_models import ChatOpenAIllm = ChatOpenAI()for chunk in llm.stream(messages[0]['messages']): print(chunk.content, end="", flush=True) I said, "You better trust your conscience."PreviousSlackNextTwitter (via Apify)1. Create message dump2. Create the Chat Loader3. Load messagesNext Steps
810 | https://python.langchain.com/docs/integrations/chat_loaders/twitter | ComponentsChat loadersTwitter (via Apify)Twitter (via Apify)This notebook shows how to load chat messages from Twitter to fine-tune on. We do this by utilizing Apify. First, use Apify to export tweets. An example:import jsonfrom langchain.schema import AIMessagefrom langchain.adapters.openai import convert_message_to_dictwith open('example_data/dataset_twitter-scraper_2023-08-23_22-13-19-740.json') as f: data = json.load(f)# Filter out tweets that reference other tweets, because it's a bit weirdtweets = [d["full_text"] for d in data if "t.co" not in d['full_text']]# Create them as AI messagesmessages = [AIMessage(content=t) for t in tweets]# Add in a system message at the start# TODO: we could try to extract the subject from the tweets, and put that in the system message.system_message = {"role": "system", "content": "write a tweet"}data = [[system_message, convert_message_to_dict(m)] for m in messages]PreviousTelegramNextWeChat
811 | https://python.langchain.com/docs/integrations/chat_loaders/wechat | ComponentsChat loadersWeChatOn this pageWeChatThere is not yet a straightforward way to export personal WeChat messages. However, if you need no more than a few hundred messages for model fine-tuning or few-shot examples, this notebook shows how to create your own chat loader that converts copy-pasted WeChat messages into a list of LangChain messages.Highly inspired by https://python.langchain.com/docs/integrations/chat_loaders/discordThe process has five steps:Open your chat in the WeChat desktop app. Select the messages you need by mouse-dragging or right-clicking. Due to restrictions, you can select up to 100 messages at a time. CMD/Ctrl + C to copy.Create the chat .txt file by pasting the selected messages into a file on your local computer.Copy the chat loader definition from below to a local file.Initialize the WeChatChatLoader with the file path pointed to the text file.Call loader.load() (or loader.lazy_load()) to perform the conversion.1. Create message dumpThis loader only supports .txt files in the format generated by copying messages in the app to your clipboard and pasting them into a file. Below is an example.wechat_chats.txt女朋友 2023/09/16 2:51 PM天气有点凉男朋友 2023/09/16 2:51 PM珍簟凉风著,瑶琴寄恨生。嵇君懒书札,底物慰秋情。女朋友 2023/09/16 3:06 PM忙什么呢男朋友 2023/09/16 3:06 PM今天只干成了一件像样的事那就是想你女朋友 2023/09/16 3:06 PM[动画表情]2. Define chat loaderLangChain does not yet ship a WeChat chat loader, so we define our own below.import loggingimport refrom typing import Iterator, Listfrom langchain.schema import HumanMessage, BaseMessagefrom langchain.chat_loaders import base as chat_loaderslogger = logging.getLogger()class WeChatChatLoader(chat_loaders.BaseChatLoader): def __init__(self, path: str): """ Initialize the WeChat chat loader. Args: path: Path to the exported WeChat chat text file. """ self.path = path self._message_line_regex = re.compile( r"(?P<sender>.+?) 
(?P<timestamp>\d{4}/\d{2}/\d{2} \d{1,2}:\d{2} (?:AM|PM))", # noqa # flags=re.DOTALL, ) def _append_message_to_results( self, results: List, current_sender: str, current_timestamp: str, current_content: List[str], ): content = "\n".join(current_content).strip() # skip non-text messages like stickers, images, etc. if not re.match(r"\[.*\]", content): results.append( HumanMessage( content=content, additional_kwargs={ "sender": current_sender, "events": [{"message_time": current_timestamp}], }, ) ) return results def _load_single_chat_session_from_txt( self, file_path: str ) -> chat_loaders.ChatSession: """ Load a single chat session from a text file. Args: file_path: Path to the text file containing the chat messages. Returns: A `ChatSession` object containing the loaded chat messages. """ with open(file_path, "r", encoding="utf-8") as file: lines = file.readlines() results: List[BaseMessage] = [] current_sender = None current_timestamp = None current_content = [] for line in lines: if re.match(self._message_line_regex, line): if current_sender and current_content: results = self._append_message_to_results( results, current_sender, current_timestamp, current_content) current_sender, current_timestamp = re.match(self._message_line_regex, line).groups() current_content = [] else: current_content.append(line.strip()) if current_sender and current_content: results = self._append_message_to_results( results, current_sender, current_timestamp, current_content) return chat_loaders.ChatSession(messages=results) def lazy_load(self) -> Iterator[chat_loaders.ChatSession]: """ Lazy load the messages from the chat file and yield them in the required format. Yields: A `ChatSession` object containing the loaded chat messages. """ yield self._load_single_chat_session_from_txt(self.path)2. Create loaderWe will point to the file we just wrote to disk.loader = WeChatChatLoader( path="./wechat_chats.txt",)3. 
Load MessagesAssuming the format is correct, the loader will convert the chats to LangChain messages.from typing import Listfrom langchain.chat_loaders.base import ChatSessionfrom langchain.chat_loaders.utils import ( map_ai_messages, merge_chat_runs,)raw_messages = loader.lazy_load()# Merge consecutive messages from the same sender into a single messagemerged_messages = merge_chat_runs(raw_messages)# Convert messages from "男朋友" to AI messagesmessages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="男朋友"))messages [{'messages': [HumanMessage(content='天气有点凉', additional_kwargs={'sender': '女朋友', 'events': [{'message_time': '2023/09/16 2:51 PM'}]}, example=False), AIMessage(content='珍簟凉风著,瑶琴寄恨生。嵇君懒书札,底物慰秋情。', additional_kwargs={'sender': '男朋友', 'events': [{'message_time': '2023/09/16 2:51 PM'}]}, example=False), HumanMessage(content='忙什么呢', additional_kwargs={'sender': '女朋友', 'events': [{'message_time': '2023/09/16 3:06 PM'}]}, example=False), AIMessage(content='今天只干成了一件像样的事\n那就是想你', additional_kwargs={'sender': '男朋友', 'events': [{'message_time': '2023/09/16 3:06 PM'}]}, example=False)]}]Next StepsYou can then use these messages how you see fit, such as fine-tuning a model, few-shot example selection, or directly making predictions for the next message. from langchain.chat_models import ChatOpenAIllm = ChatOpenAI()for chunk in llm.stream(messages[0]['messages']): print(chunk.content, end="", flush=True)PreviousTwitter (via Apify)NextWhatsApp1. Create message dump2. Define chat loader2. Create loader3. Load MessagesNext Steps
812 | https://python.langchain.com/docs/integrations/chat_loaders/whatsapp | ComponentsChat loadersWhatsAppOn this pageWhatsAppThis notebook shows how to use the WhatsApp chat loader. This class helps map exported WhatsApp conversations to LangChain chat messages.The process has three steps:Export the chat conversation(s) to your computerCreate the WhatsAppChatLoader with the file path pointed to the exported zip file, unzipped directory, or .txt fileCall loader.load() (or loader.lazy_load()) to perform the conversion.1. Create message dumpTo export your WhatsApp conversation(s), complete the following steps:Open the target conversationClick the three dots in the top right corner and select "More".Then select "Export chat" and choose "Without media".An example of the data format for each conversation is below: whatsapp_chat.txt[8/15/23, 9:12:33 AM] Dr. Feather: Messages and calls are end-to-end encrypted. No one outside of this chat, not even WhatsApp, can read or listen to them.[8/15/23, 9:12:43 AM] Dr. Feather: I spotted a rare Hyacinth Macaw yesterday in the Amazon Rainforest. Such a magnificent creature![8/15/23, 9:12:48 AM] Dr. Feather: image omitted[8/15/23, 9:13:15 AM] Jungle Jane: That's stunning! Were you able to observe its behavior?[8/15/23, 9:13:23 AM] Dr. Feather: image omitted[8/15/23, 9:14:02 AM] Dr. Feather: Yes, it seemed quite social with other macaws. They're known for their playful nature.[8/15/23, 9:14:15 AM] Jungle Jane: How's the research going on parrot communication?[8/15/23, 9:14:30 AM] Dr. Feather: image omitted[8/15/23, 9:14:50 AM] Dr. Feather: It's progressing well. We're learning so much about how they use sound and color to communicate.[8/15/23, 9:15:10 AM] Jungle Jane: That's fascinating! Can't wait to read your paper on it.[8/15/23, 9:15:20 AM] Dr. Feather: Thank you! I'll send you a draft soon.[8/15/23, 9:25:16 PM] Jungle Jane: Looking forward to it! Keep up the great work.2. 
Create the Chat LoaderThe WhatsAppChatLoader accepts the resulting zip file, unzipped directory, or the path to any of the chat .txt files therein.Provide that as well as the user name you want to take on the role of "AI" when finetuning.from langchain.chat_loaders.whatsapp import WhatsAppChatLoaderloader = WhatsAppChatLoader( path="./whatsapp_chat.txt", )3. Load messagesThe load() (or lazy_load) methods return a list of "ChatSessions" that currently store the list of messages per loaded conversation.from typing import Listfrom langchain.chat_loaders.base import ChatSessionfrom langchain.chat_loaders.utils import ( map_ai_messages, merge_chat_runs,)raw_messages = loader.lazy_load()# Merge consecutive messages from the same sender into a single messagemerged_messages = merge_chat_runs(raw_messages)# Convert messages from "Dr. Feather" to AI messagesmessages: List[ChatSession] = list(map_ai_messages(merged_messages, sender="Dr. Feather")) [{'messages': [AIMessage(content='I spotted a rare Hyacinth Macaw yesterday in the Amazon Rainforest. Such a magnificent creature!', additional_kwargs={'sender': 'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:12:43 AM'}]}, example=False), HumanMessage(content="That's stunning! Were you able to observe its behavior?", additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:13:15 AM'}]}, example=False), AIMessage(content="Yes, it seemed quite social with other macaws. They're known for their playful nature.", additional_kwargs={'sender': 'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:14:02 AM'}]}, example=False), HumanMessage(content="How's the research going on parrot communication?", additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:14:15 AM'}]}, example=False), AIMessage(content="It's progressing well. We're learning so much about how they use sound and color to communicate.", additional_kwargs={'sender': 'Dr. 
Feather', 'events': [{'message_time': '8/15/23, 9:14:50 AM'}]}, example=False), HumanMessage(content="That's fascinating! Can't wait to read your paper on it.", additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:15:10 AM'}]}, example=False), AIMessage(content="Thank you! I'll send you a draft soon.", additional_kwargs={'sender': 'Dr. Feather', 'events': [{'message_time': '8/15/23, 9:15:20 AM'}]}, example=False), HumanMessage(content='Looking forward to it! Keep up the great work.', additional_kwargs={'sender': 'Jungle Jane', 'events': [{'message_time': '8/15/23, 9:25:16 PM'}]}, example=False)]}]Next StepsYou can then use these messages how you see fit, such as fine-tuning a model, few-shot example selection, or directly making predictions for the next message.from langchain.chat_models import ChatOpenAIllm = ChatOpenAI()for chunk in llm.stream(messages[0]['messages']): print(chunk.content, end="", flush=True) Thank you for the encouragement! I'll do my best to continue studying and sharing fascinating insights about parrot communication.PreviousWeChat1. Create message dump2. Create the Chat Loader3. Load messagesNext Steps
812 | https://api.python.langchain.com/en/latest/api_reference.html# |
langchain API Reference¶
langchain.adapters¶
Classes¶
adapters.openai.ChatCompletion()
Functions¶
adapters.openai.aenumerate(iterable[, start])
Async version of enumerate.
adapters.openai.convert_dict_to_message(_dict)
adapters.openai.convert_message_to_dict(message)
adapters.openai.convert_messages_for_finetuning(...)
Convert messages to a list of lists of dictionaries for fine-tuning.
adapters.openai.convert_openai_messages(messages)
Convert dictionaries representing OpenAI messages to LangChain format.
langchain.agents¶
Agent is a class that uses an LLM to choose a sequence of actions to take.
In Chains, a sequence of actions is hardcoded. In Agents,
a language model is used as a reasoning engine to determine which actions
to take and in which order.
Agents select and use Tools and Toolkits for actions.
Class hierarchy:
BaseSingleActionAgent --> LLMSingleActionAgent
OpenAIFunctionsAgent
XMLAgent
Agent --> <name>Agent # Examples: ZeroShotAgent, ChatAgent
BaseMultiActionAgent --> OpenAIMultiFunctionsAgent
Main helpers:
AgentType, AgentExecutor, AgentOutputParser, AgentExecutorIterator,
AgentAction, AgentFinish
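The loop described above (the language model plans, a tool executes, the observation feeds back in) can be sketched in plain Python. Everything below is an illustrative stand-in, not the real LangChain API: run_agent, toy_plan, and the tuple/string stand-ins for AgentAction and AgentFinish are invented for this sketch.

```python
from typing import Callable, Dict, List, Tuple, Union

# Illustrative stand-ins (NOT the real LangChain types):
# an action is a (tool_name, tool_input) tuple; a finish is the final string.
AgentAction = Tuple[str, str]
AgentFinish = str

def run_agent(
    plan: Callable[[List[Tuple[AgentAction, str]]], Union[AgentAction, AgentFinish]],
    tools: Dict[str, Callable[[str], str]],
    max_iterations: int = 5,
) -> str:
    """Plan -> execute tool -> observe, repeating until the agent finishes."""
    intermediate_steps: List[Tuple[AgentAction, str]] = []
    for _ in range(max_iterations):
        decision = plan(intermediate_steps)
        if isinstance(decision, str):  # the "AgentFinish" case
            return decision
        tool_name, tool_input = decision  # the "AgentAction" case
        observation = tools[tool_name](tool_input)
        intermediate_steps.append((decision, observation))
    return "Agent stopped: max iterations reached"

# Toy "reasoning engine": look something up once, then answer.
def toy_plan(steps):
    if not steps:
        return ("search", "capital of France")
    return f"The answer is {steps[-1][1]}"

print(run_agent(toy_plan, {"search": lambda q: "Paris"}))
```

In the real library this loop lives in AgentExecutor, the planning step is an LLM call, and the max-iterations guard plays the same safety role as here.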
Classes¶
agents.agent.Agent
Agent that calls the language model and decides the action to take.
agents.agent.AgentExecutor
Agent that uses tools.
agents.agent.AgentOutputParser
Base class for parsing agent output into agent action/finish.
agents.agent.BaseMultiActionAgent
Base Multi Action Agent class.
agents.agent.BaseSingleActionAgent
Base Single Action Agent class.
agents.agent.ExceptionTool
Tool that just returns the query.
agents.agent.LLMSingleActionAgent
Base class for single action agents.
agents.agent.RunnableAgent
Agent powered by runnables.
agents.agent_iterator.AgentExecutorIterator(...)
Iterator for AgentExecutor.
agents.agent_iterator.BaseAgentExecutorIterator()
Base class for AgentExecutorIterator.
agents.agent_toolkits.ainetwork.toolkit.AINetworkToolkit
Toolkit for interacting with AINetwork Blockchain.
agents.agent_toolkits.amadeus.toolkit.AmadeusToolkit
Toolkit for interacting with Amadeus.
agents.agent_toolkits.azure_cognitive_services.AzureCognitiveServicesToolkit
Toolkit for Azure Cognitive Services.
agents.agent_toolkits.base.BaseToolkit
Base Toolkit representing a collection of related tools.
agents.agent_toolkits.file_management.toolkit.FileManagementToolkit
Toolkit for interacting with local files.
agents.agent_toolkits.github.toolkit.GitHubToolkit
GitHub Toolkit.
agents.agent_toolkits.gitlab.toolkit.GitLabToolkit
GitLab Toolkit.
agents.agent_toolkits.gmail.toolkit.GmailToolkit
Toolkit for interacting with Gmail.
agents.agent_toolkits.jira.toolkit.JiraToolkit
Jira Toolkit.
agents.agent_toolkits.json.toolkit.JsonToolkit
Toolkit for interacting with a JSON spec.
agents.agent_toolkits.multion.toolkit.MultionToolkit
Toolkit for interacting with the MultiOn browser agent.
agents.agent_toolkits.nla.tool.NLATool
Natural Language API Tool.
agents.agent_toolkits.nla.toolkit.NLAToolkit
Natural Language API Toolkit.
agents.agent_toolkits.office365.toolkit.O365Toolkit
Toolkit for interacting with Office 365.
agents.agent_toolkits.openapi.planner.RequestsDeleteToolWithParsing
A tool that sends a DELETE request and parses the response.
agents.agent_toolkits.openapi.planner.RequestsGetToolWithParsing
Requests GET tool with LLM-instructed extraction of truncated responses.
agents.agent_toolkits.openapi.planner.RequestsPatchToolWithParsing
Requests PATCH tool with LLM-instructed extraction of truncated responses.
agents.agent_toolkits.openapi.planner.RequestsPostToolWithParsing
Requests POST tool with LLM-instructed extraction of truncated responses.
agents.agent_toolkits.openapi.planner.RequestsPutToolWithParsing
Requests PUT tool with LLM-instructed extraction of truncated responses.
agents.agent_toolkits.openapi.spec.ReducedOpenAPISpec(...)
A reduced OpenAPI spec.
agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit
Toolkit for interacting with an OpenAPI API.
agents.agent_toolkits.openapi.toolkit.RequestsToolkit
Toolkit for making REST requests.
agents.agent_toolkits.playwright.toolkit.PlayWrightBrowserToolkit
Toolkit for PlayWright browser tools.
agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit
Toolkit for interacting with Power BI dataset.
agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit
Toolkit for interacting with Spark SQL.
agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit
Toolkit for interacting with SQL databases.
agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo
Information about a VectorStore.
agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit
Toolkit for routing between Vector Stores.
agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit
Toolkit for interacting with a Vector Store.
agents.agent_toolkits.zapier.toolkit.ZapierToolkit
Zapier Toolkit.
agents.agent_types.AgentType(value[, names, ...])
Enumerator with the Agent types.
agents.chat.base.ChatAgent
Chat Agent.
agents.chat.output_parser.ChatOutputParser
Output parser for the chat agent.
agents.conversational.base.ConversationalAgent
An agent that holds a conversation in addition to using tools.
agents.conversational.output_parser.ConvoOutputParser
Output parser for the conversational agent.
agents.conversational_chat.base.ConversationalChatAgent
An agent designed to hold a conversation in addition to using tools.
agents.conversational_chat.output_parser.ConvoOutputParser
Output parser for the conversational agent.
agents.mrkl.base.ChainConfig(action_name, ...)
Configuration for chain to use in MRKL system.
agents.mrkl.base.MRKLChain
[Deprecated] Chain that implements the MRKL system.
agents.mrkl.base.ZeroShotAgent
Agent for the MRKL chain.
agents.mrkl.output_parser.MRKLOutputParser
MRKL Output parser for the chat agent.
agents.openai_functions_agent.agent_token_buffer_memory.AgentTokenBufferMemory
Memory used to save agent output AND intermediate steps.
agents.openai_functions_agent.base.OpenAIFunctionsAgent
An agent driven by OpenAI's function-calling API.
agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent
An agent driven by OpenAI's function-calling API.
agents.output_parsers.json.JSONAgentOutputParser
Parses tool invocations and final answers in JSON format.
agents.output_parsers.openai_functions.OpenAIFunctionsAgentOutputParser
Parses a message into agent action/finish.
agents.output_parsers.react_json_single_input.ReActJsonSingleInputOutputParser
Parses ReAct-style LLM calls that have a single tool input in json format.
agents.output_parsers.react_single_input.ReActSingleInputOutputParser
Parses ReAct-style LLM calls that have a single tool input.
agents.output_parsers.self_ask.SelfAskOutputParser
Parses self-ask style LLM calls.
agents.output_parsers.xml.XMLAgentOutputParser
Parses tool invocations and final answers in XML format.
agents.react.base.DocstoreExplorer(docstore)
Class to assist with exploration of a document store.
agents.react.base.ReActChain
[Deprecated] Chain that implements the ReAct paper.
agents.react.base.ReActDocstoreAgent
Agent for the ReAct chain.
agents.react.base.ReActTextWorldAgent
Agent for the ReAct TextWorld chain.
agents.react.output_parser.ReActOutputParser
Output parser for the ReAct agent.
agents.schema.AgentScratchPadChatPromptTemplate
Chat prompt template for the agent scratchpad.
agents.self_ask_with_search.base.SelfAskWithSearchAgent
Agent for the self-ask-with-search paper.
agents.self_ask_with_search.base.SelfAskWithSearchChain
[Deprecated] Chain that does self-ask with search.
agents.structured_chat.base.StructuredChatAgent
Structured Chat Agent.
agents.structured_chat.output_parser.StructuredChatOutputParser
Output parser for the structured chat agent.
agents.structured_chat.output_parser.StructuredChatOutputParserWithRetries
Output parser with retries for the structured chat agent.
agents.tools.InvalidTool
Tool that is run when invalid tool name is encountered by agent.
agents.xml.base.XMLAgent
Agent that uses XML tags.
Functions¶
agents.agent_iterator.rebuild_callback_manager_on_set(...)
Decorator to force setters to rebuild the callback manager.
agents.agent_toolkits.conversational_retrieval.openai_functions.create_conversational_retrieval_agent(...)
A convenience method for creating a conversational retrieval agent.
agents.agent_toolkits.conversational_retrieval.tool.create_retriever_tool(...)
Create a tool to do retrieval of documents.
agents.agent_toolkits.csv.base.create_csv_agent(...)
Create csv agent by loading to a dataframe and using pandas agent.
agents.agent_toolkits.json.base.create_json_agent(...)
Construct a json agent from an LLM and tools.
agents.agent_toolkits.openapi.base.create_openapi_agent(...)
Construct an OpenAPI agent from an LLM and tools.
agents.agent_toolkits.openapi.planner.create_openapi_agent(...)
Instantiate OpenAI API planner and controller for a given spec.
agents.agent_toolkits.openapi.spec.reduce_openapi_spec(spec)
Simplify an OpenAPI spec by reducing it to its essential endpoints and descriptions.
agents.agent_toolkits.pandas.base.create_pandas_dataframe_agent(llm, df)
Construct a pandas agent from an LLM and dataframe.
agents.agent_toolkits.powerbi.base.create_pbi_agent(llm)
Construct a Power BI agent from an LLM and tools.
agents.agent_toolkits.powerbi.chat_base.create_pbi_chat_agent(llm)
Construct a Power BI agent from a Chat LLM and tools.
agents.agent_toolkits.python.base.create_python_agent(...)
Construct a python agent from an LLM and tool.
agents.agent_toolkits.spark.base.create_spark_dataframe_agent(llm, df)
Construct a Spark agent from an LLM and dataframe.
agents.agent_toolkits.spark_sql.base.create_spark_sql_agent(...)
Construct a Spark SQL agent from an LLM and tools.
agents.agent_toolkits.sql.base.create_sql_agent(...)
Construct an SQL agent from an LLM and tools.
agents.agent_toolkits.vectorstore.base.create_vectorstore_agent(...)
Construct a VectorStore agent from an LLM and tools.
agents.agent_toolkits.vectorstore.base.create_vectorstore_router_agent(...)
Construct a VectorStore router agent from an LLM and tools.
agents.agent_toolkits.xorbits.base.create_xorbits_agent(...)
Construct a xorbits agent from an LLM and dataframe.
agents.format_scratchpad.log.format_log_to_str(...)
Construct the scratchpad that lets the agent continue its thought process.
agents.format_scratchpad.log_to_messages.format_log_to_messages(...)
Construct the scratchpad that lets the agent continue its thought process.
agents.format_scratchpad.openai_functions.format_to_openai_functions(...)
Format intermediate steps.
agents.format_scratchpad.xml.format_xml(...)
agents.initialize.initialize_agent(tools, llm)
Load an agent executor given tools and LLM.
agents.load_tools.get_all_tool_names()
Get a list of all possible tool names.
agents.load_tools.load_huggingface_tool(...)
Loads a tool from the HuggingFace Hub.
agents.load_tools.load_tools(tool_names[, ...])
Load tools based on their name.
agents.loading.load_agent(path, **kwargs)
Unified method for loading an agent from LangChainHub or local fs.
agents.loading.load_agent_from_config(config)
Load agent from Config Dict.
agents.utils.validate_tools_single_input(...)
Validate tools for single input.
langchain.agents.format_scratchpad¶
Logic for formatting intermediate steps into an agent scratchpad.
Intermediate steps refers to the list of (AgentAction, observation) tuples
that result from previous iterations of the agent.
Depending on the prompting strategy you are using, you may want to format these
differently before passing them into the LLM.
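As a rough illustration of what "formatting intermediate steps" means, the following mirrors the spirit of the string-style formatter (format_steps_to_str and its prefixes are a simplified stand-in invented for this sketch, not the library implementation):

```python
from typing import List, Tuple

def format_steps_to_str(
    intermediate_steps: List[Tuple[str, str]],  # (action_log, observation) pairs
    observation_prefix: str = "Observation: ",
    llm_prefix: str = "Thought: ",
) -> str:
    """Concatenate each action's log with its observation so the LLM can
    resume its chain of thought exactly where it left off."""
    thoughts = ""
    for log, observation in intermediate_steps:
        thoughts += log
        thoughts += f"\n{observation_prefix}{observation}\n{llm_prefix}"
    return thoughts

steps = [
    ("I should look up the population.\nAction: search\nAction Input: Tokyo",
     "about 14 million"),
]
print(format_steps_to_str(steps))
```

A message-style formatter does the same thing but emits chat messages instead of one concatenated string, which suits chat-model prompting strategies.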
Functions¶
agents.format_scratchpad.log.format_log_to_str(...)
Construct the scratchpad that lets the agent continue its thought process.
agents.format_scratchpad.log_to_messages.format_log_to_messages(...)
Construct the scratchpad that lets the agent continue its thought process.
agents.format_scratchpad.openai_functions.format_to_openai_functions(...)
Format intermediate steps.
agents.format_scratchpad.xml.format_xml(...)
langchain.agents.output_parsers¶
Parsing utils to go from string to AgentAction or Agent Finish.
AgentAction means that an action should be taken.
This contains the name of the tool to use, the input to pass to that tool,
and a log variable (which contains a log of the agent’s thinking).
AgentFinish means that a response should be given.
This contains a return_values dictionary. This usually contains a
single output key, but can be extended to contain more.
This also contains a log variable (which contains a log of the agent’s thinking).
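The two result shapes described above can be pictured with plain dataclasses and a toy parser. These are illustrative stand-ins only; the real AgentAction and AgentFinish classes live in langchain.schema, and the parsing format here is invented.

```python
from dataclasses import dataclass
from typing import Any, Dict, Union

@dataclass
class AgentAction:
    tool: str        # name of the tool to use
    tool_input: str  # input to pass to that tool
    log: str         # log of the agent's thinking

@dataclass
class AgentFinish:
    return_values: Dict[str, Any]  # usually a single "output" key
    log: str

def parse(llm_output: str) -> Union[AgentAction, AgentFinish]:
    """Toy parser: 'Final Answer: ...' finishes, 'tool: input' acts."""
    if "Final Answer:" in llm_output:
        answer = llm_output.split("Final Answer:")[-1].strip()
        return AgentFinish({"output": answer}, log=llm_output)
    tool, _, tool_input = llm_output.partition(":")
    return AgentAction(tool.strip(), tool_input.strip(), log=llm_output)

print(parse("search: capital of France"))
print(parse("Final Answer: Paris"))
```

Each parser class listed below implements this same decision (action or finish) for a different LLM output format: JSON, XML, ReAct-style text, and so on.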
Classes¶
agents.output_parsers.json.JSONAgentOutputParser
Parses tool invocations and final answers in JSON format.
agents.output_parsers.openai_functions.OpenAIFunctionsAgentOutputParser
Parses a message into agent action/finish.
agents.output_parsers.react_json_single_input.ReActJsonSingleInputOutputParser
Parses ReAct-style LLM calls that have a single tool input in json format.
agents.output_parsers.react_single_input.ReActSingleInputOutputParser
Parses ReAct-style LLM calls that have a single tool input.
agents.output_parsers.self_ask.SelfAskOutputParser
Parses self-ask style LLM calls.
agents.output_parsers.xml.XMLAgentOutputParser
Parses tool invocations and final answers in XML format.
langchain.cache¶
Warning
Beta Feature!
Cache provides an optional caching layer for LLMs.
Cache is useful for two reasons:
It can save you money by reducing the number of API calls you make to the LLM
provider if you’re often requesting the same completion multiple times.
It can speed up your application by reducing the number of API calls you make
to the LLM provider.
Cache directly competes with Memory. See documentation for Pros and Cons.
Class hierarchy:
BaseCache --> <name>Cache # Examples: InMemoryCache, RedisCache, GPTCache
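A minimal sketch of the BaseCache pattern, keyed on the prompt plus a string describing the LLM configuration. InMemoryCacheSketch is an illustrative stand-in written for this sketch, not langchain.cache.InMemoryCache.

```python
from typing import Dict, Optional, Tuple

class InMemoryCacheSketch:
    """Toy cache keyed on (prompt, llm_string) pairs, mirroring the
    lookup/update shape of the BaseCache interface described above."""

    def __init__(self) -> None:
        self._store: Dict[Tuple[str, str], str] = {}

    def lookup(self, prompt: str, llm_string: str) -> Optional[str]:
        # None means a miss: the caller should invoke the LLM, then update().
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: str) -> None:
        self._store[(prompt, llm_string)] = return_val

cache = InMemoryCacheSketch()
assert cache.lookup("Tell me a joke", "openai:temperature=0") is None  # miss
cache.update("Tell me a joke", "openai:temperature=0", "Why did the chicken...")
assert cache.lookup("Tell me a joke", "openai:temperature=0") == "Why did the chicken..."
# A different model configuration is a different key, so it misses:
assert cache.lookup("Tell me a joke", "openai:temperature=1") is None
```

Keying on the model configuration as well as the prompt matters: the same prompt can yield different completions under different model settings, so they must not share a cache entry.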
Classes¶
cache.CassandraCache([session, keyspace, ...])
Cache that uses Cassandra / Astra DB as a backend.
cache.CassandraSemanticCache(session, ...[, ...])
Cache that uses Cassandra as a vector-store backend for semantic (i.e. similarity-based) lookup.
cache.FullLLMCache(**kwargs)
SQLite table for full LLM Cache (all generations).
cache.GPTCache([init_func])
Cache that uses GPTCache as a backend.
cache.InMemoryCache()
Cache that stores things in memory.
cache.MomentoCache(cache_client, cache_name, *)
Cache that uses Momento as a backend.
cache.RedisCache(redis_, *[, ttl])
Cache that uses Redis as a backend.
cache.RedisSemanticCache(redis_url, embedding)
Cache that uses Redis as a vector-store backend.
cache.SQLAlchemyCache(engine, cache_schema)
Cache that uses SQLAlchemy as a backend.
cache.SQLiteCache([database_path])
Cache that uses SQLite as a backend.
Functions¶
langchain.callbacks¶
Callback handlers allow listening to events in LangChain.
Class hierarchy:
BaseCallbackHandler --> <name>CallbackHandler # Example: AimCallbackHandler
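The handler pattern is simple: subclass the base handler, override only the event hooks you care about, and let a callback manager fan each event out to every registered handler. A self-contained toy version of that pattern (all names here are illustrative stand-ins, not the real LangChain classes):

```python
from typing import Any, List

class BaseHandlerSketch:
    """Illustrative base handler: every hook is a no-op unless overridden."""
    def on_llm_start(self, prompts: List[str], **kwargs: Any) -> None: ...
    def on_llm_end(self, response: str, **kwargs: Any) -> None: ...

class LoggingHandler(BaseHandlerSketch):
    """Records each event it receives."""
    def __init__(self) -> None:
        self.events: List[str] = []
    def on_llm_start(self, prompts: List[str], **kwargs: Any) -> None:
        self.events.append(f"start:{len(prompts)} prompt(s)")
    def on_llm_end(self, response: str, **kwargs: Any) -> None:
        self.events.append(f"end:{response}")

class CallbackManagerSketch:
    """Fans each event out to every registered handler."""
    def __init__(self, handlers: List[BaseHandlerSketch]) -> None:
        self.handlers = handlers
    def fire(self, event: str, *args: Any, **kwargs: Any) -> None:
        for handler in self.handlers:
            getattr(handler, event)(*args, **kwargs)

handler = LoggingHandler()
manager = CallbackManagerSketch([handler])
manager.fire("on_llm_start", ["Hello?"])
manager.fire("on_llm_end", "Hi there!")
print(handler.events)
```

Each integration handler listed below (Aim, Argilla, MLflow, and so on) is essentially this: a subclass that forwards the hook payloads to an external service.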
Classes¶
callbacks.aim_callback.AimCallbackHandler([...])
Callback Handler that logs to Aim.
callbacks.aim_callback.BaseMetadataCallbackHandler()
This class handles the metadata and associated function states for callbacks.
callbacks.argilla_callback.ArgillaCallbackHandler(...)
Callback Handler that logs into Argilla.
callbacks.arize_callback.ArizeCallbackHandler([...])
Callback Handler that logs to Arize.
callbacks.arthur_callback.ArthurCallbackHandler(...)
Callback Handler that logs to Arthur platform.
callbacks.base.AsyncCallbackHandler()
Async callback handler that can be used to handle callbacks from langchain.
callbacks.base.BaseCallbackHandler()
Base callback handler that can be used to handle callbacks from langchain.
callbacks.base.BaseCallbackManager(handlers)
Base callback manager that handles callbacks from LangChain.
callbacks.base.CallbackManagerMixin()
Mixin for callback manager.
callbacks.base.ChainManagerMixin()
Mixin for chain callbacks.
callbacks.base.LLMManagerMixin()
Mixin for LLM callbacks.
callbacks.base.RetrieverManagerMixin()
Mixin for Retriever callbacks.
callbacks.base.RunManagerMixin()
Mixin for run manager.
callbacks.base.ToolManagerMixin()
Mixin for tool callbacks.
callbacks.clearml_callback.ClearMLCallbackHandler([...])
Callback Handler that logs to ClearML.
callbacks.comet_ml_callback.CometCallbackHandler([...])
Callback Handler that logs to Comet.
callbacks.confident_callback.DeepEvalCallbackHandler(metrics)
Callback Handler that logs into deepeval.
callbacks.context_callback.ContextCallbackHandler([...])
Callback Handler that records transcripts to the Context service.
callbacks.file.FileCallbackHandler(filename)
Callback Handler that writes to a file.
callbacks.flyte_callback.FlyteCallbackHandler()
Callback handler used within a Flyte task.
callbacks.human.HumanApprovalCallbackHandler(...)
Callback for manually validating values.
callbacks.human.HumanRejectedException
Exception raised when a person manually reviews and rejects a value.
callbacks.infino_callback.InfinoCallbackHandler([...])
Callback Handler that logs to Infino.
callbacks.labelstudio_callback.LabelStudioCallbackHandler([...])
Label Studio callback handler.
callbacks.labelstudio_callback.LabelStudioMode(value)
callbacks.llmonitor_callback.LLMonitorCallbackHandler([...])
Initializes the LLMonitorCallbackHandler. Parameters: app_id - the id of the app to report to (defaults to the LLMONITOR_APP_ID environment variable); api_url - the URL of the LLMonitor API (defaults to the LLMONITOR_API_URL environment variable or https://app.llmonitor.com).
callbacks.llmonitor_callback.UserContextManager(user_id)
callbacks.manager.AsyncCallbackManager(handlers)
Async callback manager that handles callbacks from LangChain.
callbacks.manager.AsyncCallbackManagerForChainGroup(...)
Initialize callback manager.
callbacks.manager.AsyncCallbackManagerForChainRun(*, ...)
Async callback manager for chain run.
callbacks.manager.AsyncCallbackManagerForLLMRun(*, ...)
Async callback manager for LLM run.
callbacks.manager.AsyncCallbackManagerForRetrieverRun(*, ...)
Async callback manager for retriever run.
callbacks.manager.AsyncCallbackManagerForToolRun(*, ...)
Async callback manager for tool run.
callbacks.manager.AsyncParentRunManager(*, ...)
Async Parent Run Manager.
callbacks.manager.AsyncRunManager(*, run_id, ...)
Async Run Manager.
callbacks.manager.BaseRunManager(*, run_id, ...)
Base class for run manager (a bound callback manager).
callbacks.manager.CallbackManager(handlers)
Callback manager that handles callbacks from LangChain.
callbacks.manager.CallbackManagerForChainGroup(...)
Initialize callback manager.
callbacks.manager.CallbackManagerForChainRun(*, ...)
Callback manager for chain run.
callbacks.manager.CallbackManagerForLLMRun(*, ...)
Callback manager for LLM run.
callbacks.manager.CallbackManagerForRetrieverRun(*, ...)
Callback manager for retriever run.
callbacks.manager.CallbackManagerForToolRun(*, ...)
Callback manager for tool run.
callbacks.manager.ParentRunManager(*, ...[, ...])
Sync Parent Run Manager.
callbacks.manager.RunManager(*, run_id, ...)
Sync Run Manager.
callbacks.mlflow_callback.MlflowCallbackHandler([...])
Callback Handler that logs metrics and artifacts to mlflow server.
callbacks.mlflow_callback.MlflowLogger(**kwargs)
Callback Handler that logs metrics and artifacts to mlflow server.
callbacks.openai_info.OpenAICallbackHandler()
Callback Handler that tracks OpenAI info.
callbacks.promptlayer_callback.PromptLayerCallbackHandler([...])
Callback handler for promptlayer.
callbacks.sagemaker_callback.SageMakerCallbackHandler(run)
Callback Handler that logs prompt artifacts and metrics to SageMaker Experiments.
callbacks.stdout.StdOutCallbackHandler([color])
Callback Handler that prints to std out.
callbacks.streaming_aiter.AsyncIteratorCallbackHandler()
Callback handler that returns an async iterator.
callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler(*)
Callback handler that returns an async iterator.
callbacks.streaming_stdout.StreamingStdOutCallbackHandler()
Callback handler for streaming.
callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler(*)
Callback handler for streaming in agents.
callbacks.streamlit.mutable_expander.ChildRecord(...)
The child record as a NamedTuple.
callbacks.streamlit.mutable_expander.ChildType(value)
The enumerator of the child type.
callbacks.streamlit.mutable_expander.MutableExpander(...)
A Streamlit expander that can be renamed and dynamically expanded/collapsed.
callbacks.streamlit.streamlit_callback_handler.LLMThought(...)
A thought in the LLM's thought stream.
callbacks.streamlit.streamlit_callback_handler.LLMThoughtLabeler()
Generates markdown labels for LLMThought containers.
callbacks.streamlit.streamlit_callback_handler.LLMThoughtState(value)
Enumerator of the LLMThought state.
callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler(...)
A callback handler that writes to a Streamlit app.
callbacks.streamlit.streamlit_callback_handler.ToolRecord(...)
The tool record as a NamedTuple.
callbacks.tracers.base.BaseTracer(**kwargs)
Base interface for tracers.
callbacks.tracers.base.TracerException
Base class for exceptions in tracers module.
callbacks.tracers.evaluation.EvaluatorCallbackHandler(...)
A tracer that runs a run evaluator whenever a run is persisted.
callbacks.tracers.langchain.LangChainTracer([...])
An implementation of the SharedTracer that POSTS to the langchain endpoint.
callbacks.tracers.langchain_v1.LangChainTracerV1(...)
An implementation of the SharedTracer that POSTS to the langchain endpoint.
callbacks.tracers.log_stream.LogEntry
callbacks.tracers.log_stream.LogStreamCallbackHandler(*)
callbacks.tracers.log_stream.RunLog(*ops, state)
callbacks.tracers.log_stream.RunLogPatch(*ops)
callbacks.tracers.log_stream.RunState
callbacks.tracers.run_collector.RunCollectorCallbackHandler([...])
A tracer that collects all nested runs in a list.
callbacks.tracers.schemas.BaseRun
Base class for Run.
callbacks.tracers.schemas.ChainRun
Class for ChainRun.
callbacks.tracers.schemas.LLMRun
Class for LLMRun.
callbacks.tracers.schemas.Run
Run schema for the V2 API in the Tracer.
callbacks.tracers.schemas.ToolRun
Class for ToolRun.
callbacks.tracers.schemas.TracerSession
TracerSessionV1 schema for the V2 API.
callbacks.tracers.schemas.TracerSessionBase
Base class for TracerSession.
callbacks.tracers.schemas.TracerSessionV1
TracerSessionV1 schema.
callbacks.tracers.schemas.TracerSessionV1Base
Base class for TracerSessionV1.
callbacks.tracers.schemas.TracerSessionV1Create
Create class for TracerSessionV1.
callbacks.tracers.stdout.ConsoleCallbackHandler(...)
Tracer that prints to the console.
callbacks.tracers.stdout.FunctionCallbackHandler(...)
Tracer that calls a function with a single str parameter.
callbacks.tracers.wandb.RunProcessor(...)
Handles the conversion of LangChain Runs into a WBTraceTree.
callbacks.tracers.wandb.WandbRunArgs
Arguments for the WandbTracer.
callbacks.tracers.wandb.WandbTracer([run_args])
Callback Handler that logs to Weights and Biases.
callbacks.trubrics_callback.TrubricsCallbackHandler([...])
Callback handler for Trubrics.
callbacks.utils.BaseMetadataCallbackHandler()
This class handles the metadata and associated function states for callbacks.
callbacks.wandb_callback.WandbCallbackHandler([...])
Callback Handler that logs to Weights and Biases.
callbacks.whylabs_callback.WhyLabsCallbackHandler(...)
Callback Handler for logging to WhyLabs.
Functions
callbacks.aim_callback.import_aim()
Import the aim python package and raise an error if it is not installed.
callbacks.clearml_callback.import_clearml()
Import the clearml python package and raise an error if it is not installed.
callbacks.comet_ml_callback.import_comet_ml()
Import comet_ml and raise an error if it is not installed.
callbacks.context_callback.import_context()
Import the getcontext package.
callbacks.flyte_callback.analyze_text(text)
Analyze text using textstat and spacy.
callbacks.flyte_callback.import_flytekit()
Import flytekit and flytekitplugins-deck-standard.
callbacks.infino_callback.import_infino()
Import the infino client.
callbacks.labelstudio_callback.get_default_label_configs(mode)
callbacks.llmonitor_callback.identify(user_id)
callbacks.manager.atrace_as_chain_group(...)
Get an async callback manager for a chain group in a context manager.
callbacks.manager.collect_runs()
Collect all run traces in context.
callbacks.manager.env_var_is_set(env_var)
Check if an environment variable is set.
callbacks.manager.get_openai_callback()
Get the OpenAI callback handler in a context manager.
callbacks.manager.trace_as_chain_group(...)
Get a callback manager for a chain group in a context manager.
callbacks.manager.tracing_enabled([session_name])
Get the deprecated LangChainTracer in a context manager.
callbacks.manager.tracing_v2_enabled([...])
Instruct LangChain to log all runs in context to LangSmith.
callbacks.manager.wandb_tracing_enabled([...])
Get the WandbTracer in a context manager.
callbacks.mlflow_callback.analyze_text(text)
Analyze text using textstat and spacy.
callbacks.mlflow_callback.construct_html_from_prompt_and_generation(...)
Construct an HTML element from a prompt and a generation.
callbacks.mlflow_callback.import_mlflow()
Import the mlflow python package and raise an error if it is not installed.
callbacks.openai_info.get_openai_token_cost_for_model(...)
Get the cost in USD for a given model and number of tokens.
callbacks.openai_info.standardize_model_name(...)
Standardize the model name to a format that can be used in the OpenAI API.
callbacks.sagemaker_callback.save_json(data, ...)
Save dict to local file path.
callbacks.tracers.evaluation.wait_for_all_evaluators()
Wait for all tracers to finish.
callbacks.tracers.langchain.get_client()
Get the client.
callbacks.tracers.langchain.log_error_once(...)
Log an error once.
callbacks.tracers.langchain.wait_for_all_tracers()
Wait for all tracers to finish.
callbacks.tracers.langchain_v1.get_headers()
Get the headers for the LangChain API.
callbacks.tracers.schemas.RunTypeEnum()
RunTypeEnum.
callbacks.tracers.stdout.elapsed(run)
Get the elapsed time of a run.
callbacks.tracers.stdout.try_json_stringify(...)
Try to stringify an object to JSON.
callbacks.utils.flatten_dict(nested_dict[, ...])
Flattens a nested dictionary into a flat dictionary.
callbacks.utils.hash_string(s)
Hash a string using sha1.
callbacks.utils.import_pandas()
Import the pandas python package and raise an error if it is not installed.
callbacks.utils.import_spacy()
Import the spacy python package and raise an error if it is not installed.
callbacks.utils.import_textstat()
Import the textstat python package and raise an error if it is not installed.
callbacks.utils.load_json(json_path)
Load json file to a string.
callbacks.wandb_callback.analyze_text(text)
Analyze text using textstat and spacy.
callbacks.wandb_callback.construct_html_from_prompt_and_generation(...)
Construct an HTML element from a prompt and a generation.
callbacks.wandb_callback.import_wandb()
Import the wandb python package and raise an error if it is not installed.
callbacks.wandb_callback.load_json_to_dict(...)
Load json file to a dictionary.
callbacks.whylabs_callback.import_langkit([...])
Import the langkit python package and raise an error if it is not installed.
langchain.chains
Chains are easily reusable components linked together.
Chains encode a sequence of calls to components like models, document retrievers,
other Chains, etc., and provide a simple interface to this sequence.
The Chain interface makes it easy to create apps that are:
Stateful: add Memory to any Chain to give it state,
Observable: pass Callbacks to a Chain to execute additional functionality,
like logging, outside the main sequence of component calls,
Composable: combine Chains with other components, including other Chains.
Class hierarchy:
Chain --> <name>Chain # Examples: LLMChain, MapReduceChain, RouterChain
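The hierarchy and the composability property described above can be sketched in plain Python. This is an illustrative stand-in under stated assumptions, not the real langchain.chains API: the class names UppercaseChain and SequentialToyChain are hypothetical toys, and the actual Chain base class carries a much richer interface (callbacks, memory, input/output key validation).

```python
# Minimal sketch of the Chain --> <name>Chain pattern.
# Hypothetical classes for illustration only; not langchain's real API.
from abc import ABC, abstractmethod


class Chain(ABC):
    """Abstract base: a structured sequence of calls to components."""

    @abstractmethod
    def _call(self, inputs: dict) -> dict:
        """Run this chain's logic on a dict of inputs."""

    def run(self, inputs: dict) -> dict:
        # The real Chain would fire Callbacks and consult Memory here.
        return self._call(inputs)


class UppercaseChain(Chain):
    """Toy <name>Chain: transforms the 'text' input."""

    def _call(self, inputs: dict) -> dict:
        return {"text": inputs["text"].upper()}


class SequentialToyChain(Chain):
    """Composable: a Chain built from other Chains, run in order."""

    def __init__(self, chains):
        self.chains = chains

    def _call(self, inputs: dict) -> dict:
        for chain in self.chains:
            inputs = chain.run(inputs)
        return inputs


pipeline = SequentialToyChain([UppercaseChain()])
print(pipeline.run({"text": "hello"}))  # {'text': 'HELLO'}
```

The design choice mirrored here is that every concrete chain implements only `_call`, while the shared `run` entry point is where cross-cutting concerns (state via Memory, observability via Callbacks) attach in the real library.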
Classes
chains.api.base.APIChain
Chain that makes API calls and summarizes the responses to answer a question.
chains.api.openapi.chain.OpenAPIEndpointChain
Chain interacts with an OpenAPI endpoint using natural language.
chains.api.openapi.requests_chain.APIRequesterChain
Get the request parser.
chains.api.openapi.requests_chain.APIRequesterOutputParser
Parse the request and error tags.
chains.api.openapi.response_chain.APIResponderChain
Get the response parser.
chains.api.openapi.response_chain.APIResponderOutputParser
Parse the response and error tags.
chains.base.Chain
Abstract base class for creating structured sequences of calls to components.
chains.combine_documents.base.AnalyzeDocumentChain
Chain that splits documents, then analyzes them in pieces.
chains.combine_documents.base.BaseCombineDocumentsChain
Base interface for chains combining documents.
chains.combine_documents.map_reduce.MapReduceDocumentsChain
Combining documents by mapping a chain over them, then combining results.
chains.combine_documents.map_rerank.MapRerankDocumentsChain
Combining documents by mapping a chain over them, then reranking results.
chains.combine_documents.reduce.AsyncCombineDocsProtocol(...)
Interface for the combine_docs method.
chains.combine_documents.reduce.CombineDocsProtocol(...)
Interface for the combine_docs method.
chains.combine_documents.reduce.ReduceDocumentsChain
Combine documents by recursively reducing them.
chains.combine_documents.refine.RefineDocumentsChain
Combine documents by doing a first pass and then refining on more documents.
chains.combine_documents.stuff.StuffDocumentsChain
Chain that combines documents by stuffing into context.
chains.constitutional_ai.base.ConstitutionalChain
Chain for applying constitutional principles.
chains.constitutional_ai.models.ConstitutionalPrinciple
Class for a constitutional principle.
chains.conversation.base.ConversationChain
Chain to have a conversation and load context from memory.
chains.conversational_retrieval.base.BaseConversationalRetrievalChain
Chain for chatting with an index.
chains.conversational_retrieval.base.ChatVectorDBChain
Chain for chatting with a vector database.
chains.conversational_retrieval.base.ConversationalRetrievalChain
Chain for having a conversation based on retrieved documents.
chains.elasticsearch_database.base.ElasticsearchDatabaseChain
Chain for interacting with Elasticsearch Database.
chains.flare.base.FlareChain
Chain that combines a retriever, a question generator, and a response generator.
chains.flare.base.QuestionGeneratorChain
Chain that generates questions from uncertain spans.
chains.flare.prompts.FinishedOutputParser
Output parser that checks if the output is finished.
chains.graph_qa.arangodb.ArangoGraphQAChain
Chain for question-answering against a graph by generating AQL statements.
chains.graph_qa.base.GraphQAChain
Chain for question-answering against a graph.
chains.graph_qa.cypher.GraphCypherQAChain
Chain for question-answering against a graph by generating Cypher statements.
chains.graph_qa.falkordb.FalkorDBQAChain
Chain for question-answering against a graph by generating Cypher statements.
chains.graph_qa.hugegraph.HugeGraphQAChain
Chain for question-answering against a graph by generating gremlin statements.
chains.graph_qa.kuzu.KuzuQAChain
Chain for question-answering against a graph by generating Cypher statements for Kùzu.
chains.graph_qa.nebulagraph.NebulaGraphQAChain
Chain for question-answering against a graph by generating nGQL statements.
chains.graph_qa.neptune_cypher.NeptuneOpenCypherQAChain
Chain for question-answering against a Neptune graph by generating openCypher statements.
chains.graph_qa.sparql.GraphSparqlQAChain
Chain for question-answering against an RDF or OWL graph by generating SPARQL statements.
chains.hyde.base.HypotheticalDocumentEmbedder
Generate hypothetical document for query, and then embed that.
chains.llm.LLMChain
Chain to run queries against LLMs.
chains.llm_bash.base.LLMBashChain
Chain that interprets a prompt and executes bash operations.
chains.llm_bash.prompt.BashOutputParser
Parser for bash output.
chains.llm_checker.base.LLMCheckerChain
Chain for question-answering with self-verification.
chains.llm_math.base.LLMMathChain
Chain that interprets a prompt and executes python code to do math.
chains.llm_requests.LLM |