Tonic committed (verified) · Commit 229c4e4 · 1 parent: b869f27

Update README.md

Files changed (1):
  1. README.md +339 -1
README.md CHANGED
@@ -7,4 +7,342 @@ sdk: static
  pinned: false
  ---

- Edit this `README.md` markdown file to author your organization card.
# LangGraph Agent Chat UI

This project provides a simple, intuitive user interface (UI) for interacting with LangGraph agents. It's built with React and Vite, offering a responsive chat-like experience for testing and demonstrating your LangGraph deployments. It's designed to work seamlessly with LangGraph's core concepts, including checkpoints, thread management, and human-in-the-loop capabilities.

## Features

* **Easy Connection:** Connect to both local and production LangGraph deployments by simply providing the deployment URL and graph ID (the path used when defining the graph).
* **Chat Interface:** Interact with your agents through a familiar chat interface, sending and receiving messages in real-time. The UI manages the conversation thread, automatically using checkpoints for persistence.
* **Tool Call Rendering:** The UI automatically renders tool calls and their results, making it easy to visualize the agent's actions. This is compatible with LangGraph's [tool calling and function calling capabilities](https://python.langchain.com/docs/guides/tools/custom_tools).
* **Human-in-the-Loop Support:** Seamlessly integrate human intervention using LangGraph's `interrupt` function. The UI presents a dedicated interface for reviewing, editing, and responding to interrupt requests (e.g., for approval or modification of agent actions), following the standardized schema.
* **Thread History:** View and navigate through past chat threads, enabling you to review previous interactions. This leverages LangGraph's checkpointing for persistent conversation history.
* **Time Travel and Forking:** Leverage LangGraph's powerful state management features, including [checkpointing](https://python.langchain.com/docs/modules/agents/concepts#checkpointing) and thread manipulation. Run the graph from specific checkpoints, explore different execution paths, and edit previous messages.
* **State Inspection:** Examine the current state of your LangGraph thread for debugging and understanding the agent's internal workings. This allows you to inspect the full state object managed by LangGraph.
* **Multiple Deployment Options:**
  * **Deployed Site:** Use the hosted version at [agentchat.vercel.app](https://agentchat.vercel.app/)
  * **Local Development:** Clone the repository and run it locally for development and customization.
  * **Quick Setup:** Use `npx create-agent-chat-app` for a fast, streamlined setup.
* **LangSmith API Key:** When using a production deployment, you must provide a LangSmith API key.

## Getting Started

There are three main ways to use the Agent Chat UI:

### 1. Using the Deployed Site (Easiest)

1. **Navigate:** Go to [agentchat.vercel.app](https://agentchat.vercel.app/).
2. **Enter Details:**
   * **Deployment URL:** The URL of your LangGraph deployment (e.g., `http://localhost:2024` for a local deployment using LangServe, or the URL provided by LangSmith for a production deployment).
   * **Assistant / Graph ID:** The path of the graph you want to interact with (e.g., `chat`, `email_agent`). This is defined when adding routes with `add_routes(..., path="/your_path")`.
   * **LangSmith API Key** (Production Deployments Only): If you are connecting to a deployment hosted on LangSmith, you will need to provide your LangSmith API key for authentication. *This is NOT required for local LangGraph servers.* The key is stored locally in your browser's storage.
3. **Click "Continue":** You'll be taken to the chat interface, ready to interact with your agent.

### 2. Local Development (Full Control)

1. **Clone the Repository:**

   ```bash
   git clone https://github.com/langchain-ai/agent-chat-ui.git
   cd agent-chat-ui
   ```

2. **Install Dependencies:**

   ```bash
   pnpm install # Or npm install, or yarn install
   ```

3. **Start the Development Server:**

   ```bash
   pnpm dev # Or npm run dev, or yarn dev
   ```

4. **Open in Browser:** The application will typically be available at `http://localhost:5173` (the port may vary; check your terminal output). Follow the instructions in "Using the Deployed Site" to connect to your LangGraph.

### 3. Quick Setup with `npx create-agent-chat-app`

This method creates a new project directory with the Agent Chat UI already set up.

1. **Run the Command:**

   ```bash
   npx create-agent-chat-app
   ```

2. **Follow Prompts:** You'll be prompted for a project name (default is `agent-chat-app`).

3. **Navigate to Project Directory:**

   ```bash
   cd agent-chat-app
   ```

4. **Install and Run:**

   ```bash
   pnpm install # Or npm install, or yarn install
   pnpm dev # Or npm run dev, or yarn dev
   ```

5. **Open in Browser:** The application will be available at `http://localhost:5173`. Follow the instructions in "Using the Deployed Site" to connect.

## LangGraph Setup (Prerequisites)

Before using the Agent Chat UI, you need a running LangGraph agent served via LangServe. Below are examples using both a simple agent and an agent with human-in-the-loop.

### Basic LangGraph Example (Python)

```python
# agent.py (Example LangGraph agent - Python)
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import chain
from langchain_openai import ChatOpenAI
from langchain_core.messages import AIMessage, HumanMessage
from langgraph.prebuilt import create_agent_executor
from langchain_core.tools import tool

# FastAPI and LangServe for serving the graph
from fastapi import FastAPI
from langserve import add_routes


@tool
def get_weather(city: str):
    """
    Gets the weather for a specified city
    """
    if city.lower() == "new york":
        return "The weather in New York is nice today with a high of 75F."
    else:
        return "The weather for that city is not supported"


# Define the tools
tools = [get_weather]

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        MessagesPlaceholder(variable_name="messages"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

model = ChatOpenAI(temperature=0).bind_tools(tools)


@chain
def transform_messages(data):
    messages = data["messages"]
    if not isinstance(messages[-1], HumanMessage):
        messages.append(
            AIMessage(
                content="I don't know how to respond to messages other than a final answer"
            )
        )
    return {"messages": messages}


agent = (
    {
        "messages": transform_messages,
        "agent_scratchpad": lambda x: [],  # The executor fills in intermediate steps
    }
    | prompt
    | model
)

# Wrap the agent runnable in a LangGraph agent executor
app = create_agent_executor(agent, tools)

# Serve the graph using FastAPI and LangServe
fastapi_app = FastAPI(
    title="LangGraph Agent",
    version="1.0",
    description="A simple LangGraph agent server",
)

# Mount LangServe at the /chat endpoint
add_routes(
    fastapi_app,
    app,
    path="/chat",  # Matches the graph ID we'll use in the UI
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(fastapi_app, host="localhost", port=2024)
```
To run this example:

1. Save the code as `agent.py`.
2. Install the necessary packages: `pip install langchain langchain-core langchain-openai langgraph fastapi uvicorn "langserve[all]"` (add any other packages for your tools).
3. Set your OpenAI API key: `export OPENAI_API_KEY="your-openai-api-key"`
4. Run the script: `python agent.py`
5. Your LangGraph agent will be running at `http://localhost:2024/chat`, and the graph ID to enter into the UI is `chat`.

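Once the server is running, you can sanity-check it outside the UI. LangServe exposes a standard `/invoke` route under each mounted path, taking a JSON body with an `"input"` envelope. The helper below is a hypothetical sketch using only the standard library; the URL and message shape assume the `/chat` example above.

```python
import json
import urllib.request


def build_invoke_payload(question: str) -> dict:
    """Wrap a user question in LangServe's expected {"input": ...} envelope."""
    return {"input": {"messages": [{"type": "human", "content": question}]}}


def invoke_chat(question: str, base_url: str = "http://localhost:2024/chat") -> dict:
    """POST to the graph's standard /invoke endpoint and decode the JSON reply."""
    req = urllib.request.Request(
        f"{base_url}/invoke",
        data=json.dumps(build_invoke_payload(question)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

For example, `invoke_chat("What's the weather in New York?")` should return the graph's output once the server from the example above is running.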
### LangGraph with Human-in-the-Loop Example (Python)

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import chain
from langchain_openai import ChatOpenAI
from langchain_core.messages import AIMessage, HumanMessage
from langgraph.prebuilt import create_agent_executor
from langgraph.types import interrupt
from langchain_core.tools import tool
from fastapi import FastAPI
from langserve import add_routes


@tool
def write_email(subject: str, body: str, to: str):
    """
    Drafts an email with a specified subject, body and recipient
    """
    print(f"Writing email with subject '{subject}' to '{to}'")  # Debugging
    return f"Draft email to {to} with subject {subject} sent."


tools = [write_email]

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant that drafts emails."),
        MessagesPlaceholder(variable_name="messages"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

model = ChatOpenAI(temperature=0, model="gpt-4-turbo-preview").bind_tools(tools)


@chain
def transform_messages(data):
    messages = data["messages"]
    if not isinstance(messages[-1], HumanMessage):
        messages.append(
            AIMessage(
                content="I don't know how to respond to messages other than a final answer"
            )
        )
    return {"messages": messages}


def handle_interrupt(state):
    """Handles human-in-the-loop interruptions."""
    print("---INTERRUPT---")  # Debugging
    messages = state["messages"]
    last_message = messages[-1]

    if isinstance(last_message, AIMessage) and last_message.tool_calls:
        # Find the tool call
        for tool_call in last_message.tool_calls:
            tool_name = tool_call["name"]
            tool_args = tool_call["args"]
            if tool_name == "write_email":
                # Construct the human interrupt request
                interrupt_data = {
                    "type": "interrupt",
                    "args": {
                        "type": "response",
                        "studio": {  # optional
                            "subject": tool_args["subject"],
                            "body": tool_args["body"],
                            "to": tool_args["to"],
                        },
                        "description": "Response Instruction: \n\n- **Response**: Any response submitted will be passed to an LLM to rewrite the email. It can rewrite the email body, subject, or recipient.\n\n- **Edit or Accept**: Editing/Accepting the email.",
                    },
                }
                # Pause the graph; `interrupt` returns the human's response
                # once execution resumes.
                human_response = interrupt(interrupt_data)
                messages.append(HumanMessage(content=str(human_response)))
                return {"messages": messages}
    return {"messages": messages}


agent = (
    {
        "messages": transform_messages,
        "agent_scratchpad": lambda x: x.get("agent_scratchpad", []),
    }
    | prompt
    | model
    | handle_interrupt  # Add the interrupt handler
)

# Wrap the agent runnable in a LangGraph agent executor
app = create_agent_executor(agent, tools)

# Serve the graph using FastAPI and LangServe
fastapi_app = FastAPI(
    title="LangGraph Agent",
    version="1.0",
    description="A simple LangGraph agent server",
)

# Mount LangServe at the /email_agent endpoint
add_routes(
    fastapi_app,
    app,
    path="/email_agent",  # Matches the graph ID we'll use in the UI
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(fastapi_app, host="localhost", port=2024)
```
To run this example:

1. Save the code as `agent.py`.
2. Install the necessary packages: `pip install langchain langchain-core langchain-openai langgraph fastapi uvicorn "langserve[all]"` (add any other packages for your tools).
3. Set your OpenAI API key: `export OPENAI_API_KEY="your-openai-api-key"`
4. Run the script: `python agent.py`
5. Your LangGraph agent will be running at `http://localhost:2024/email_agent`, and the graph ID to enter into the UI is `email_agent`.

## Key Concepts (LangGraph Integration)

* **Messages Key:** The Agent Chat UI expects your LangGraph state to include a `messages` key, which holds a list of `langchain_core.messages.BaseMessage` instances (e.g., `HumanMessage`, `AIMessage`, `SystemMessage`, `ToolMessage`). This is standard practice in LangChain and LangGraph for conversational agents.
* **Checkpoints:** The UI automatically utilizes LangGraph's checkpointing mechanism to save and restore the conversation state. This ensures that you can resume conversations and explore different branches without losing progress.
* **`add_routes` and `path`:** The `path` argument in `add_routes` (from `langserve`) determines the "Graph ID" that you'll enter in the UI. This is crucial for the UI to connect to the correct LangGraph endpoint.
* **Tool Calling:** If you use `bind_tools` with your LLM, tool calls and tool results will be rendered in the UI, with clear labels showing the function call and the response.

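As a concrete illustration of the `messages` key, here is a sketch of the state shape the UI reads. Plain dicts stand in for the `BaseMessage` subclasses; the field names mirror how LangChain serializes messages, and the values are hypothetical:

```python
# Plain-dict stand-ins for HumanMessage, AIMessage (with a tool call),
# and ToolMessage inside a LangGraph state object.
state = {
    "messages": [
        {"type": "human", "content": "What's the weather in New York?"},
        {
            "type": "ai",
            "content": "",
            "tool_calls": [
                {"name": "get_weather", "args": {"city": "new york"}, "id": "call_1"}
            ],
        },
        {
            "type": "tool",
            "content": "The weather in New York is nice today with a high of 75F.",
            "tool_call_id": "call_1",
        },
    ]
}
```

The UI walks this list in order, rendering the human turn, the tool call, and the tool result as separate chat elements.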
## Human-in-the-Loop Details

The Agent Chat UI supports human-in-the-loop interactions using the standard LangGraph interrupt schema. Here's how it works:

1. **Interrupt Schema:** Your LangGraph agent should call the `interrupt` function (from `langgraph.types`) with a specific schema to pause execution and request human input. The schema should include:
   * `type`: `interrupt`.
   * `args`: A dictionary containing information about the interruption. This is where you provide the data the human needs to review (e.g., a draft email, a proposed action).
     * `type`: Can be one of `"response"`, `"accept"`, or `"ignore"`. This indicates the type of human interaction expected.
     * `args`: Further arguments specific to the interrupt type. For instance, if the interrupt type is `response`, the `args` could contain a message to give to the user.
     * `studio`: *Optional.* If included, this carries the fields the UI should display for review (in the email example above: `subject`, `body`, and `to`).
     * `description`: *Optional.* If used, this provides a static prompt to the user that describes the fields the human needs to complete.
   * `name`: *Optional.* A name for the interrupt.
   * `id`: *Optional.* A unique identifier for the interrupt.

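Putting those fields together, a complete interrupt request might look like the following sketch. The `name` and `id` values are hypothetical; the nested fields follow the schema described above:

```python
# Hypothetical interrupt request following the schema above.
interrupt_request = {
    "type": "interrupt",
    "args": {
        "type": "response",  # one of "response", "accept", "ignore"
        "args": {"message": "Please review the draft email before it is sent."},
        "description": "Edit the draft, or submit a response to rewrite it.",
    },
    "name": "review_email",  # optional
    "id": "interrupt-001",   # optional
}
```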
2. **UI Rendering:** When the Agent Chat UI detects an interrupt with this schema, it will automatically render a user-friendly interface for human interaction. This interface allows the user to:
   * **Inspect:** View the data provided in the `args` of the interrupt (e.g., the content of a draft email).
   * **Edit:** Modify the data (if the interrupt schema allows for it).
   * **Respond:** Provide a response (if the interrupt type is `"response"`).
   * **Accept/Reject:** Approve or reject the proposed action (if the interrupt type is `"accept"`).
   * **Ignore:** Ignore the interrupt (if the interrupt type is `"ignore"`).

3. **Resuming Execution:** After the human interacts with the interrupt, the UI sends the response back to the LangGraph server via LangServe, and execution resumes.

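The response sent back on resume mirrors the interrupt's `type`. The dicts below are a hypothetical sketch of what such resume payloads could look like, one per interaction type; the exact wire format is an assumption, not the UI's documented protocol:

```python
# Hypothetical resume payloads, one per interrupt type described above.
resume_for_response = {"type": "response", "args": "Make the tone more formal."}
resume_for_accept = {"type": "accept", "args": None}
resume_for_ignore = {"type": "ignore", "args": None}
```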
## Contributing

Contributions are welcome! Please see the [GitHub repository](https://github.com/langchain-ai/agent-chat-ui) for issues and pull requests.

## License

This organization is an open-source community project; always check the licenses in the individual repositories and report any ambiguities.