jdgar commited on
Commit
b37388d
1 Parent(s): 25fdc1b

Uploading the UI interface for user interaction

Browse files
Files changed (1)
  1. 70-openai-ui.ipynb +322 -0
70-openai-ui.ipynb ADDED
@@ -0,0 +1,322 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "# User-Interface-Demonstration\n",
8
+ "\n",
9
+ "This notebook implements a user interface that allows users to select and interact with different approaches without needing to modify the underlying code. The interface provides a dropdown menu for users to select an approach (Long-context, Vanilla RAG, etc.) and a textbox for entering their queries. The selected approach and user input are processed, and the results are displayed interactively. Additionally, all user interactions are logged to facilitate user evaluations. This setup aims to streamline the experimentation process, making it more user-friendly and efficient."
10
+ ]
11
+ },
12
+ {
13
+ "cell_type": "markdown",
14
+ "metadata": {},
15
+ "source": [
16
+ "# Environment Setup\n",
17
+ "\n",
18
+ "* Loading the necessary packages and modules"
19
+ ]
20
+ },
21
+ {
22
+ "cell_type": "code",
23
+ "execution_count": 6,
24
+ "metadata": {},
25
+ "outputs": [],
26
+ "source": [
27
+ "# misc.\n",
28
+ "import gradio as gr\n",
29
+ "from dotenv import load_dotenv\n",
30
+ "from openai import OpenAI\n",
31
+ "from uuid import uuid4\n",
32
+ "load_dotenv()\n",
33
+ "\n",
34
+ "# logging import\n",
35
+ "import logging \n",
36
+ "logging.basicConfig(filename='user_interactions.log', level=logging.INFO)\n",
37
+ "\n",
38
+ "# langchain import\n",
39
+ "from langchain_core.prompts import PromptTemplate, ChatPromptTemplate, MessagesPlaceholder\n",
40
+ "from langchain_openai import ChatOpenAI\n",
41
+ "from langchain_core.output_parsers import StrOutputParser\n",
42
+ "from langchain_core.runnables import RunnablePassthrough\n",
43
+ "import os\n",
44
+ "import requests\n",
45
+ "from getpass import getpass\n",
46
+ "\n",
47
+ "# langfuse imports and tracing\n",
48
+ "from langfuse import Langfuse\n",
49
+ "from langfuse.decorators import observe\n",
50
+ "from langfuse.openai import openai\n",
51
+ "\n",
52
+ "import langfuse\n",
53
+ "from langfuse import Langfuse\n",
54
+ "trace_id = str(uuid4())\n",
55
+ "\n",
56
+ "LANGFUSE_SECRET_KEY = os.environ['LANGFUSE_SECRET_KEY']\n",
57
+ "LANGFUSE_PUBLIC_KEY = os.environ['LANGFUSE_PUBLIC_KEY']\n",
58
+ "LANGFUSE_HOST = \"https://us.cloud.langfuse.com\"\n",
59
+ "\n",
60
+ "# OpenAI API Keys \n",
61
+ "client = OpenAI(api_key=os.environ[\"OPENAI_API_KEY\"]) \n"
62
+ ]
63
+ },
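+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "> The setup above assumes that OPENAI_API_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_PUBLIC_KEY are available (e.g., from the .env file loaded by load_dotenv). The cell below is a small optional sketch that fails early with a clear message if any of them is missing."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "\n",
+ "# Fail early if a required key is missing from the environment / .env file.\n",
+ "required_keys = ['OPENAI_API_KEY', 'LANGFUSE_SECRET_KEY', 'LANGFUSE_PUBLIC_KEY']\n",
+ "missing = [key for key in required_keys if not os.getenv(key)]\n",
+ "if missing:\n",
+ "    raise EnvironmentError(f\"Missing environment variables: {missing}\")\n"
+ ]
+ },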
64
+ {
65
+ "cell_type": "markdown",
66
+ "metadata": {},
67
+ "source": [
68
+ "# System Message\n",
69
+ "\n",
70
+ "* Defines our system message so it is easier to manipulate going forward if needed"
71
+ ]
72
+ },
73
+ {
74
+ "cell_type": "code",
75
+ "execution_count": 7,
76
+ "metadata": {},
77
+ "outputs": [],
78
+ "source": [
79
+ "system_message = f'''''GOAL: You are a AI Legal Aid in which you play the role of specializing in end-of-life planning in Tennessee through a Q&A format. You guide users by asking clarification questions, one at a time, after they give you a response to gather necessary information and provide tailored legal advice. Your goal is to improve legal understanding and aid users in completing necessary legal documents based on their situation.\n",
80
+ "\n",
81
+ "PERSONA: In this scenario, you are an AI Legal Aid in which you play the role of specializing in end-of-life planning in Tennessee. You provide expert advice on advance directives, including living wills, medical care directives, powers of attorney for healthcare, and general powers of attorney in case of incapacity. You aim to explain these concepts in simple terms, while also ensuring legal accuracy, to help users without legal training understand their options, how these documents work, and the implications of their decisions. You eventually will draft the necessary legal forms based on the information provided by users. Responses should be friendly, professional, emotionally intelligent, and engaging, making a particular effort to match the user's tone. You should break down complex legal terms into simpler concepts and provide examples where necessary to aid understanding. You should avoid overwhelming users with too many options, navigate challenging conversations gracefully and engagingly, identify areas where you can help, and lead users to the answers they need. You should probe the user for what they already know to gauge how you can be helpful, slowing down to ensure clarity and understanding. \n",
82
+ "\n",
83
+ "NARRATIVE: The user is introduced to the legal aid, who asks a set of initial questions to understand what the user wants to accomplish and determine what documents they need to fill out. You then guide and support the user to help them with their goal. \n",
84
+ "\n",
85
+ "Follow these steps in order:\n",
86
+ "\n",
87
+ "STEP 1: GATHER INFORMATION\n",
88
+ "You should do this:\n",
89
+ "1. Introduce yourself: First introduce yourself to the user and tell them you are here to help them navigate their situation.\n",
90
+ "2. Ask the user the following questions. Ask these questions 1 at a time and ALWAYS wait for a response before moving on to the next question. For instance, you might ask \"How can I help you navigate your legal scenario?\" and the user would respond. And only then would you say \"Thank you for explaining. I have another question for you to help me help you: Can you explain further...\". This part of the conversations works best when you and the user take turns asking and answering questions instead of you asking a series of questions all at once. That way you can have more of a natural dialogue.\n",
91
+ "\n",
92
+ "You should do this:\n",
93
+ "- Wait for a response from the user after every question before moving on.\n",
94
+ "- Work to ascertain what the user wants to accomplish specifically.\n",
95
+ "- Ask one question at a time and explain that you are asking so that you can tailor your explanation\n",
96
+ "- Gauge what the user already knows so that you can adapt your explanations and questions moving forward based on their prior knowledge.\n",
97
+ "- You should ask for any necessary clarifications to ensure the user's needs are accurately understood and addressed.\n",
98
+ "\n",
99
+ "Do NOT do this:\n",
100
+ "- Start explaining right away before you gather the necessary information\n",
101
+ "- Ask the user more than 1 question at a time.\n",
102
+ "\n",
103
+ "Next step: Once you have all of this information, you can move on to the next step and begin with a brief explanation\n",
104
+ "\n",
105
+ "STEP 2: BEGIN DOCUMENT COMPLETION\n",
106
+ "\n",
107
+ "You should do this:\n",
108
+ "Think step by step and make a plan based on the goal of the user and based on their specific scenario. Now that you know a little bit about what the user knows, consider how you will:\n",
109
+ "- Guide the user in the most efficient way possible based on the information that is needed in their specific document.\n",
110
+ "- Help the user generate answers to the necessary questions.\n",
111
+ "- Remind the user of their goal if necessary.\n",
112
+ "- Provide explanations and examples when necessary.\n",
113
+ "- Tailor your responses and questions to the user's goal and prior knowledge, which might change as the conversation progresses. \n",
114
+ "- If applicable, use the documents uploaded in the \"knowledge\" section to guide your questions.\n",
115
+ "\n",
116
+ "Do NOT do this:\n",
117
+ "- Provide immediate answers or solutions to problems. \n",
118
+ "- Lose track of the user's goal and discuss other things that are off topic.\n",
119
+ "\n",
120
+ "Next step: Once you have all of the necessary information for the document, move to wrap up\n",
121
+ "\n",
122
+ "STEP 3: WRAP UP\n",
123
+ "You should do this:\n",
124
+ "1. Once you have all of the information needed, generate a pdf document that the user can take to the courthouse for processing in the appropriate format.\n",
125
+ "'''''"
126
+ ]
127
+ },
128
+ {
129
+ "cell_type": "markdown",
130
+ "metadata": {},
131
+ "source": [
132
+ "# Functions for each approach\n",
133
+ "\n",
134
+ "* These cells contain the code that runs the approach that the user selects. Additionally, a global variable (llm_chat_history_lc) is initialized that allows for the chats to be logged for maintaining conversation history."
135
+ ]
136
+ },
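+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "> As noted above, a global llm_chat_history_lc variable holds the conversation history. The next cell is a minimal initialization: an empty list of (user_message, assistant_response) tuples, which is the format the functions below assume."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Global chat history for the approach functions: a list of (user_message, assistant_response) tuples.\n",
+ "llm_chat_history_lc = []\n"
+ ]
+ },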
137
+ {
138
+ "cell_type": "markdown",
139
+ "metadata": {},
140
+ "source": [
141
+ "> Example function for the long-context model"
142
+ ]
143
+ },
144
+ {
145
+ "cell_type": "code",
146
+ "execution_count": 8,
147
+ "metadata": {},
148
+ "outputs": [],
149
+ "source": [
150
+ "\n",
151
+ "def get_assistant_response_with_history(user_message, llm_chat_history_lc, model_name=\"gpt-3.5-turbo\"):\n",
152
+ " # Convert the tuple-based chat history to the appropriate format\n",
153
+ " messages = [{'role': 'system', 'content': system_message}]\n",
154
+ " \n",
155
+ " for user_msg, assistant_msg in llm_chat_history_lc:\n",
156
+ " messages.append({'role': 'user', 'content': user_msg})\n",
157
+ " messages.append({'role': 'assistant', 'content': assistant_msg})\n",
158
+ " \n",
159
+ " # Add the new user message\n",
160
+ " messages.append({'role': 'user', 'content': user_message})\n",
161
+ "\n",
162
+ " # Compute a completion (response) from the LLM\n",
163
+ " completion = client.chat.completions.create( \n",
164
+ " model=model_name,\n",
165
+ " messages=messages,\n",
166
+ " trace_id = trace_id # assigns a specific trace id for the entire conversation so the whole conversation is grouped together\n",
167
+ " )\n",
168
+ " \n",
169
+ " # Get the assistant's response\n",
170
+ " assistant_response = completion.choices[0].message.content\n",
171
+ " \n",
172
+ " # Update chat history with a tuple (user_message, assistant_response)\n",
173
+ " llm_chat_history_lc.append((user_message, assistant_response))\n",
174
+ " \n",
175
+ " # Return the response and updated chat history\n",
176
+ " return assistant_response, llm_chat_history_lc\n"
177
+ ]
178
+ },
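+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "> Optional: a quick smoke test of the helper outside the UI. This is only a sketch; it makes one real API call, so run it only if the keys above are configured."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Call the helper once with an empty history to confirm the client and system message work.\n",
+ "test_history = []\n",
+ "test_response, test_history = get_assistant_response_with_history(\"Hello, can you explain what you can help me with?\", test_history)\n",
+ "print(test_response)\n"
+ ]
+ },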
179
+ {
180
+ "cell_type": "markdown",
181
+ "metadata": {},
182
+ "source": [
183
+ "# Approach Functions\n",
184
+ "\n",
185
+ "* Here, I defined each approach as their own 'approach' functions (approach_1, approach_2, etc.). In doing so, I was then able to define a function that allows the user to select which approach they would like to use."
186
+ ]
187
+ },
188
+ {
189
+ "cell_type": "code",
190
+ "execution_count": 9,
191
+ "metadata": {},
192
+ "outputs": [],
193
+ "source": [
194
+ "\n",
195
+ "\n",
196
+ "# Long-context approach function defined as 'approach_1' with one parameter 'query'\n",
197
+ "def approach_1(query):\n",
198
+ " global llm_chat_history_lc # function will use llm_chat_history_lc to maintain conversation history\n",
199
+ " response, llm_chat_history_lc = get_assistant_response_with_history(query, llm_chat_history_lc) # calls the long context function and passes the user's query and chat history as arguments\n",
200
+ " log_interaction(\"Long-Context Model\", query, response) # logs the details of the interaction (the approach used, the query, and the llm's response)\n",
201
+ " return response # returns the model's response\n",
202
+ "\n",
203
+ "\n",
204
+ "# Logging function to log interactions and maintain conversation history\n",
205
+ "def log_interaction(approach, query, response):\n",
206
+ " log_entry = f\"Approach: {approach}, Query: {query}, Response: {response}\"\n",
207
+ " logging.info(log_entry)\n",
208
+ "\n",
209
+ "# Function that allows the user to choose an approach to get a response\n",
210
+ "def choose_approach(approach, query):\n",
211
+ " if approach == \"Long-Context Model\":\n",
212
+ " return approach_1(query)\n",
213
+ " else:\n",
214
+ " return \"Invalid approach selected.\"\n",
215
+ "\n",
216
+ "# Defines a list of the available approaches\n",
217
+ "approaches = [\"Long-Context Model\"]\n",
218
+ "\n"
219
+ ]
220
+ },
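+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "> Additional approaches (for example, the Vanilla RAG approach mentioned in the introduction) can be added the same way. The cell below is only an illustrative sketch with a hypothetical approach_2 placeholder; the real retrieval logic would replace the placeholder body, and choose_approach plus the approaches list would be extended to include it."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Hypothetical second approach, shown only to illustrate how new approaches plug into the dropdown.\n",
+ "def approach_2(query):\n",
+ "    # Placeholder: a Vanilla RAG approach would retrieve relevant documents here before calling the model.\n",
+ "    response = \"Vanilla RAG approach is not implemented yet.\"\n",
+ "    log_interaction(\"Vanilla RAG\", query, response)\n",
+ "    return response\n",
+ "\n",
+ "# choose_approach would gain a branch like:\n",
+ "#     elif approach == \"Vanilla RAG\":\n",
+ "#         return approach_2(query)\n",
+ "# and the dropdown list would become: approaches = [\"Long-Context Model\", \"Vanilla RAG\"]\n"
+ ]
+ },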
221
+ {
222
+ "cell_type": "markdown",
223
+ "metadata": {},
224
+ "source": [
225
+ "# Running the Interface\n",
226
+ "\n",
227
+ "* Run the following cell to interact with the interface. \n",
228
+ "* I am using Gradio Blocks because it allows for more flexibility and customization than gradio interface. "
229
+ ]
230
+ },
231
+ {
232
+ "cell_type": "code",
233
+ "execution_count": 10,
234
+ "metadata": {},
235
+ "outputs": [
236
+ {
237
+ "name": "stdout",
238
+ "output_type": "stream",
239
+ "text": [
240
+ "Running on local URL: http://127.0.0.1:7861\n",
241
+ "\n",
242
+ "To create a public link, set `share=True` in `launch()`.\n"
243
+ ]
244
+ },
245
+ {
246
+ "data": {
247
+ "text/html": [
248
+ "<div><iframe src=\"http://127.0.0.1:7861/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
249
+ ],
250
+ "text/plain": [
251
+ "<IPython.core.display.HTML object>"
252
+ ]
253
+ },
254
+ "metadata": {},
255
+ "output_type": "display_data"
256
+ },
257
+ {
258
+ "data": {
259
+ "text/plain": []
260
+ },
261
+ "execution_count": 10,
262
+ "metadata": {},
263
+ "output_type": "execute_result"
264
+ }
265
+ ],
266
+ "source": [
267
+ "\n",
268
+ "# Define the function that will be called when the user submits messages\n",
269
+ "def respond(user_message, chatbot_history):\n",
270
+ " # Get the response from the assistant\n",
271
+ " assistant_response, updated_history = get_assistant_response_with_history(user_message, chatbot_history)\n",
272
+ " return \"\", updated_history\n",
273
+ "\n",
274
+ "# Create the Gradio interface\n",
275
+ "with gr.Blocks() as demo:\n",
276
+ " \n",
277
+ " gr.Markdown(\"# Legal Empowerment Interface\") # Interface Title\n",
278
+ " gr.Markdown(\"### Select a model and enter your query below:\") # Interface subtitle\n",
279
+ "\n",
280
+ " with gr.Row():\n",
281
+ " with gr.Column(scale=1):\n",
282
+ " approach_dropdown = gr.Dropdown(choices=approaches, label=\"Select Approach\") # Creates the dropdown for selecting an approach\n",
283
+ "\n",
284
+ " chatbot_history = gr.Chatbot() # This will store the chat history\n",
285
+ " msg_textbox = gr.Textbox(placeholder=\"Type a message...\") # This is where the user types their message\n",
286
+ " reset_button = gr.Button(\"Clear Chat\") # Button to clear the chat history\n",
287
+ "\n",
288
+ " # Define what happens when the user submits a message\n",
289
+ " msg_textbox.submit(respond, inputs=[msg_textbox, chatbot_history], outputs=[msg_textbox, chatbot_history])\n",
290
+ " \n",
291
+ " # Define what happens when the reset button is clicked\n",
292
+ " reset_button.click(lambda: ([], \"\"), outputs=[chatbot_history, msg_textbox])\n",
293
+ "\n",
294
+ " gr.Markdown(\"### Thank you for using our Legal Empowerment Interface!\") # Closing message\n",
295
+ "\n",
296
+ "# Launch the interface\n",
297
+ "demo.launch()\n"
298
+ ]
299
+ }
300
+ ],
301
+ "metadata": {
302
+ "kernelspec": {
303
+ "display_name": "legal-empowerment",
304
+ "language": "python",
305
+ "name": "python3"
306
+ },
307
+ "language_info": {
308
+ "codemirror_mode": {
309
+ "name": "ipython",
310
+ "version": 3
311
+ },
312
+ "file_extension": ".py",
313
+ "mimetype": "text/x-python",
314
+ "name": "python",
315
+ "nbconvert_exporter": "python",
316
+ "pygments_lexer": "ipython3",
317
+ "version": "3.11.4"
318
+ }
319
+ },
320
+ "nbformat": 4,
321
+ "nbformat_minor": 2
322
+ }