{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# User-Interface Demonstration\n", "\n", "This notebook implements a user interface that lets users select and interact with different approaches without modifying the underlying code. The interface provides a dropdown menu for selecting an approach (Long-context, Vanilla RAG, etc.) and a textbox for entering queries. The selected approach and user input are processed, and the results are displayed interactively. Additionally, all user interactions are logged to support user evaluations. This setup aims to streamline experimentation, making it more user-friendly and efficient." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Environment Setup\n", "\n", "* Loading the necessary packages and modules" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "# misc.\n", "import os\n", "from uuid import uuid4\n", "from dotenv import load_dotenv\n", "load_dotenv()\n", "\n", "# logging setup\n", "import logging\n", "logging.basicConfig(filename='user_interactions.log', level=logging.INFO)\n", "\n", "# langchain imports (kept for the other approaches)\n", "from langchain_core.prompts import PromptTemplate, ChatPromptTemplate, MessagesPlaceholder\n", "from langchain_openai import ChatOpenAI\n", "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.runnables import RunnablePassthrough\n", "\n", "# langfuse imports and tracing\n", "from langfuse import Langfuse\n", "# Drop-in replacement for the OpenAI client that forwards traces to Langfuse;\n", "# the plain `openai.OpenAI` client would reject the `trace_id` kwarg used below\n", "from langfuse.openai import OpenAI\n", "\n", "# One trace id for the whole session so the conversation is grouped together\n", "trace_id = str(uuid4())\n", "\n", "LANGFUSE_SECRET_KEY = os.environ['LANGFUSE_SECRET_KEY']\n", "LANGFUSE_PUBLIC_KEY = os.environ['LANGFUSE_PUBLIC_KEY']\n", "LANGFUSE_HOST = 
\"https://us.cloud.langfuse.com\"\n", "\n", "# OpenAI client\n", "client = OpenAI(api_key=os.environ[\"OPENAI_API_KEY\"])\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# System Message\n", "\n", "* Defines the system message in one place so it is easier to modify going forward if needed" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "system_message = '''GOAL: You are an AI Legal Aid specializing in end-of-life planning in Tennessee, working through a Q&A format. You guide users by asking clarification questions, one at a time, after each of their responses, in order to gather necessary information and provide tailored legal advice. Your goal is to improve legal understanding and help users complete the legal documents their situation requires.\n", "\n", "PERSONA: In this scenario, you are an AI Legal Aid specializing in end-of-life planning in Tennessee. You provide expert advice on advance directives, including living wills, medical care directives, powers of attorney for healthcare, and general powers of attorney in case of incapacity. You aim to explain these concepts in simple terms, while ensuring legal accuracy, to help users without legal training understand their options, how these documents work, and the implications of their decisions. You will eventually draft the necessary legal forms based on the information users provide. Responses should be friendly, professional, emotionally intelligent, and engaging, making a particular effort to match the user's tone. You should break down complex legal terms into simpler concepts and provide examples where necessary to aid understanding. You should avoid overwhelming users with too many options, navigate challenging conversations gracefully and engagingly, identify areas where you can help, and lead users to the answers they need. 
You should probe users for what they already know to gauge how you can be helpful, slowing down to ensure clarity and understanding.\n", "\n", "NARRATIVE: The user is introduced to the legal aid, who asks a set of initial questions to understand what the user wants to accomplish and determine which documents they need to fill out. You then guide and support the user to help them reach their goal.\n", "\n", "Follow these steps in order:\n", "\n", "STEP 1: GATHER INFORMATION\n", "You should do this:\n", "1. Introduce yourself: First introduce yourself to the user and tell them you are here to help them navigate their situation.\n", "2. Ask the user the following questions, one at a time, and ALWAYS wait for a response before moving on to the next question. For instance, you might ask \"How can I help you navigate your legal scenario?\" and the user would respond. Only then would you say \"Thank you for explaining. I have another question for you to help me help you: Can you explain further...\". This part of the conversation works best when you and the user take turns asking and answering questions instead of you asking a series of questions all at once. 
That way you can have more of a natural dialogue.\n", "\n", "You should do this:\n", "- Wait for a response from the user after every question before moving on.\n", "- Work to ascertain what the user wants to accomplish specifically.\n", "- Ask one question at a time, and explain that you are asking so that you can tailor your explanation.\n", "- Gauge what the user already knows so that you can adapt your explanations and questions based on their prior knowledge.\n", "- Ask for any necessary clarifications to ensure the user's needs are accurately understood and addressed.\n", "\n", "Do NOT do this:\n", "- Start explaining right away before you gather the necessary information.\n", "- Ask the user more than one question at a time.\n", "\n", "Next step: Once you have all of this information, move on to the next step and begin with a brief explanation.\n", "\n", "STEP 2: BEGIN DOCUMENT COMPLETION\n", "\n", "You should do this:\n", "Think step by step and make a plan based on the user's goal and their specific scenario. Now that you know a little about what the user knows, consider how you will:\n", "- Guide the user in the most efficient way possible based on the information needed in their specific document.\n", "- Help the user generate answers to the necessary questions.\n", "- Remind the user of their goal if necessary.\n", "- Provide explanations and examples when necessary.\n", "- Tailor your responses and questions to the user's goal and prior knowledge, which might change as the conversation progresses.\n", "- If applicable, use the documents uploaded in the \"knowledge\" section to guide your questions.\n", "\n", "Do NOT do this:\n", "- Provide immediate answers or solutions to problems. 
\n", "- Lose track of the user's goal and discuss other things that are off topic.\n", "\n", "Next step: Once you have all of the necessary information for the document, move to wrap-up.\n", "\n", "STEP 3: WRAP UP\n", "You should do this:\n", "1. Once you have all of the information needed, generate a PDF document, in the appropriate format, that the user can take to the courthouse for processing.\n", "'''" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Functions for each approach\n", "\n", "* These cells contain the code that runs the approach the user selects. Additionally, a global variable (`llm_chat_history_lc`) is initialized so that chats are logged and conversation history is maintained." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> Example function for the long-context model" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "\n", "def get_assistant_response_with_history(user_message, llm_chat_history_lc, model_name=\"gpt-3.5-turbo\"):\n", "    # Convert the tuple-based chat history to the OpenAI messages format\n", "    messages = [{'role': 'system', 'content': system_message}]\n", "\n", "    for user_msg, assistant_msg in llm_chat_history_lc:\n", "        messages.append({'role': 'user', 'content': user_msg})\n", "        messages.append({'role': 'assistant', 'content': assistant_msg})\n", "\n", "    # Add the new user message\n", "    messages.append({'role': 'user', 'content': user_message})\n", "\n", "    # Compute a completion (response) from the LLM\n", "    completion = client.chat.completions.create(\n", "        model=model_name,\n", "        messages=messages,\n", "        trace_id=trace_id  # Langfuse-specific kwarg (requires the langfuse.openai client): groups the whole conversation under one trace\n", "    )\n", "\n", "    # Get the assistant's response\n", "    assistant_response = completion.choices[0].message.content\n", "\n", "    # Update chat history with a tuple (user_message, assistant_response)\n", "    llm_chat_history_lc.append((user_message, assistant_response))\n", "\n", "    # Return the response and updated chat history\n", "    return assistant_response, llm_chat_history_lc\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Approach Functions\n", "\n", "* Here, each approach is defined as its own function (`approach_1`, `approach_2`, etc.). This makes it possible to define a dispatcher that lets the user select which approach to use." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "\n", "# Global conversation history for the long-context approach\n", "llm_chat_history_lc = []\n", "\n", "# Long-context approach function, defined as 'approach_1' with one parameter 'query'\n", "def approach_1(query):\n", "    global llm_chat_history_lc  # uses llm_chat_history_lc to maintain conversation history across calls\n", "    response, llm_chat_history_lc = get_assistant_response_with_history(query, llm_chat_history_lc)  # passes the user's query and chat history\n", "    log_interaction(\"Long-Context Model\", query, response)  # logs the approach used, the query, and the LLM's response\n", "    return response  # returns the model's response\n", "\n", "\n", "# Logging function to record interactions\n", "def log_interaction(approach, query, response):\n", "    log_entry = f\"Approach: {approach}, Query: {query}, Response: {response}\"\n", "    logging.info(log_entry)\n", "\n", "# Dispatches the query to the approach chosen by the user\n", "def choose_approach(approach, query):\n", "    if approach == \"Long-Context Model\":\n", "        return approach_1(query)\n", "    else:\n", "        return \"Invalid approach selected.\"\n", "\n", "# List of the available approaches\n", "approaches = [\"Long-Context Model\"]\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Running the Interface\n", "\n", "* Run the following cell to interact with the interface. 
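\n", "\n", "A minimal sketch of the submit wiring (assuming the tuple-style `gr.Chatbot` and the `choose_approach` dispatcher defined above; the name `handle_submit` is illustrative):\n", "\n", "```python\n", "def handle_submit(approach, user_message, chatbot_history):\n", "    # Route the query through the selected approach, then append the\n", "    # (user, assistant) turn to the history shown in the Chatbot widget.\n", "    assistant_response = choose_approach(approach, user_message)\n", "    return \"\", chatbot_history + [(user_message, assistant_response)]\n", "```\n", "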
\n", "* I am using Gradio Blocks because it allows for more flexibility and customization than `gr.Interface`." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Running on local URL: http://127.0.0.1:7861\n", "\n", "To create a public link, set `share=True` in `launch()`.\n" ] }, { "data": { "text/html": [ "
" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/plain": [] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "\n", "# Safety: ensure the model-side history exists even if the cell above was skipped\n", "if 'llm_chat_history_lc' not in globals():\n", "    llm_chat_history_lc = []\n", "\n", "# Called when the user submits a message\n", "def respond(approach, user_message, chatbot_history):\n", "    # Route the query through the selected approach (this also logs the interaction)\n", "    assistant_response = choose_approach(approach, user_message)\n", "    # Clear the textbox and show the updated conversation\n", "    return \"\", chatbot_history + [(user_message, assistant_response)]\n", "\n", "# Called when the user clicks \"Clear Chat\"\n", "def clear_chat():\n", "    global llm_chat_history_lc\n", "    llm_chat_history_lc = []  # reset the model-side history as well as the display\n", "    return [], \"\"\n", "\n", "# Create the Gradio interface\n", "with gr.Blocks() as demo:\n", "\n", "    gr.Markdown(\"# Legal Empowerment Interface\")  # Interface title\n", "    gr.Markdown(\"### Select a model and enter your query below:\")  # Interface subtitle\n", "\n", "    with gr.Row():\n", "        with gr.Column(scale=1):\n", "            approach_dropdown = gr.Dropdown(choices=approaches, value=approaches[0], label=\"Select Approach\")  # Dropdown for selecting an approach\n", "\n", "    chatbot_history = gr.Chatbot()  # Displays the chat history\n", "    msg_textbox = gr.Textbox(placeholder=\"Type a message...\")  # Where the user types their message\n", "    reset_button = gr.Button(\"Clear Chat\")  # Button to clear the chat history\n", "\n", "    # Submitting a message routes it through the selected approach\n", "    msg_textbox.submit(respond, inputs=[approach_dropdown, msg_textbox, chatbot_history], outputs=[msg_textbox, chatbot_history])\n", "\n", "    # Clicking the reset button clears both histories and the textbox\n", "    reset_button.click(clear_chat, outputs=[chatbot_history, msg_textbox])\n", "\n", "    gr.Markdown(\"### Thank you for using our Legal Empowerment Interface!\")  # Closing message\n", "\n", "# Launch the interface\n", "demo.launch()\n" ] } ], "metadata": { "kernelspec": { "display_name": "legal-empowerment", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": 
"python", "pygments_lexer": "ipython3", "version": "3.11.4" } }, "nbformat": 4, "nbformat_minor": 2 }