import streamlit as st
st.set_page_config(
    page_title="AI Explainability in the EU AI Act",
    page_icon="👋",
)
st.title('AI Explainability in the EU AI Act: A Case for an NLE Approach Towards Pragmatic Explanations')
st.markdown(
"""
## Welcome to the AI Explainability Demo
This application demonstrates principles of AI explainability in the context of the EU AI Act. It showcases how Natural Language Explanations (NLE) can be used to provide clear, user-specific, and context-specific explanations of AI systems.
### Overview of the Paper
**Abstract**
This paper explores the implications of the EU AI Act for AI explainability, revealing both challenges and opportunities. It reframes explainability from mere regulatory compliance to a principle that can drive user empowerment and adherence to broader EU regulations. The study focuses on conveying explanations from AI systems to users, proposing design principles for 'good explanations' through dialogue systems using natural language. AI-powered robo-advising is used as a case study to illustrate the potential benefits and limitations of these principles.
**Key Topics:**
- **EU AI Act and Explainability**: Discusses the Act’s requirements for transparency and human oversight in AI systems, emphasizing the need for explainability.
- **Explanatory Pragmatism**: Introduces a philosophical framework that views explanations as communicative acts tailored to individual users' needs.
- **Natural Language Explanations (NLE)**: Proposes using NLE to make AI model workings comprehensible, enhancing user trust and understanding.
- **Dialogue Systems**: Explores the use of dialogue systems to deliver explanations interactively, making them more user-friendly and context-specific.
- **Robo-Advising Case Study**: Demonstrates the application of NLE principles in a financial services context, highlighting both the benefits and challenges.
### Goals of the Demo
This demo aims to:
- Illustrate how NLE can be used to enhance the explainability of AI systems.
- Show how different explanation templates can be applied to generate meaningful explanations.
- Allow users to evaluate explanations and understand their quality based on defined principles.
### Instructions
Use the sidebar to navigate through different functionalities of the demo, including Single Evaluation, Explanation Generation, and Batch Evaluation.
"""
)
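# NOTE: The functionality referenced above is assumed to live in a standard Streamlit
# multipage layout, i.e. sibling scripts under a `pages/` directory next to this entry
# script (hypothetical file names):
#   pages/1_Single_Evaluation.py
#   pages/2_Explanation_Generation.py
#   pages/3_Batch_Evaluation.py
# Streamlit discovers these files automatically and lists them in the sidebar navigation.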
st.sidebar.title("Demo Instructions")
st.sidebar.markdown("## Single Evaluation")
st.sidebar.markdown("""
- **Description**: Try the single evaluation by entering a question and an explanation; the results show how well the explanation meets the criteria of a good explanation.
- **How to Use**:
1. Enter your question and explanation in the provided fields.
2. Click "Evaluate" to see the evaluation results.
""")
st.sidebar.markdown("## Explanation Generation")
st.sidebar.markdown("""
- **Description**: Upload a CSV file containing questions and generate natural language explanations for each question using different AI models.
- **How to Use**:
1. Upload a CSV file with a column named `question`.
2. Select an explanation template (e.g., "Default", "Chain of Thought", or "Custom").
3. Adjust the model parameters such as temperature and max tokens using the sliders in the sidebar.
4. Click "Generate Explanations" to get the results.
""")
st.sidebar.markdown("## Batch Evaluation")
st.sidebar.markdown("""
- **Description**: Evaluate explanations in bulk by uploading a CSV file containing questions and explanations. This is useful for assessing the quality of many explanations at once.
- **How to Use**:
1. Upload a CSV file with columns named `question` and `explanation`.
2. Click "Evaluate Batch" to see the evaluation results.
""")