Zekun Wu
committed on
Commit 3b1fe5b · 1 Parent(s): 1c72aee
update
app.py CHANGED
@@ -11,81 +11,32 @@ st.markdown(
  """
  ## Welcome to the AI Explainability Demo

- This application demonstrates
-
- ###
- 6. Limitations
- 7. Conclusion and Future Work
-
- ###
- 4. Application of NLE in a Robo-Advising Dialogue System (RADS).
- 5. Limitations of the proposed approach.
- 6. Future directions for research.
-
- ### 1. EU AI Act and Explainability
-
- **Articles Overview**:
- - **Article 13**: Emphasizes transparency, requiring high-risk AI systems to be understandable and interpretable by users.
- - **Article 14**: Stresses human oversight to ensure AI systems are used safely and effectively.
-
- The paper argues that transparency and explainability are crucial for user empowerment and regulatory compliance.
-
- ### 2. Explanatory Pragmatism
-
- This section discusses different philosophical approaches to explanation, emphasizing explanatory pragmatism, which views explanations as communicative acts tailored to individual users' needs. The pragmatic framework consists of:
- - **Communicative View**: Explanations as speech acts aimed at facilitating understanding.
- - **Inferentialist View**: Understanding as context-dependent, involving relevant inferences.
-
- **Design Principles for a Good Explanation**:
- 1. Factually Correct: Accurate and relevant information.
- 2. Useful: Provides actionable insights.
- 3. Context Specific: Tailored to the user's context.
- 4. User Specific: Adapted to the user's knowledge level.
- 5. Provides Pluralism: Allows for multiple perspectives.
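A minimal sketch of how these five principles could be operationalized as an evaluation rubric (the class, field names, and scoring scheme are illustrative assumptions, not taken from the paper or from app.py):

```python
from dataclasses import dataclass, fields

@dataclass
class ExplanationScores:
    """Per-principle ratings on a 1-5 scale (1 = poor, 5 = excellent)."""
    factually_correct: int
    useful: int
    context_specific: int
    user_specific: int
    provides_pluralism: int

def overall_quality(scores: ExplanationScores) -> float:
    """Average the per-principle ratings into one quality score."""
    values = [getattr(scores, f.name) for f in fields(scores)]
    return sum(values) / len(values)

# Example: an explanation that is accurate and useful but too generic.
scores = ExplanationScores(
    factually_correct=5,
    useful=4,
    context_specific=2,
    user_specific=2,
    provides_pluralism=3,
)
print(f"Overall quality: {overall_quality(scores):.1f} / 5")
```

An equal-weight average is only one plausible aggregation; the paper itself does not prescribe a scoring formula.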
-
- ### 3. NLE and Dialogue Systems
-
- NLE transforms complex model workings into human-comprehensible language. Dialogue systems, which facilitate interaction between users and AI, are proposed as effective means for delivering NLE. Key design principles for dialogue systems include:
- 1. Natural language prompts.
- 2. Context understanding.
- 3. Continuity in dialogue.
- 4. Admission of system limitations.
- 5. Confidence levels for explanations.
- 6. Near real-time interaction.
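To make principles like "admission of system limitations" and "confidence levels" concrete, here is a small sketch of what one system turn in such a dialogue might look like (an illustrative structure only, not the paper's or the app's implementation):

```python
from dataclasses import dataclass

@dataclass
class ExplanationTurn:
    """One system turn in an explanation dialogue."""
    answer: str        # the natural-language explanation itself
    confidence: float  # model confidence in [0, 1], surfaced to the user
    limitations: str   # explicit admission of what the system cannot know

    def render(self) -> str:
        """Format the turn for display, exposing confidence and limits."""
        return (
            f"{self.answer}\n"
            f"(Confidence: {self.confidence:.0%}. Note: {self.limitations})"
        )

turn = ExplanationTurn(
    answer="Your portfolio was rebalanced because your stated risk tolerance changed.",
    confidence=0.82,
    limitations="I cannot see transactions made outside this platform.",
)
print(turn.render())
```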
-
- ### 4. Robo-Advising Case Study
-
- Robo-advising, although not explicitly high-risk per the EU AI Act, benefits from explainability for user trust and regulatory adherence. The paper illustrates this through hypothetical dialogues between users and a Robo-Advising Dialogue System (RADS), showcasing the principles in action. Different user profiles (retail consumers, data scientists, and regulators) demonstrate varied needs for explanations, highlighting RADS' adaptability and limitations.
-
- ### 5. Limitations
-
- The paper acknowledges technical and ethical challenges in implementing explainability:
- - Complexity of queries.
- - Coherence and relevance of explanations.
- - Context retention and information accuracy.
- - Risk of overreliance on AI.
-
- ### 6. Conclusion and Future Work
-
- The paper concludes that explainability should extend beyond regulatory compliance to foster ethical AI and user empowerment. It calls for empirical testing of the proposed design principles in real-world applications, particularly focusing on the scalability and practicality of implementing NLE in dialogue systems.
+ This application demonstrates principles of AI explainability in the context of the EU AI Act. It showcases how Natural Language Explanations (NLE) can be used to provide clear, user-specific, and context-specific explanations of AI systems.
+
+ ### Overview of the Paper
+
+ **Abstract**
+
+ This paper explores the implications of the EU AI Act for AI explainability, revealing both challenges and opportunities. It reframes explainability from mere regulatory compliance to a principle that can drive user empowerment and adherence to broader EU regulations. The study focuses on conveying explanations from AI systems to users, proposing design principles for 'good explanations' through dialogue systems using natural language. AI-powered robo-advising is used as a case study to illustrate the potential benefits and limitations of these principles.
+
+ **Key Topics:**
+ - **EU AI Act and Explainability**: Discusses the Act's requirements for transparency and human oversight in AI systems, emphasizing the need for explainability.
+ - **Explanatory Pragmatism**: Introduces a philosophical framework that views explanations as communicative acts tailored to individual users' needs.
+ - **Natural Language Explanations (NLE)**: Proposes using NLE to make AI model workings comprehensible, enhancing user trust and understanding.
+ - **Dialogue Systems**: Explores the use of dialogue systems to deliver explanations interactively, making them more user-friendly and context-specific.
+ - **Robo-Advising Case Study**: Demonstrates the application of NLE principles in a financial services context, highlighting both the benefits and challenges.
+
+ ### Goals of the Demo
+
+ This demo aims to:
+ - Illustrate how NLE can be used to enhance the explainability of AI systems.
+ - Show how different explanation templates can be applied to generate meaningful explanations (see the sketch after this list).
+ - Allow users to evaluate explanations and understand their quality based on defined principles.
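Here is a minimal sketch of what a template-based explanation generator might look like (the template text and function name are illustrative assumptions, not the actual code in app.py):

```python
# Hypothetical template: maps model outputs to a user-facing explanation.
TEMPLATE = (
    "The system predicted '{prediction}' mainly because of {top_feature}. "
    "This explanation is tailored for a {audience} audience."
)

def generate_explanation(prediction: str, top_feature: str, audience: str) -> str:
    """Fill the template with case-specific values to produce an NLE."""
    return TEMPLATE.format(
        prediction=prediction, top_feature=top_feature, audience=audience
    )

print(generate_explanation(
    prediction="rebalance portfolio",
    top_feature="your updated risk tolerance",
    audience="retail consumer",
))
```

The demo's actual templates and evaluation principles are more elaborate; this only shows the general shape of template-driven NLE.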
+
+ ### Instructions
+
+ Use the sidebar to navigate through the different functionalities of the demo, including Single Evaluation, Explanation Generation, and Batch Evaluation.
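Sidebar navigation like this is typically built with a Streamlit radio or selectbox widget. A minimal sketch follows (the page names match the text above; the page bodies are hypothetical placeholders, not the real handlers in app.py):

```python
import streamlit as st

# Assumed page registry; the real app.py may structure this differently.
PAGES = ["Single Evaluation", "Explanation Generation", "Batch Evaluation"]

page = st.sidebar.radio("Navigation", PAGES)

if page == "Single Evaluation":
    st.header("Single Evaluation")
    # ... evaluate one explanation against the quality principles ...
elif page == "Explanation Generation":
    st.header("Explanation Generation")
    # ... generate an explanation from a template ...
else:
    st.header("Batch Evaluation")
    # ... score a batch of explanations at once ...
```

`st.sidebar.radio` and `st.header` are standard Streamlit APIs; the page structure itself is an assumption for illustration.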
  """
)