import streamlit as st
title = "Key Takeaways"
description = "Review of the information from previous pages."
date = "2022-01-26"
thumbnail = "images/raised_hand.png"
__KEY_TAKEAWAYS = """
# Key Takeaways and Review
Here are some of the main ideas we have conveyed in this exploration:
- Defining hate speech is hard and changes depending on your context and goals.
- Capturing a snapshot of what you've defined to be hate speech in a dataset is hard.
- Models learn lots of different things based on the data they see, and that can include things you didn't intend for them to learn.
Next, please answer the following questions about the information presented in this demo:
"""


def run_article():
    st.markdown(__KEY_TAKEAWAYS)
    st.text_area(
        "Did you click on any of the links provided in the **Hate Speech in ACM** page? If so, which one did you find most surprising?"
    )
    st.text_area(
        "Of the datasets presented in the **Dataset Exploration** page, which one did you think best represented content that should be moderated? Which worst?"
    )
    st.text_area(
        "Of the models presented in the **Model Exploration** page, which one did you think performed best? Which worst?"
    )
    st.text_area(
        "Any additional comments about the materials?"
    )
    # The remaining questions are taken from the paper.
    st.text_area(
        "How would you describe your role? E.g. model developer, dataset developer, domain expert, policy maker, platform manager, community advocate, platform user, student"
    )
    st.text_area(
        "Why are you interested in content moderation?"
    )
    st.text_area(
        "Which modules did you use the most?"
    )
    st.text_area(
        "Which module did you find the most informative?"
    )
    st.text_area(
        "Which application were you most interested in learning more about?"
    )
    st.text_area(
        "What surprised you most about the datasets?"
    )
    st.text_area(
        "Which models are you most concerned about as a user?"
    )
    st.text_area(
        "Do you have any comments or suggestions?"
    )
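

# Minimal usage sketch (assumption: in the full Space, a top-level Streamlit app
# imports this module and calls run_article(); the guard below simply lets the
# page be previewed on its own with `streamlit run conclusion.py`).
if __name__ == "__main__":
    run_article()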