Commit 2c90032 · Update app.py
Parent(s): 7fc9d23

app.py CHANGED
```diff
@@ -20,6 +20,11 @@ principles = [
 - Demonstrating a commitment to acquiring data from willing, informed, and appropriately compensated sources.
 - Designing systems that respect end-user autonomy, e.g. with privacy-preserving techniques.
 - Avoiding extractive, chauvinist, ["dark"](https://www.deceptive.design), and otherwise "unethical" patterns of engagement.
+
+Featured Spaces:
+
+- [lvwerra/in-the-stack-gr](https://huggingface.co/spaces/lvwerra/in-the-stack-gr)
+- [zama-fhe/encrypted_sentiment_analysis](https://huggingface.co/spaces/zama-fhe/encrypted_sentiment_analysis)
 """
 },
 {
@@ -31,6 +36,10 @@ principles = [
 
 - Tracking emissions from training and running inferences on large language models.
 - Quantization and distillation methods to reduce carbon footprints without sacrificing model quality.
+
+Featured Space:
+
+- [pytorch/MobileNet_v2](https://huggingface.co/spaces/pytorch/MobileNet_v2)
 """
 },
 {
@@ -44,6 +53,10 @@ principles = [
 - Building tools to assist with medical research and practice.
 - Developing models for text-to-speech, image captioning, and other tasks aimed at increasing accessibility.
 - Creating systems for the digital humanities, such as for Indigenous language revitalization.
+
+Featured Space:
+
+- [vict0rsch/climateGAN](https://huggingface.co/spaces/vict0rsch/climateGAN)
 """
 },
 {
@@ -56,6 +69,11 @@ principles = [
 - Curating diverse datasets that increase the representation of underserved groups.
 - Training language models on languages that aren't yet available on the Hugging Face Hub.
 - Creating no-code frameworks that allow non-technical folk to engage with AI.
+
+Featured Space:
+
+- [hackathon-pln-es/Spanish-Nahuatl-Translation](https://huggingface.co/spaces/hackathon-pln-es/Spanish-Nahuatl-Translation)
+- [hackathon-pln-es/readability-assessment-spanish](https://huggingface.co/spaces/hackathon-pln-es/readability-assessment-spanish)
 """
 },
 {
@@ -73,6 +91,11 @@ principles = [
 - Models that are evaluated against cutting-edge benchmarks, with results reported against disaggregated sets.
 - Demonstrations of models failing across ["gender, skin type, ethnicity, age or other attributes"](http://gendershades.org/overview.html).
 - Techniques for mitigating issues like over-fitting and training data memorization.
+
+Featured Spaces:
+
+- [emilylearning/spurious_correlation_evaluation](https://huggingface.co/spaces/emilylearning/spurious_correlation_evaluation)
+- [ml6team/post-processing-summarization](https://huggingface.co/spaces/ml6team/post-processing-summarization)
 """
 },
 {
@@ -87,6 +110,10 @@ principles = [
 - [Reframing AI and machine learning from Indigenous perspectives](https://jods.mitpress.mit.edu/pub/lewis-arista-pechawis-kite/release/1).
 - [Highlighting LGBTQIA2S+ marginalization in AI](https://edri.org/our-work/computers-are-binary-people-are-not-how-ai-systems-undermine-lgbtq-identity/).
 - [Critiquing the harms perpetuated by AI systems](https://www.ajl.org).
+
+Featured Space:
+
+- [society-ethics/Average_diffusion_faces](https://huggingface.co/spaces/society-ethics/Average_diffusion_faces)
 """
 },
 ]
```
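Each hunk in this commit does the same thing: it appends a "Featured Space(s)" bullet list of Hub links to the markdown body stored inside an entry of the `principles` list. A minimal sketch of that pattern (the `add_featured` helper and the `"title"`/`"body"` dict keys are assumptions for illustration; the real `app.py` may structure its entries differently):

```python
# Sketch of the pattern the commit applies: each principle holds a
# markdown body, and featured Spaces are appended as a bullet list.
# The helper name and dict keys are hypothetical, not from app.py.

def add_featured(body: str, spaces: list[str]) -> str:
    """Append a 'Featured Space(s)' bullet list to a markdown body."""
    label = "Featured Spaces:" if len(spaces) > 1 else "Featured Space:"
    lines = [body.rstrip(), "", label, ""]
    lines += [f"- [{s}](https://huggingface.co/spaces/{s})" for s in spaces]
    return "\n".join(lines) + "\n"

principle = {
    "title": "Rigorous",
    "body": "- Techniques for mitigating issues like over-fitting and training data memorization.",
}
principle["body"] = add_featured(
    principle["body"],
    [
        "emilylearning/spurious_correlation_evaluation",
        "ml6team/post-processing-summarization",
    ],
)
print(principle["body"])
```

Keeping the links inside each entry's markdown string, rather than in a separate field, means whatever component renders `principles` (e.g. a markdown widget) picks up the new sections without any other code changes, which matches the diff touching only the string literals.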