papluca committed on
Commit 275fa07
1 Parent(s): a52234f

Add more info to the card

Files changed (1)
  1. README.md +15 -29
README.md CHANGED
@@ -94,59 +94,45 @@ The Language Identification dataset contains text in 20 languages, which are:
 
  ### Data Instances
 
- [More Information Needed]
+ For each instance, there is a string for the text and a string for the label (the language tag). Here is an example:
+
+ `{'labels': 'fr', 'text': 'Conforme à la description, produit pratique.'}`
+
 
  ### Data Fields
 
- [More Information Needed]
+ - **labels:** a string indicating the language label.
+ - **text:** a string consisting of one or more sentences in one of the 20 languages listed above.
 
  ### Data Splits
 
- [More Information Needed]
+ The Language Identification dataset has 3 splits: *train*, *valid*, and *test*.
+ The train set contains 70k samples, while the validation and test sets contain 10k samples each.
+ All splits are perfectly balanced: the train set contains 3500 samples per language, while the validation and test sets contain 500 per language.
 
  ## Dataset Creation
 
  ### Curation Rationale
 
- [More Information Needed]
+ This dataset was built during *The Hugging Face Course Community Event*, which took place in November 2021, with the goal of collecting a dataset with enough samples for each language to train a robust language detection model.
 
  ### Source Data
 
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
+ The Language Identification dataset was created by collecting data from 3 sources: [Multilingual Amazon Reviews Corpus](https://huggingface.co/datasets/amazon_reviews_multi), [XNLI](https://huggingface.co/datasets/xnli), and [STSb Multi MT](https://huggingface.co/datasets/stsb_multi_mt).
 
  ### Personal and Sensitive Information
 
- [More Information Needed]
+ The dataset does not contain any personal information about the authors or the crowdworkers.
 
  ## Considerations for Using the Data
 
  ### Social Impact of Dataset
 
- [More Information Needed]
+ This dataset was developed as a benchmark for evaluating (balanced) multi-class text classification models.
 
  ### Discussion of Biases
 
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
+ The possible biases correspond to those of the 3 datasets on which this dataset is based.
 
  ## Additional Information
 
@@ -164,4 +150,4 @@ The Language Identification dataset contains text in 20 languages, which are:
 
  ### Contributions
 
- Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
+ Thanks to [@LucaPapariello](https://github.com/LucaPapariello) for adding this dataset.
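
The instance format and split sizes described in the added card text can be sanity-checked with the `datasets` library. The following is a minimal sketch; the Hub repo ID `papluca/language-identification` is an assumption inferred from the committer's namespace, so adjust it if the card lives elsewhere.

```python
from collections import Counter

from datasets import load_dataset

# Repo ID is an assumption based on the committer's namespace; adjust if the
# dataset is published under a different name on the Hub.
ds = load_dataset("papluca/language-identification")

# One instance: a 'labels' string (language tag) and a 'text' string,
# e.g. {'labels': 'fr', 'text': 'Conforme à la description, produit pratique.'}
print(ds["train"][0])

# Split sizes reported in the card: 70k train, 10k validation, 10k test.
print({split: ds[split].num_rows for split in ds})

# The train split should be balanced: 3500 samples per language.
print(Counter(ds["train"]["labels"]))
```

Running the same label count on the validation and test splits should show 500 samples per language, matching the "perfectly balanced" claim in the Data Splits section.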