albertvillanova committed
Commit a42b589
1 Parent(s): e17e21a

Convert dataset sizes from base 2 to base 10 in the dataset card (#4)


- Convert dataset sizes from base 2 to base 10 in the dataset card (887c803fa7817a62af62b6167a2e3f5e4abd763d)

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -132,9 +132,9 @@ dataset_info:
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Paper:** [https://pdfs.semanticscholar.org/b250/3144ed2152830f6c64a9f797ab3c5a34fee5.pdf](https://pdfs.semanticscholar.org/b250/3144ed2152830f6c64a9f797ab3c5a34fee5.pdf)
 - **Point of Contact:** [Darina Benikova](mailto:benikova@aiphes.tu-darmstadt.de)
-- **Size of downloaded dataset files:** 9.81 MB
-- **Size of the generated dataset:** 17.19 MB
-- **Total amount of disk used:** 27.00 MB
+- **Size of downloaded dataset files:** 10.29 MB
+- **Size of the generated dataset:** 18.03 MB
+- **Total amount of disk used:** 28.31 MB
 
 ### Dataset Summary
 
@@ -154,9 +154,9 @@ German
 
 #### germeval_14
 
-- **Size of downloaded dataset files:** 9.81 MB
-- **Size of the generated dataset:** 17.19 MB
-- **Total amount of disk used:** 27.00 MB
+- **Size of downloaded dataset files:** 10.29 MB
+- **Size of the generated dataset:** 18.03 MB
+- **Total amount of disk used:** 28.31 MB
 
 An example of 'train' looks as follows. This example was too long and was cropped:
 
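The new figures are the old values restated in decimal megabytes: the previous numbers were binary-prefix sizes (MiB, base 2), and multiplying by 1,048,576 / 1,000,000 ≈ 1.048576 gives the base-10 values shown in the diff (e.g. 9.81 MiB ≈ 10.29 MB). A minimal sketch of that conversion; the helper name `mib_to_mb` is illustrative and not part of this repository:

```python
def mib_to_mb(size_mib: float) -> float:
    """Convert a size in MiB (base 2, 1024**2 bytes) to MB (base 10, 1000**2 bytes)."""
    return size_mib * (1024 ** 2) / (1000 ** 2)

# The three values updated in this commit:
for old_mib in (9.81, 17.19, 27.00):
    print(f"{old_mib} MiB ≈ {mib_to_mb(old_mib):.2f} MB")
# 9.81 MiB ≈ 10.29 MB, 17.19 MiB ≈ 18.03 MB, 27.00 MiB ≈ 28.31 MB
```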