Convert dataset sizes from base 2 to base 10 in the dataset card

#3 opened by albertvillanova
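The sizes in the dataset card were computed in binary units (1 MiB = 1,048,576 bytes) but labeled "MB"; this change re-expresses them in decimal megabytes (1 MB = 1,000,000 bytes). A minimal sketch of the arithmetic behind the new figures (the helper name is hypothetical and illustrative, not code from this PR or the `datasets` library):

```python
def mib_to_mb(mib: float) -> float:
    """Convert a size in binary mebibytes (MiB) to decimal megabytes (MB)."""
    n_bytes = mib * 1024 ** 2          # 1 MiB = 1,048,576 bytes
    return round(n_bytes / 10 ** 6, 2) # 1 MB = 1,000,000 bytes

# Reproduce the three values updated in this diff:
for old in (17.76, 73.55, 91.31):
    print(f"{old} MiB -> {mib_to_mb(old)} MB")
# 17.76 MiB -> 18.62 MB
# 73.55 MiB -> 77.12 MB
# 91.31 MiB -> 95.75 MB
```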
Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -104,9 +104,9 @@ dataset_info:
  - **Repository:** [https://github.com/facebookresearch/anli/](https://github.com/facebookresearch/anli/)
  - **Paper:** [Adversarial NLI: A New Benchmark for Natural Language Understanding](https://arxiv.org/abs/1910.14599)
  - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 17.76 MB
- - **Size of the generated dataset:** 73.55 MB
- - **Total amount of disk used:** 91.31 MB
+ - **Size of downloaded dataset files:** 18.62 MB
+ - **Size of the generated dataset:** 77.12 MB
+ - **Total amount of disk used:** 95.75 MB
 
  ### Dataset Summary
 
@@ -129,9 +129,9 @@ English
 
  #### plain_text
 
- - **Size of downloaded dataset files:** 17.76 MB
- - **Size of the generated dataset:** 73.55 MB
- - **Total amount of disk used:** 91.31 MB
+ - **Size of downloaded dataset files:** 18.62 MB
+ - **Size of the generated dataset:** 77.12 MB
+ - **Total amount of disk used:** 95.75 MB
 
  An example of 'train_r2' looks as follows.
  ```