alexkueck committed
Commit 2a188c0
1 Parent(s): ea48328

Update README (1).md

Files changed (1):
  1. README (1).md +24 -37
README (1).md CHANGED
@@ -4,12 +4,12 @@ annotations_creators:
 language_creators:
 - found
 language:
-- en
+- de
 license:
 - cc0-1.0
 multilinguality:
 - monolingual
-pretty_name: OpenWebText
+pretty_name: TIS
 size_categories:
 - 1M<n<10M
 source_datasets:
@@ -20,7 +20,7 @@ task_categories:
 task_ids:
 - language-modeling
 - masked-language-modeling
-paperswithcode_id: openwebtext
+paperswithcode_id: tis
 dataset_info:
 features:
 - name: text
@@ -28,10 +28,10 @@ dataset_info:
 config_name: plain_text
 splits:
 - name: train
-  num_bytes: 39769491688
-  num_examples: 8013769
-  download_size: 12880189440
-  dataset_size: 39769491688
+  num_bytes:
+  num_examples:
+  download_size:
+  dataset_size:
 ---
 
 # Dataset Card for "openwebtext"
@@ -66,9 +66,9 @@ dataset_info:
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:** 13.51 GB
-- **Size of the generated dataset:** 41.70 GB
-- **Total amount of disk used:** 55.21 GB
+- **Size of downloaded dataset files:** GB
+- **Size of the generated dataset:** GB
+- **Total amount of disk used:** GB
 
 ### Dataset Summary
 
@@ -90,17 +90,10 @@ This distribution was created by Aaron Gokaslan and Vanya Cohen of Brown Univers
 
 #### plain_text
 
-- **Size of downloaded dataset files:** 13.51 GB
-- **Size of the generated dataset:** 41.70 GB
-- **Total amount of disk used:** 55.21 GB
+- **Size of downloaded dataset files:** GB
+- **Size of the generated dataset:** GB
+- **Total amount of disk used:** GB
 
-An example of 'train' looks as follows.
-```
-This example was too long and was cropped:
-
-{
-    "text": "\"A magazine supplement with an image of Adolf Hitler and the title 'The Unreadable Book' is pictured in Berlin. No law bans “Mei..."
-}
 ```
 
 ### Data Fields
@@ -114,13 +107,13 @@ The data fields are the same among all splits.
 
 | name | train |
 |------------|--------:|
-| plain_text | 8013769 |
+| plain_text | ... |
 
 ## Dataset Creation
 
 ### Curation Rationale
 
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
 
 ### Source Data
 
@@ -132,7 +125,7 @@ Subsequently, near-duplicate documents were identified using local-sensitivity h
 
 #### Who are the source language producers?
 
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
 
 ### Annotations
 
@@ -140,31 +133,31 @@ The dataset doesn't contain annotations.
 
 ### Personal and Sensitive Information
 
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
 
 ## Considerations for Using the Data
 
 ### Social Impact of Dataset
 
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
 
 ### Discussion of Biases
 
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
 
 ### Other Known Limitations
 
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
 
 ## Additional Information
 
 ### Dataset Curators
 
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
 
 ### Licensing Information
 
-These data are released under this licensing scheme from the original authors ([source](https://skylion007.github.io/OpenWebTextCorpus/)):
+These data are released under this licensing scheme from the original authors (LIF15 LI Hamburg):
 
 ```
 We do not own any of the text from which these data has been extracted.
@@ -191,15 +184,9 @@ Hugging Face will also update this repository accordingly.
 
 ### Citation Information
 
-```
-@misc{Gokaslan2019OpenWeb,
-    title={OpenWebText Corpus},
-    author={Aaron Gokaslan*, Vanya Cohen*, Ellie Pavlick, Stefanie Tellex},
-    howpublished{\url{http://Skylion007.github.io/OpenWebTextCorpus}},
-    year={2019}
-}
+
 ```
 
 ### Contributions
 
-Thanks to [@richarddwang](https://github.com/richarddwang) for adding this dataset.
+
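The changes above all touch the YAML front matter of the dataset card (`language`, `pretty_name`, `paperswithcode_id`, and the `splits` figures). As a rough illustration of how such front matter can be read programmatically, here is a minimal stdlib-only sketch; the simplified parser and the sample card text are assumptions for illustration, not part of this commit, and a real card should be parsed with a full YAML parser such as PyYAML.

```python
# Minimal reader for the YAML front matter at the top of a dataset card.
# Sketch only: handles flat `key: value` pairs and top-level `- item` lists;
# real cards should go through a full YAML parser (e.g. PyYAML).
def read_front_matter(card_text: str) -> dict:
    lines = card_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}                           # no front matter present
    meta, key = {}, None
    for line in lines[1:]:
        if line.strip() == "---":           # closing delimiter
            break
        if line.startswith("- ") and key:   # list item under the current key
            meta.setdefault(key, []).append(line[2:].strip())
        elif ":" in line:
            key, _, value = line.partition(":")
            key = key.strip()
            if value.strip():               # scalar value on the same line
                meta[key] = value.strip()
    return meta

# Hypothetical front matter mirroring the fields touched by this commit.
card = """---
language:
- de
pretty_name: TIS
license:
- cc0-1.0
---
# Dataset Card for "openwebtext"
"""
print(read_front_matter(card))
# → {'language': ['de'], 'pretty_name': 'TIS', 'license': ['cc0-1.0']}
```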