Update README.md
README.md
CHANGED
@@ -150,9 +150,18 @@ configs:
   data_files:
   - split: train
     path: graded/train-*
+task_categories:
+- text-generation
+- feature-extraction
+- fill-mask
+language:
+- en
 ---
 # Dataset Card for "rp_books-en"
 
+
+Filtering/cleaning on the 'red pajama books' subset of `togethercomputer/Long-Data-Collections`
+
 The `default` config:
 ```python
 Dataset({
@@ -166,6 +175,8 @@ Dataset({
 ## token count
 
 
+### default
+
 
 GPT-4 tiktoken token count:
 
@@ -181,4 +192,4 @@ min 3.811000e+03
 max 8.687685e+06
 ```
 
-Total count: 2662.85 M tokens
+Total count: 2662.85 M tokens