Tasks: Token Classification
Modalities: Text
Formats: parquet
Languages: Thai
Size: 100K - 1M
Tags: word-tokenization
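The card metadata above describes a Thai word-tokenization corpus distributed as Parquet files. A minimal sketch of loading it with the datasets library, assuming the Hub dataset id is best2009 and that a train split exists:

from datasets import load_dataset

# Download the best2009 data files from the Hugging Face Hub and
# build a DatasetDict keyed by split name.
dataset = load_dataset("best2009")

print(dataset)              # available splits and their sizes
print(dataset["train"][0])  # first training example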
Update datasets task tags to align tags with models (#4067)
* update tasks list
* update tags in dataset cards
* more cards updates
* update dataset tags parser
* fix multi-choice-qa
* style
* small improvements in some dataset cards
* allow certain tag fields to be empty
* update vision datasets tags
* use multi-class-image-classification and remove other tags
Commit from https://github.com/huggingface/datasets/commit/edb4411d4e884690b8b328dba4360dbda6b3cbc8
README.md CHANGED
@@ -14,9 +14,9 @@ size_categories:
 source_datasets:
 - original
 task_categories:
-- structure-prediction
+- token-classification
 task_ids:
-- structure-prediction-other-word-tokenization
+- token-classification-other-word-tokenization
 paperswithcode_id: null
 pretty_name: best2009
 ---
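After this change the dataset card carries the same task name that models use, so datasets and models can be found under one tag. A hedged sketch with huggingface_hub, assuming the Hub exposes the task category as the tag string task_categories:token-classification:

from huggingface_hub import HfApi

api = HfApi()
# List datasets tagged with the renamed task category; narrowing the
# search to "best2009" keeps the output short.
for ds in api.list_datasets(filter="task_categories:token-classification", search="best2009"):
    print(ds.id)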