Tasks: Token Classification
Modalities: Text
Formats: parquet
Languages: Thai
Size: 100K - 1M
Tags: word-tokenization
License:
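Because the data is stored as Parquet on the Hub, it can be loaded directly with the `datasets` library and no loading script. A minimal sketch, assuming a placeholder repo id (use the dataset name shown on this page) and a "train" split:

```python
from datasets import load_dataset

# "user/thai-word-tokenization" is a placeholder repo id; substitute the
# actual dataset name shown on this page.
ds = load_dataset("user/thai-word-tokenization")

# Inspect splits, features, and one example row for the
# token-classification (word-tokenization) task, assuming a "train" split.
print(ds)
print(ds["train"].features)
print(ds["train"][0])
```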
Commit History
Delete loading script (ac1fc9b)
Convert dataset to Parquet (1272c10)
Replace YAML keys from int to str (#2) (cb854e4)
Reorder split names (#1) (e25239f)
add dataset_info in dataset metadata (b3962be)
remove dummmy data (b8aaeca)
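The commit hashes above can also be used to pin a download to a known state of the repository for reproducibility. A minimal sketch using the `revision` argument of `load_dataset`; the repo id is a placeholder, and the shortened hash refers to the "Convert dataset to Parquet" commit listed above (pass the full SHA if the short form does not resolve):

```python
from datasets import load_dataset

# Pin to the "Convert dataset to Parquet" commit from the history above.
# "user/thai-word-tokenization" is a placeholder repo id.
ds = load_dataset("user/thai-word-tokenization", revision="1272c10")
print(ds)
```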