Failed to load_dataset('allenai/OLMoE-mix-0924')

#6 · opened by starssz

I got the following error (I also tried re-downloading the dataset, but it gave me the same error):
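The call that triggers it is just the plain default load, roughly (I'm assuming no arguments beyond the dataset name):

```python
from datasets import load_dataset

# Default load of the full mix; this is what produces the traceback below.
ds = load_dataset("allenai/OLMoE-mix-0924")
```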

```
Generating train split: 0 examples [00:00, ? examples/s]
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1870, in _prepare_split_single
    writer.write_table(table)
  File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_writer.py", line 622, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/usr/local/lib/python3.10/dist-packages/datasets/table.py", line 2292, in table_cast
    return cast_table_to_schema(table, schema)
  File "/usr/local/lib/python3.10/dist-packages/datasets/table.py", line 2240, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
added: string
attributes: struct<paloma_paragraphs: list<item: list<item: int64>>>
  child 0, paloma_paragraphs: list<item: list<item: int64>>
    child 0, item: list<item: int64>
      child 0, item: int64
created: string
doc: struct<arxiv_id: string, language: string, timestamp: timestamp[s], url: string, yymm: string>
  child 0, arxiv_id: string
  child 1, language: string
  child 2, timestamp: timestamp[s]
  child 3, url: string
  child 4, yymm: string
id: string
metadata: struct<provenance: string>
  child 0, provenance: string
text: string
to
{'id': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'added': Value(dtype='string', id=None), 'created': Value(dtype='string', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2154, in load_dataset
    builder_instance.download_and_prepare(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 924, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1000, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1741, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1872, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 3 new columns ({'attributes', 'doc', 'metadata'})

This happened while the json dataset builder was generating data using

/datasets/.cache/huggingface/hub/datasets--allenai--OLMoE-mix-0924/snapshots/1e44595eaffc7491dfab23947ea4d5a62b33aff3/data/algebraic-stack/algebraic-stack-train-0000.json.gz

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
```
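One workaround that seems to avoid the cast error is to restrict the load to a single source directory, so that every JSON file being read shares one schema. This is only a sketch: the `data_files` glob below is an assumption based on the `data/algebraic-stack/` path shown in the error message, and I'm assuming files within one directory do have matching columns.

```python
from datasets import load_dataset

# Workaround sketch: load only one source directory of the mix so that all the
# JSON files read by the builder share the same columns. The glob pattern is
# assumed from the algebraic-stack path in the error above; swap in another
# directory of the repo for a different subset.
ds = load_dataset(
    "allenai/OLMoE-mix-0924",
    data_files="data/algebraic-stack/*.json.gz",
    split="train",
)
```

If the full mix is needed, the per-directory loads could then be concatenated after dropping the columns that only some sources carry (keeping just id, text, added, created).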
