datasets.builder.DatasetGenerationError: An error occurred while generating the dataset

#1
by krishnagarg09 - opened

An exception occurred while downloading the data using

data = load_dataset("memray/keyphrase")

Could you please fix this if it has to do with how the dataset was originally uploaded?

Downloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [12:32<00:00, 250.70s/it]
Extracting data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 68.75it/s]
Generating train split: 625154 examples [00:02, 236148.73 examples/s]
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1940, in _prepare_split_single
    writer.write_table(table)
  File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_writer.py", line 572, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/usr/local/lib/python3.10/dist-packages/datasets/table.py", line 2328, in table_cast
    return cast_table_to_schema(table, schema)
  File "/usr/local/lib/python3.10/dist-packages/datasets/table.py", line 2286, in cast_table_to_schema
    raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
ValueError: Couldn't cast
id: string
categories: list<item: string>
  child 0, item: string
date: string
title: string
abstract: string
keyword: string
to
{'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'abstract': Value(dtype='string', id=None), 'keywords': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
because column names don't match

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/data/Keyphrase/bnb-4bit-training.py", line 53, in
    data = load_dataset("memray/keyphrase")
  File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2133, in load_dataset
    builder_instance.download_and_prepare(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 954, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1049, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1813, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1958, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset

krishnagarg09 changed discussion status to closed

I figured out that the problem was in building the Dataset with Hugging Face's load_dataset; downloading the files themselves was not the issue.

@krishnagarg09 I only uploaded the datasets (jsonl files) to Hugging Face; I didn't implement the loading part. The simplest way might be to download each file locally or to clone the whole repo.
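A minimal sketch of that workaround: fetch the repo locally (e.g. with `huggingface_hub.snapshot_download` or `git clone`), then read the jsonl files with the standard library instead of relying on load_dataset's automatic schema inference, normalizing the `keyword` vs `keywords` field mismatch that triggers the cast error above. The `;` keyphrase delimiter is an assumption; check the actual files.

```python
import json

# Fetching the repo is left out here; one option (real API, needs network):
# from huggingface_hub import snapshot_download
# local_dir = snapshot_download(repo_id="memray/keyphrase", repo_type="dataset")

def load_jsonl(path, delimiter=";"):
    """Read one jsonl file into a list of dicts.

    The ';' delimiter for splitting a single "keyword" string into a list
    is an assumption about the file format, not documented by the repo.
    """
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            rec = json.loads(line)
            # Some files carry a single "keyword" string rather than a
            # "keywords" list; normalize so every record shares one schema.
            if "keyword" in rec and "keywords" not in rec:
                rec["keywords"] = rec.pop("keyword").split(delimiter)
            records.append(rec)
    return records
```

With every record normalized this way, the per-file lists can be combined or converted back into a `datasets.Dataset` via `Dataset.from_list` without the column-name cast failing.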

Thanks for getting back to me. I am now able to use it properly.
