Error on download (colab & huggingface)
Thanks for the dataset! However, loading the entire thing raises an error from column mismatches. Since this is a new dataset, it's worth asking: is this my problem or a dataset problem? It seems to occur during

```python
load_dataset("felixludos/babel-briefings")
```

```
Generating train split:
335399/0 [00:41<00:00, 11289.31 examples/s]

DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 3 missing columns ({'en-content', 'en-title', 'en-description'})
```
The complete error is below:
```
CastError: Couldn't cast
url: string
title: string
urlToImage: string
ID: int64
source-id: string
language: string
instances: list<item: struct<category: string, collectedAt: string, location: string>>
  child 0, item: struct<category: string, collectedAt: string, location: string>
      child 0, category: string
      child 1, collectedAt: string
      child 2, location: string
author: string
publishedAt: string
description: string
source-name: string
content: string
to
{'url': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'en-title': Value(dtype='string', id=None), 'en-description': Value(dtype='string', id=None), 'urlToImage': Value(dtype='string', id=None), 'ID': Value(dtype='int64', id=None), 'source-id': Value(dtype='string', id=None), 'language': Value(dtype='string', id=None), 'instances': [{'category': Value(dtype='string', id=None), 'collectedAt': Value(dtype='string', id=None), 'location': Value(dtype='string', id=None)}], 'author': Value(dtype='string', id=None), 'publishedAt': Value(dtype='string', id=None), 'en-content': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'source-name': Value(dtype='string', id=None), 'content': Value(dtype='string', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

DatasetGenerationCastError                Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
   1989             writer.write_table(table)
   1990         except CastError as cast_error:
-> 1991             raise DatasetGenerationCastError.from_cast_error(
   1992                 cast_error=cast_error,
   1993                 builder_name=self.info.builder_name,

DatasetGenerationCastError: An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 3 missing columns ({'en-content', 'en-title', 'en-description'})

This happened while the json dataset builder was generating data using

hf://datasets/felixludos/babel-briefings/data/babel-briefings-v1-au.json (at revision c6d9079dc65d7b1a33983d45884d2790b1cc9ce0)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
```
Ah, the problem is that the samples which are already in English don't have English translations (which is what the `en-*` columns are for). I would suggest downloading the JSON files manually and loading them yourself (see the sketch below), but I'll look into a workaround (and worst case, I'll reupload the data to huggingface's liking).
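Here's a rough, untested sketch of the manual route using `snapshot_download`; it assumes each file under `data/` is a single JSON array of articles (if they turn out to be JSON Lines, parse line by line instead):

```python
import json
from pathlib import Path

from huggingface_hub import snapshot_download

# Grab a local copy of the dataset repo from the Hub
local_dir = snapshot_download(repo_id="felixludos/babel-briefings", repo_type="dataset")

# Load every per-region file (e.g. data/babel-briefings-v1-au.json)
articles = []
for path in sorted(Path(local_dir, "data").glob("*.json")):
    with open(path, encoding="utf-8") as f:
        # Assumes each file is a top-level JSON array of article records
        articles.extend(json.load(f))

print(f"Loaded {len(articles)} articles")
```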
Anyway, thanks for pointing this out - I just uploaded it a few days ago, so I haven't tested the huggingface side of things yet and I haven't cleaned up the code at all.
Fixed! I replaced the files to include the missing columns (all with value `null`). Now using `load_dataset` works.
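For example, something like this should now run (quick sketch; articles that were originally in English simply have `None` in the `en-*` fields, since no translation was needed):

```python
from datasets import load_dataset

ds = load_dataset("felixludos/babel-briefings", split="train")

# Articles already in English carry null en-* columns after the fix
english_originals = ds.filter(lambda x: x["en-title"] is None)
print(len(ds), len(english_originals))
```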
Nice, 4719199 rows running smoothly 🫡