Dataset Viewer issue: TypeError: Couldn't cast array

#5
by albertvillanova (HF staff)

The dataset viewer is not working.

Error details:

Error code:   DatasetGenerationError
Exception:    TypeError
Message:      Couldn't cast array of type
              struct<identifier: int64, comment: string, is_minor_edit: bool, editor: struct<identifier: int64, name: string, is_anonymous: bool, edit_count: int64, groups: list<item: string>, is_patroller: bool, date_started: timestamp[s], is_admin: bool>, number_of_characters: int64, size: struct<value: int64, unit_text: string>, tags: list<item: string>, scores: struct<revertrisk: struct<probability: struct<false: double, true: double>, prediction: bool>>, maintenance_tags: struct<>, noindex: bool>
              to
              {'identifier': Value(dtype='int64', id=None), 'comment': Value(dtype='string', id=None), 'is_minor_edit': Value(dtype='bool', id=None), 'scores': {'revertrisk': {'probability': {'false': Value(dtype='float64', id=None), 'true': Value(dtype='float64', id=None)}, 'prediction': Value(dtype='bool', id=None)}}, 'editor': {'identifier': Value(dtype='int64', id=None), 'name': Value(dtype='string', id=None), 'edit_count': Value(dtype='int64', id=None), 'groups': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'date_started': Value(dtype='timestamp[s]', id=None), 'is_patroller': Value(dtype='bool', id=None), 'is_bot': Value(dtype='bool', id=None), 'is_admin': Value(dtype='bool', id=None), 'is_anonymous': Value(dtype='bool', id=None), 'has_advanced_rights': Value(dtype='bool', id=None)}, 'number_of_characters': Value(dtype='int64', id=None), 'size': {'value': Value(dtype='int64', id=None), 'unit_text': Value(dtype='string', id=None)}, 'noindex': Value(dtype='bool', id=None), 'maintenance_tags': {'pov_count': Value(dtype='int64', id=None), 'update_count': Value(dtype='int64', id=None), 'citation_needed_count': Value(dtype='int64', id=None)}, 'tags': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'is_breaking_news': Value(dtype='bool', id=None)}
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in cast_table_to_schema
                  arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in <listcomp>
                  arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in wrapper
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in <listcomp>
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2122, in cast_array_to_feature
                  raise TypeError(f"Couldn't cast array of type\n{_short_str(array.type)}\nto\n{_short_str(feature)}")
              TypeError: Couldn't cast array of type
              struct<identifier: int64, comment: string, is_minor_edit: bool, editor: struct<identifier: int64, name: string, is_anonymous: bool, edit_count: int64, groups: list<item: string>, is_patroller: bool, date_started: timestamp[s], is_admin: bool>, number_of_characters: int64, size: struct<value: int64, unit_text: string>, tags: list<item: string>, scores: struct<revertrisk: struct<probability: struct<false: double, true: double>, prediction: bool>>, maintenance_tags: struct<>, noindex: bool>
              to
              {'identifier': Value(dtype='int64', id=None), 'comment': Value(dtype='string', id=None), 'is_minor_edit': Value(dtype='bool', id=None), 'scores': {'revertrisk': {'probability': {'false': Value(dtype='float64', id=None), 'true': Value(dtype='float64', id=None)}, 'prediction': Value(dtype='bool', id=None)}}, 'editor': {'identifier': Value(dtype='int64', id=None), 'name': Value(dtype='string', id=None), 'edit_count': Value(dtype='int64', id=None), 'groups': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'date_started': Value(dtype='timestamp[s]', id=None), 'is_patroller': Value(dtype='bool', id=None), 'is_bot': Value(dtype='bool', id=None), 'is_admin': Value(dtype='bool', id=None), 'is_anonymous': Value(dtype='bool', id=None), 'has_advanced_rights': Value(dtype='bool', id=None)}, 'number_of_characters': Value(dtype='int64', id=None), 'size': {'value': Value(dtype='int64', id=None), 'unit_text': Value(dtype='string', id=None)}, 'noindex': Value(dtype='bool', id=None), 'maintenance_tags': {'pov_count': Value(dtype='int64', id=None), 'update_count': Value(dtype='int64', id=None), 'citation_needed_count': Value(dtype='int64', id=None)}, 'tags': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'is_breaking_news': Value(dtype='bool', id=None)}
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1391, in compute_config_parquet_and_info_response
                  parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 990, in stream_convert_to_parquet
                  builder._prepare_split(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1884, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2040, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset

I am investigating it.

Wikimedia org

Thanks @albertvillanova!

We'll also be looking at it; let us know if we need to change anything.

We will need to define the data types explicitly and avoid relying on type inference.
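For illustration, here is a minimal sketch of what explicit typing looks like on the loading side, assuming local JSON Lines files and a made-up subset of fields (a real definition would have to enumerate the dataset's full schema):

# The fields below are an illustrative subset, not the dataset's real schema,
# and the data_files glob is a placeholder path.
from datasets import Features, Sequence, Value, load_dataset

features = Features(
    {
        "identifier": Value("int64"),
        "comment": Value("string"),
        "is_minor_edit": Value("bool"),
        "tags": Sequence(Value("string")),
    }
)

# With features= every file is cast to the same declared schema, so a field
# that is absent from one file can no longer be inferred with a different type.
dataset = load_dataset("json", data_files="revisions/*.jsonl", features=features)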

However, we also need to fix some underlying issues in the datasets library with JSON Lines data files. I opened PRs:

Wikimedia org

Thanks for all the help, @albertvillanova!
It looks like the issues are fixed on your end?
We are taking a closer look at the schema updates on our side and will get back to you.

Hello,

I have the same issue when loading the dataset. I downloaded and unzipped it locally, but loading it fails with the same error:

import datasets

dataset = datasets.load_dataset("/gpfsdsdir/dataset/HuggingFace/wikimedia/structured-wikipedia/20240916.fr")
TypeError: Couldn't cast array of type
struct<content_url: string, width: int64, height: int64, alternative_text: string>
to
{'content_url': Value(dtype='string', id=None), 'width': Value(dtype='int64', id=None), 'height': Value(dtype='int64', id=None)}

The above exception was the direct cause of the following exception:

My version of datasets is 3.0.1.

Wikimedia org

Thanks for reporting, @Aremaki,

You are using the right datasets library version: the fixes required to support missing fields were released in datasets 3.0.1 (https://github.com/huggingface/datasets/releases/tag/3.0.1).

However, the issue here is not a missing field, but a field present in the data that is not defined in the README schema (a minimal reproduction follows the list below):

  • a data item with 4 fields was found:
    • content_url: string
    • width: int64
    • height: int64
    • alternative_text: string
  • however the expected schema contains only 3 fields:
    • content_url: Value(dtype='string', id=None)
    • width: Value(dtype='int64', id=None)
    • height: Value(dtype='int64', id=None)
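
A minimal reproduction of both cases, using the same internal helper that appears in the traceback (datasets.table.cast_array_to_feature is internal API and may change between versions; the values here are made up):

import pyarrow as pa
from datasets import Value
from datasets.table import cast_array_to_feature

# The 3-field schema declared in the README.
feature = {"content_url": Value("string"), "width": Value("int64"), "height": Value("int64")}

# Missing field: supported since datasets 3.0.1, the absent height is filled with nulls.
missing = pa.array([{"content_url": "https://example.org/a.jpg", "width": 640}])
print(cast_array_to_feature(missing, feature))

# Extra field: alternative_text is not declared anywhere, so there is no safe
# cast and this raises the "Couldn't cast array of type ..." TypeError above.
extra = pa.array([{"content_url": "https://example.org/a.jpg", "width": 640,
                   "height": 480, "alternative_text": "alt text"}])
cast_array_to_feature(extra, feature)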

While trying to load the dataset, I discovered several misalignments between the expected schema (either provided in the README or inferred) and the real data; one way to surface these is sketched after the list:

  • sections.has_parts.has_parts.has_parts.has_parts.name
  • sections.has_parts.has_parts.has_parts.has_parts.has_parts.links.images
  • sections.has_parts.has_parts.has_parts.has_parts.has_parts.has_parts
    • sections.has_parts.has_parts.has_parts.has_parts.has_parts.has_parts.links
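
For reference, one way to surface such misalignments is to infer each file's Arrow schema independently and compare them; a minimal sketch, assuming locally unpacked JSON Lines files under an illustrative path:

import glob
import pyarrow.json as paj

# Infer a schema per file; read_json derives it from the file contents.
# (Reading whole files is wasteful for large dumps, but fine for a spot check.)
schemas = {path: paj.read_json(path).schema
           for path in sorted(glob.glob("structured-wikipedia/20240916.fr/*.jsonl"))}

# Any file whose inferred schema differs from the first one points at a
# misalignment like the nested sections.has_parts ones listed above.
reference = next(iter(schemas.values()))
for path, schema in schemas.items():
    if schema != reference:
        print(f"{path} diverges from the reference schema")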

I reported them to the Wikimedia team, and they replied that they are working on updating their schema to align it with the data.

Wikimedia org

Thanks @Aremaki and @albertvillanova!
Happy to report that the schema updates are part of our current sprint; you can follow the ticket here: https://phabricator.wikimedia.org/T375462. We'll let you know when this is completed.
