Error when loading the dataset directly using datasets.load_dataset()
When I try to load the data directly from the cloned repo, I quickly run into the following error during the "Generating train split" step.
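The call is roughly this (a sketch; the local path is illustrative):

```python
from datasets import load_dataset

# Hypothetical local path to the cloned dclm-baseline repo
ds = load_dataset("./dclm-baseline-1.0", split="train")
```

which fails with: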
TypeError: Couldn't cast array of type
struct<Content-Length: string, Content-Type: string, WARC-Block-Digest: string, WARC-Concurrent-To: string, WARC-Date: timestamp[s],
WARC-IP-Address: string, WARC-Identified-Payload-Type: string, WARC-Payload-Digest: string, WARC-Record-ID: string, WARC-Target-URI: string,
WARC-Type: string, WARC-Warcinfo-ID: string, WARC-Truncated: string>
to
{'Content-Length': Value(dtype='string', id=None), 'Content-Type': Value(dtype='string', id=None), 'WARC-Block-Digest': Value(dtype='string', id=None),
'WARC-Concurrent-To': Value(dtype='string', id=None), 'WARC-Date': Value(dtype='timestamp[s]', id=None), 'WARC-IP-Address': Value(dtype='string', id=None),
'WARC-Payload-Digest': Value(dtype='string', id=None), 'WARC-Record-ID': Value(dtype='string', id=None), 'WARC-Target-URI': Value(dtype='string', id=None),
'WARC-Truncated': Value(dtype='string', id=None), 'WARC-Type': Value(dtype='string', id=None), 'WARC-Warcinfo-ID': Value(dtype='string', id=None)}
This message is a bit confusing, as the field types themselves appear to be correct. Do you have any suggestions on how to fix it?
Actually, the same issue is present in Hugging Face's dataset viewer on the dclm-baseline page.
This error is raised when the names of the fields or their data types do not match the inferred ones.
In this case, an item has a field called "WARC-Identified-Payload-Type" (in the "metadata" column) that was not present in the items before it, so it was not taken into account when the "metadata" struct type was inferred (inference only uses the first items).
To fix this error, feature inference should be avoided by providing all the fields and their types explicitly. This can be done in the README.md file, using the dataset_info (and features) YAML tags.
@albertvillanova thanks for the quick reply and explanation. I tried giving manually created features to load_dataset(), but it's still failing.
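Concretely, my attempt looked roughly like this (an abbreviated sketch; the real feature list is much longer):

```python
from datasets import load_dataset, Features, Value

# Abbreviated sketch of the manually defined features
features = Features({
    "text": Value("string"),
    "url": Value("string"),
    "metadata": {
        "Content-Length": Value("string"),
        "Content-Type": Value("string"),
        # ... and the rest of the WARC-* fields
    },
})
ds = load_dataset("./dclm-baseline-1.0", split="train", features=features)
```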
Actually, is there a way to ignore fields in the raw files? I tried skipping them in manually defined features, but it fails immediately.
I can try defining the features in the YAML file; let's see if it works.
Using README.md also doesn't work. I tried specifying only the features that are important for us, with this:
dataset_info:
  features:
    - name: bff_contained_ngram_count_before_dedupe
      dtype: int64
    - name: previous_word_count
      dtype: int64
    - name: url
      dtype: string
    - name: text
      dtype: string
    - name: fasttext_openhermes_reddit_eli5_vs_rw_v2_bigram_200k_train_prob
      dtype: float64
But it fails: All the data files must have the same columns, but at some point there are 3 new columns ({'metadata', 'language_id_whole_page_fasttext', 'warcinfo'})
I also tried specifying all the features, but in that case it fails with the same cast error as in the original post.
Is the only viable option to write a builder script?
@yury-zyphra you need to specify all the features present in the dataset, but you can tweak their dtype so that no error is raised.
I guess the main issue is the dtype of the "metadata" feature: it is a struct, but its field names are not always the same.
You need to investigate all the possible field names within the "metadata" feature across the entire dataset and exhaustively define all of them within a struct dtype:
dataset_info:
  features:
    - name: ...
      ...
    - name: metadata
      struct:
        - name: Content-Length
          dtype: string
        - name: Content-Type
          dtype: string
        - name: ... # list ALL possible field names
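One way to enumerate them is to scan the raw files, e.g. (a sketch, assuming a local clone with zstd-compressed JSON Lines files; adjust the glob to your layout):

```python
import glob
import io
import json

import zstandard  # pip install zstandard

# Collect every key that ever appears in the "metadata" struct.
# Note: this reads the whole dataset, which can take a while.
keys = set()
for path in glob.glob("./dclm-baseline-1.0/**/*.jsonl.zst", recursive=True):
    with open(path, "rb") as fh:
        reader = zstandard.ZstdDecompressor().stream_reader(fh)
        for line in io.TextIOWrapper(reader, encoding="utf-8"):
            keys.update(json.loads(line).get("metadata", {}).keys())

print(sorted(keys))
```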
I tried specifying metadata as string, hoping it would just cast it to a string, but it fails with the message that it cannot cast struct as string.
Could someone from the DCLM team please comment?
Calling DCLM Team Member
@albertvillanova The issue is still there even after your fix here: https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0/discussions/11
The error can also still be seen in the dataset viewer.
I ended up manually converting the whole dataset locally, forcing the metadata column to be a string in the jsonl's.
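Roughly like this (a sketch, not the exact script; paths and output handling are illustrative):

```python
import glob
import io
import json

import zstandard  # pip install zstandard

# Re-serialize the nested "metadata" struct as a plain JSON string
# so that feature inference sees a uniform string column.
for path in glob.glob("./dclm-baseline-1.0/**/*.jsonl.zst", recursive=True):
    out_path = path[: -len(".jsonl.zst")] + ".fixed.jsonl"
    with open(path, "rb") as fh, open(out_path, "w", encoding="utf-8") as out:
        reader = zstandard.ZstdDecompressor().stream_reader(fh)
        for line in io.TextIOWrapper(reader, encoding="utf-8"):
            row = json.loads(line)
            if "metadata" in row:
                row["metadata"] = json.dumps(row["metadata"])
            out.write(json.dumps(row) + "\n")
```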
Hi @yury-zyphra, could you share more details on how you converted the dataset from zstd to jsonl and ensured the metadata fields are treated as strings? I'm also running into type casting errors with my current approach (error details and schema/features below). Any tips you could provide would be greatly appreciated!
TypeError: Couldn't cast array of type
struct<Content-Length: string, Content-Type: string, WARC-Block-Digest: string, WARC-Concurrent-To: string, WARC-Date: timestamp[s], WARC-IP-Address: string, WARC-Payload-Digest: string, WARC-Record-ID: string, WARC-Target-URI: string, WARC-Type: string, WARC-Warcinfo-ID: string, WARC-Truncated: string>
to
{'Content-Length': Value(dtype='string', id=None), 'Content-Type': Value(dtype='string', id=None), 'WARC-Block-Digest': Value(dtype='string', id=None), 'WARC-Concurrent-To': Value(dtype='string', id=None), 'WARC-Date': Value(dtype='timestamp[s]', id=None), 'WARC-IP-Address': Value(dtype='string', id=None), 'WARC-Identified-Payload-Type': Value(dtype='string', id=None), 'WARC-Payload-Digest': Value(dtype='string', id=None), 'WARC-Record-ID': Value(dtype='string', id=None), 'WARC-Target-URI': Value(dtype='string', id=None), 'WARC-Type': Value(dtype='string', id=None), 'WARC-Warcinfo-ID': Value(dtype='string', id=None), 'WARC-Truncated': Value(dtype='string', id=None)}
from datasets import Features, Value

features = Features({
'bff_contained_ngram_count_before_dedupe': Value('int64'),
'language_id_whole_page_fasttext': {
'en': Value('float64')
},
'metadata': {
'Content-Length': Value('string'),
'Content-Type': Value('string'),
'WARC-Block-Digest': Value('string'),
'WARC-Concurrent-To': Value('string'),
'WARC-Date': Value('timestamp[s]'),
'WARC-IP-Address': Value('string'),
'WARC-Identified-Payload-Type': Value('string'),
'WARC-Payload-Digest': Value('string'),
'WARC-Record-ID': Value('string'),
'WARC-Target-URI': Value('string'),
'WARC-Type': Value('string'),
'WARC-Warcinfo-ID': Value('string'),
'WARC-Truncated': Value('string')
},
'previous_word_count': Value('int64'),
'text': Value('string'),
'url': Value('string'),
'warcinfo': Value('string'),
'fasttext_openhermes_reddit_eli5_vs_rw_v2_bigram_200k_train_prob': Value('float64')
})
There is a parquet version of the dataset: https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0-parquet. It seems like it dropped the metadata column completely, but it works without issues.
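Loading it works directly, e.g. (a sketch; streaming just to avoid downloading everything):

```python
from datasets import load_dataset

# The parquet mirror loads without the struct-casting issue
ds = load_dataset("mlfoundations/dclm-baseline-1.0-parquet", split="train", streaming=True)
print(next(iter(ds)))
```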
Great, thank you very much for the pointer!