Error in train split, question containing 25651 characters!
There is an error with a sample in the train split.
Sample id=572fdefb947a6a140053cd8d
contains the question "What radiates two lobes perpendicular to the antennas axis?";
however, this string is padded with a large number of leading and trailing whitespace characters, bringing the total character count to 25651.
There are 25457 whitespace characters before the question, and 135 whitespace characters after the question.
The code used to obtain these numbers is below:
from datasets import load_dataset
from functools import reduce

squadv2 = load_dataset('squad_v2', split='train')

# Add a column with the length of each question (including any padding whitespace).
squadv2_lens = squadv2.map(lambda x: {'question_len': len(x['question'])})

# Find the sample with the longest question.
long_question = reduce(
    lambda value, elem: value if value['question_len'] > elem['question_len'] else elem,
    squadv2_lens,
    {'question_len': 0},
)

print('Total len:', len(long_question['question']))
print('Stripped question:', long_question['question'].strip())
print('id:', long_question['id'])
# rstrip() removes only trailing whitespace, so the difference to strip() is the leading count.
print('Spaces before:', len(long_question['question'].rstrip()) - len(long_question['question'].strip()))
# lstrip() removes only leading whitespace, so the difference to strip() is the trailing count.
print('Spaces after: ', len(long_question['question'].lstrip()) - len(long_question['question'].strip()))
Running this gives the following output:
Total len: 25651
Stripped question: What radiates two lobes perpendicular to the antennas axis?
id: 572fdefb947a6a140053cd8d
Spaces before: 25457
Spaces after: 135
Note that I have only checked the single longest question. There may be other questions with incorrect spacing, but this is the primary offender.
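A quick way to check whether other samples are affected is to compare each question against its stripped form. This is only a sketch, assuming the same squad_v2 train split loaded as above:

from datasets import load_dataset

squadv2 = load_dataset('squad_v2', split='train')

# Keep every sample whose question has leading or trailing whitespace.
padded = squadv2.filter(lambda x: x['question'] != x['question'].strip())

for sample in padded:
    q = sample['question']
    # id, total length, leading whitespace count, trailing whitespace count
    print(sample['id'], len(q), len(q) - len(q.lstrip()), len(q) - len(q.rstrip()))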
I do not know whether this is an issue with HF's preprocessing of the data, or whether the error is in the underlying files HF uses to generate the parquet splits.
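Until the underlying data is fixed, one possible workaround (again just a sketch, using the same split as above) is to strip the whitespace after loading:

from datasets import load_dataset

squadv2 = load_dataset('squad_v2', split='train')

# Strip leading/trailing whitespace from every question in the split.
squadv2_clean = squadv2.map(lambda x: {'question': x['question'].strip()})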