2552nd image has an enormous ICC profile (1.5 MB)
#13 opened by francois-rozet
Hello, I noticed that the 2552nd image in the training set has an enormous ICC profile (1.5 MB). This is problematic when modifying the dataset (e.g. with map) because Hugging Face saves the image as a PNG, which also contains the ICC profile, and the resulting file cannot be opened by PIL anymore.
>>> from datasets import load_dataset
>>> from PIL import Image
>>> dataset = load_dataset('imagenet-1k')
>>> dataset['train'][2552]['image'].save('hello_kitty.png')
>>> Image.open('hello_kitty.png')
Traceback (most recent call last):
  ...
ValueError: Decompressed Data Too Large
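For reference, the size of the embedded profile can be checked directly from PIL's info dictionary (a quick sketch; Pillow exposes the profile bytes under the 'icc_profile' key for JPEG and PNG images, and img is just a local name here):

>>> img = dataset['train'][2552]['image']
>>> len(img.info.get('icc_profile', b''))  # roughly 1.5 MB of profile bytes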
Therefore, if we map over the dataset to modify the images (e.g. crop or resize), we end up with corrupted cache files.
def transform(row):
    row['image'] = row['image'].resize((128, 128))
    return row

temp = dataset['train'].map(transform)
temp[2552]  # raises ValueError: Decompressed Data Too Large
Hi! You can avoid this error by using the fix from this SO thread. Alternatively, delete the ICC profile from the image's metadata before saving it:
def transform(row):
    row['image'] = row['image'].resize((128, 128))
    # Drop the oversized ICC profile so the cached PNG stays readable
    row['image'].info.pop('icc_profile', None)
    return row
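For completeness, the StackOverflow workaround for this error usually amounts to raising Pillow's decompression limit for compressed PNG chunks (a sketch, not a public API; MAX_TEXT_CHUNK is a module-level constant, and the 10 MB value is an arbitrary choice large enough for this profile):

from PIL import PngImagePlugin

# Raise the limit that triggers "Decompressed Data Too Large" (1 MB by default)
PngImagePlugin.MAX_TEXT_CHUNK = 10 * 1024 * 1024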
Thanks @mariosasko, your fix seems to work! An alternative is to convert the image to a NumPy array and back into an image, which removes all metadata:
import numpy as np
from PIL import Image

def transform(row):
    row['image'] = row['image'].resize((128, 128))
    # Round-tripping through NumPy strips all metadata, including the ICC profile
    row['image'] = Image.fromarray(np.asarray(row['image']))
    return row
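With either variant of transform, the earlier map call should now produce a readable cache; a quick re-check of the failing index (mirroring the snippet above):

temp = dataset['train'].map(transform)
temp[2552]['image']  # opens fine now that no oversized ICC profile is written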
francois-rozet changed discussion status to closed