BadZipFile when using datasets.load_dataset

#1
by trancelestial - opened

Hi, I tried using the dataset with the HF datasets library; however, the load_dataset function fails with "zipfile.BadZipFile: zipfiles that span multiple disks are not supported". Can anyone reproduce this, or is it specific to my setup?

Hi, the issue is that Hugging Face's datasets library does not support multipart zips. Until this is fixed, you can use the download script provided in the scripts folder to download the dataset.

I can reproduce this issue. Here is my downloading code:

import os
import argparse
import zipfile
import warnings

from typing import Literal, get_args

from huggingface_hub import HfFileSystem


PBRMapType = Literal[
    "specular",
    "roughness",
    "normal",
    "metallic",
    "height",
    "displacement",
    "diffuse",
    "basecolor",
]


def main(
    map_type: PBRMapType,
    dest_dir: str,
    repo_id: str = "gvecchio/MatSynth",
    revision: str = "45c95ca2c7d6cfe514028c31cbfe4aa88ee33bcc",
) -> None:
    os.makedirs(dest_dir, exist_ok=True)
    fs = HfFileSystem()
    # List every per-class zip under maps/ at the pinned revision.
    zip_files = sorted(
        fs.glob(os.path.join("datasets", repo_id, "maps/*/*.zip"), revision=revision)
    )
    for zf in zip_files:
        with fs.open(zf) as opened_file:
            try:
                handle = zipfile.ZipFile(opened_file)
            except zipfile.BadZipFile as err:
                # zipfile complains that "zipfiles that span multiple disks are not supported".
                warnings.warn(f"failed to open {zf} with {repr(err)}")
                continue
            # Keep only the requested map type; one PNG per material folder.
            for file_path in handle.namelist():
                if not file_path.endswith(f"{map_type}.png"):
                    continue
                pbr_bytes = handle.read(file_path)
                map_name = os.path.basename(os.path.dirname(file_path))
                new_name = f"{map_name}__{map_type}.png"
                dest_path = os.path.join(dest_dir, new_name)
                with open(dest_path, "wb") as f:
                    f.write(pbr_bytes)
                print(f"wrote {new_name}")


if __name__ == "__main__":
    p = argparse.ArgumentParser()
    p.add_argument("--map_type", type=str, choices=get_args(PBRMapType), required=True)
    p.add_argument("--dest_dir", type=str, required=True)
    kwargs = vars(p.parse_args())
    main(**kwargs)
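
Saved as e.g. download_maps.py (the filename is arbitrary), the script can be run with:

python download_maps.py --map_type basecolor --dest_dir ./matsynth_basecolor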

I wonder whether the overall compression ratio of this dataset would be significantly worse if each material folder were compressed into its own small zip (e.g. acg_acoustic_foam_001.zip), rather than compressing entire classes into many-GB zips. @gvecchio do you have an estimate of this?
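
A rough way to estimate this locally, assuming you already have one extracted class directory on disk (class_dir and tmp_dir below are placeholder paths, not part of the dataset tooling): re-zip the directory once as a single archive and once per material folder, then compare totals. Since the maps are PNGs and therefore already compressed, the difference is probably dominated by per-archive overhead.

import os
import zipfile


def zip_dir(src_dir: str, dest_zip: str) -> int:
    """Zip src_dir recursively into dest_zip and return the archive size in bytes."""
    with zipfile.ZipFile(dest_zip, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        for root, _, files in os.walk(src_dir):
            for name in files:
                path = os.path.join(root, name)
                zf.write(path, arcname=os.path.relpath(path, src_dir))
    return os.path.getsize(dest_zip)


def compare(class_dir: str, tmp_dir: str) -> None:
    os.makedirs(tmp_dir, exist_ok=True)
    # One big zip for the whole class.
    class_size = zip_dir(class_dir, os.path.join(tmp_dir, "class.zip"))
    # One small zip per material folder.
    per_material = sum(
        zip_dir(os.path.join(class_dir, mat), os.path.join(tmp_dir, f"{mat}.zip"))
        for mat in sorted(os.listdir(class_dir))
        if os.path.isdir(os.path.join(class_dir, mat))
    )
    print(f"class zip: {class_size / 1e9:.2f} GB, per-material zips: {per_material / 1e9:.2f} GB")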

Thanks for your suggestion, we are working on making the dataset easier to load directly with load_dataset!

Does the bug still exist, or can the dataset now be accessed from the datasets library?

I ask because the download script fails every time, and the problem looks like it's the URL.

You should now be able to load the dataset using the HF APIs, as in the example below. We will also update the download script to extract the raw materials from the arrow files.

from datasets import load_dataset

ds = load_dataset(
    "gvecchio/MatSynth",
    split="test",
    streaming=True,
)
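
If you want to iterate the streamed split and save maps to disk, something like the sketch below should work; the feature names used here ("name", "basecolor") are assumptions about the schema rather than something confirmed in this thread, so check ds.features for the exact names.

for item in ds:
    # Image features are decoded to PIL images by the datasets library.
    item["basecolor"].save(f"{item['name']}__basecolor.png")
    break  # stop after the first material, as a smoke test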

thank you @gvecchio!

gvecchio changed discussion status to closed
