
TextDiffuser-MARIO-10M

Dataset description

MARIO-10M is a dataset of about 10 million text images collected from a variety of sources such as book covers, posters, and tickets. Each image is accompanied by OCR results and a caption.

Download

The download process consists of three steps:

[1] Download all the tar files

for i in {0..500}; do
    wget -O "$i.tar.gz" "https://huggingface.co/datasets/JingyeChen22/TextDiffuser-MARIO-10M/resolve/main/$i.tar.gz?download=true"
done
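If you prefer to drive the downloads from Python, the shard list can be built programmatically and handed to any downloader. This is only a sketch mirroring the wget loop above; the `0.tar.gz` … `500.tar.gz` naming is taken from this card, so adjust `n_shards` if the shard count ever changes:

```python
# Base URL of the dataset repo on the Hugging Face Hub (from the wget loop above).
BASE = "https://huggingface.co/datasets/JingyeChen22/TextDiffuser-MARIO-10M/resolve/main"

def shard_urls(n_shards: int = 501) -> list[str]:
    """Return the download URL for each top-level shard 0.tar.gz .. 500.tar.gz."""
    return [f"{BASE}/{i}.tar.gz?download=true" for i in range(n_shards)]

urls = shard_urls()
print(len(urls))  # 501 shards in total
```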

[2] Extract the top-level archives

for i in {0..500}; do
    tar -xvf "$i.tar.gz" --strip-components=5 && rm "$i.tar.gz"
done

[3] Extract the second-level archives

for i in {0..500}; do
    (
        cd "$i" && for file in *.tar.gz; do
            tar -xvf "$file" --strip-components=5 && rm "$file"
        done
    )
done
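Since step [3] deletes each archive only after a successful extraction, a quick sanity check is to look for leftover archives. This Python sketch (a convenience helper, not part of the dataset tooling) counts any `.tar.gz` files remaining under the dataset root:

```python
from pathlib import Path

def count_leftover_archives(root: str = ".") -> int:
    """Count .tar.gz files left under root; 0 means every shard extracted cleanly."""
    return sum(1 for _ in Path(root).rglob("*.tar.gz"))

# Example: count_leftover_archives("MARIO-10M") should return 0 after step [3].
```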

Finally, the directory tree should look like this:

MARIO-10M/
│
├── 0/
│   ├── 00000/
│   │   ├── 000000012/
│   │   │   ├── caption.txt
│   │   │   ├── charseg.npy
│   │   │   ├── image.jpg
│   │   │   ├── info.json
│   │   │   └── ocr.txt
...
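Each leaf directory holds one sample. The sketch below (a hypothetical helper; it assumes exactly the five files shown in the tree, with charseg.npy a NumPy array and info.json plain JSON) loads a sample into a dict, keeping the image as raw bytes to avoid an image-library dependency:

```python
import json
from pathlib import Path

import numpy as np

def load_sample(sample_dir) -> dict:
    """Load one MARIO-10M sample directory (caption, OCR, metadata, charseg, image)."""
    d = Path(sample_dir)
    return {
        "caption": (d / "caption.txt").read_text().strip(),
        "ocr": (d / "ocr.txt").read_text().strip(),
        "info": json.loads((d / "info.json").read_text()),
        "charseg": np.load(d / "charseg.npy"),    # character segmentation map
        "image": (d / "image.jpg").read_bytes(),  # decode with PIL/OpenCV as needed
    }

# Example: sample = load_sample("MARIO-10M/0/00000/000000012")
```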

Citation

If you find the MARIO dataset useful in your research, please cite the following papers:

@article{chen2024textdiffuser,
  title={Textdiffuser: Diffusion models as text painters},
  author={Chen, Jingye and Huang, Yupan and Lv, Tengchao and Cui, Lei and Chen, Qifeng and Wei, Furu},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  year={2024}
}

@inproceedings{chen2023textdiffuser,
  title={TextDiffuser-2: Unleashing the power of language models for text rendering},
  author={Chen, Jingye and Huang, Yupan and Lv, Tengchao and Cui, Lei and Chen, Qifeng and Wei, Furu},
  booktitle={European Conference on Computer Vision},
  year={2024}
}

License

Microsoft Open Source Code of Conduct
