🚀🚀 Exciting times for the document AI community!
We're thrilled to announce the release of some of the largest OCR datasets available to the public.
🔥 With over 26 million pages, 18 billion text tokens, and 6TB of data, these resources are a significant leap forward for document AI research.
Here's how to access these datasets quickly:
```python
from datasets import load_dataset

# Stream both datasets directly from the Hub; no full download required
pdfa_dataset = load_dataset('pixparse/pdfa-eng-wds', streaming=True)
idl_dataset = load_dataset('pixparse/idl-wds', streaming=True)
```
Streaming lets you start working with the data right away, without downloading the full 6TB up front, and integrates seamlessly with the Hugging Face datasets library. On the Hub, you can find them here:
pixparse/pdfa-eng-wds
pixparse/idl-wds
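Continuing from the snippet above, a quick way to sanity-check a streamed sample is to pull one record and print its fields. The exact keys (e.g. the PDF bytes and OCR annotations) are defined on the dataset cards, so the sketch below inspects them rather than assuming a schema:

```python
# Pull a single record from the streamed train split and list its fields.
# Field names are dataset-specific; see the dataset cards for the schema.
first_sample = next(iter(pdfa_dataset['train']))
print(list(first_sample.keys()))
```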
For leaner data loading, the new [chug](https://github.com/huggingface/chug) library offers an efficient pipeline with built-in PDF decoding:
```python
import chug

# Task config: decode every page of each document
task_cfg = chug.DataTaskDocReadCfg(
    page_sampling='all',
)
# Data config: stream the PDFA dataset via Hugging Face datasets ('hfids')
data_cfg = chug.DataCfg(
    source='pixparse/pdfa-eng-wds',
    split='train',
    batch_size=None,  # yield individual samples rather than batches
    format='hfids',
    num_workers=0,
)
data_loader = chug.create_loader(
    data_cfg,
    task_cfg,
)
sample = next(iter(data_loader))
```
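From there, the loader behaves like any Python iterable. As a minimal follow-up sketch (continuing from the snippet above), the loop below peeks at a few samples; the exact structure of each sample depends on the task config, so it only inspects types rather than assuming a schema:

```python
# Continuing from the snippet above: peek at a few samples from the loader.
# Sample contents (page images, text targets, etc.) depend on the task
# config, so we only inspect the type here.
for i, sample in enumerate(data_loader):
    print(f"sample {i}: {type(sample).__name__}")
    if i == 2:
        break
```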
We owe a huge thank you to Peter Wyatt, Kate Tasker, Rachel Taketa, Ali Furkan Biten, Ruben Tito, and their colleagues for their contributions. Their work putting these datasets together has been invaluable. 🤗
Looking Ahead:
We're on a mission to enhance document AI capabilities, and these datasets are just the beginning. With your engagement and innovation, we're confident in the community's ability to develop robust OCR solutions. We encourage you to explore these datasets, experiment with the code, and contribute to the collective progress in document AI.
For detailed information on usage and licensing, please refer to the dataset cards on the Hugging Face Hub.