---
license: other
license_name: pdfa-eng-train
license_link: LICENSE
task_categories:
- image-to-text
size_categories:
- 10M<n<100M
---

# Dataset Card for PDF Association dataset (PDFA)

## Dataset Description

- **Point of Contact from curators:** [Peter Wyatt, PDF Association CTO](mailto:peter.wyatt@pdfa.org)
- **Point of Contact Hugging Face:** [Pablo Montalvo](mailto:pablo@huggingface.co)

### Dataset Summary

The PDFA dataset is a document dataset filtered from the SafeDocs corpus (aka CC-MAIN-2021-31-PDF-UNTRUNCATED), with 48 million pages kept as valid samples.
Each document exists as a pairing of a PDF and a JSON file containing extensive OCR annotations as well as metadata about rendering times. The filtering and packaging in webdataset format are tailored towards multimodal machine learning at scale, specifically image-to-text tasks.

In this dataset, an additional filtering step restricts documents to the English language, yielding 18.6 million pages across 2.16 million documents.
Further, the metadata for each document has been formatted in the same way as in [pixparse/IDL-wds](https://huggingface.co/datasets/pixparse/IDL-wds).

### Usage

This instance of PDFA is in [webdataset](https://github.com/webdataset/webdataset/commits/main) .tar format.
It can be used with the webdataset library or with current releases of the Hugging Face `datasets` library, and can be streamed directly from the Hub that way.
```python
from datasets import load_dataset

pdfa_english = load_dataset('pixparse/pdfa-english-train', streaming=True)

print(next(iter(pdfa_english['train'])).keys())
# >> dict_keys(['__key__', '__url__', 'json', 'pdf'])
```
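
Each sample pairs raw `pdf` bytes with OCR annotations under the `json` key. The sketch below shows one way to inspect a sample, using pypdfium2 as an illustrative rendering backend (it is not a stated dependency of this dataset) and without assuming a particular annotation schema:

```python
import json

import pypdfium2 as pdfium  # illustrative renderer choice, not a dataset dependency

# Reuses the streaming `pdfa_english` dataset from the example above.
sample = next(iter(pdfa_english['train']))

# The loader may hand annotations back already parsed or as raw bytes.
annotations = sample['json']
if isinstance(annotations, (bytes, str)):
    annotations = json.loads(annotations)
print(list(annotations)[:5])  # peek at the top-level structure; no schema assumed

# Render the first page of the paired PDF to a PIL image.
pdf = pdfium.PdfDocument(sample['pdf'])
first_page_image = pdf[0].render(scale=2).to_pil()
first_page_image.save('first_page.png')
```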
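
The same shards can also be streamed with the webdataset library directly. A minimal sketch, assuming the shard URL pattern `pdfa-eng-train-{0000..1799}.tar` under the dataset repository (1800 shards, per the Data Splits section below):

```python
import json

import webdataset as wds

# Assumed URL pattern for the 1800 train shards hosted on the Hub.
url = (
    "https://huggingface.co/datasets/pixparse/pdfa-english-train/resolve/main/"
    "pdfa-eng-train-{0000..1799}.tar"
)

# Each raw sample is a dict of bytes keyed by file extension.
dataset = wds.WebDataset(url)

for sample in dataset:
    annotations = json.loads(sample["json"])  # OCR annotations and metadata
    pdf_bytes = sample["pdf"]                 # raw PDF document
    break
```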

Further, a metadata file `_pdfa-english-train-info-minimal.json` contains the list of samples per shard, with the same basename and `.json` or `.pdf` extension, as well as the count of files per shard.
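
This listing can be inspected directly; a minimal sketch, assuming the file has been downloaded to the working directory and without relying on its exact layout beyond what is described above:

```python
import json

# Assumes the metadata file was downloaded from the dataset repository root.
with open('_pdfa-english-train-info-minimal.json') as f:
    shard_info = json.load(f)

# Peek at the structure (shard names, sample lists, file counts) rather than
# assuming exact field names.
print(type(shard_info))
print(list(shard_info)[:3])
```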

### Data Splits

#### Train

* `pdfa-eng-train-*.tar`
* Downloaded on 2024/01/22
* 1800 shards, 2,159,433 samples, 18,686,346 pages, 5,997,818,991 words

## Additional Information

### Dataset Curators

Pablo Montalvo, Ross Wightman

### Licensing Information

Data has been filtered from the original corpus. As a consequence, users should note [Common Crawl's license and terms of use](https://commoncrawl.org/terms-of-use) and the [Digital Corpora project's Terms of Use](https://digitalcorpora.org/about-digitalcorpora/terms-of-use/).

### Citation Information

??