---
title: PubLayNet
license: other
annotations_creators: []
language:
- en
size_categories:
- 100B<n<1T
source_datasets: []
task_categories:
- image-to-text
task_ids: []
---

# PubLayNet

PubLayNet is a large dataset of document images whose layouts are annotated with both bounding boxes and polygonal segmentations. The documents are sourced from the [PubMed Central Open Access Subset (commercial use collection)](https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/). The annotations are automatically generated by matching the PDF and XML formats of the articles in the PubMed Central Open Access Subset. More details are available in the paper ["PubLayNet: largest dataset ever for document layout analysis"](https://arxiv.org/abs/1908.07836).

The public dataset is distributed as tar.gz archives, which do not work well with Hugging Face streaming. Modifications have been made to optimise delivery of the dataset through the Hugging Face Datasets API. The original files can be found [here](https://developer.ibm.com/exchanges/data/all/publaynet/).
21
+ Licence: [Community Data License Agreement – Permissive – Version 1.0 License](https://cdla.dev/permissive-1-0/)
22
+
23
+ Author: IBM
24
+
25
+ GitHub: https://github.com/ibm-aur-nlp/PubLayNet
26
+
27
+ @article{ zhong2019publaynet,
28
+ title = { PubLayNet: largest dataset ever for document layout analysis },
29
+ author = { Zhong, Xu and Tang, Jianbin and Yepes, Antonio Jimeno },
30
+ journal = { arXiv preprint arXiv:1908.07836},
31
+ year. = { 2019 }
32
+ }