pierreguillou committed
Commit b37b665
1 Parent(s): 561d013

Update README.md

Files changed (1):
  1. README.md +25 -3
README.md CHANGED
@@ -2,7 +2,7 @@
 annotations_creators:
 - crowdsourced
 license: other
-pretty_name: DocLayNet
+pretty_name: DocLayNet small
 size_categories:
 - 0K<n<1K
 tags:
@@ -50,14 +50,36 @@ For all these reasons, I decided to process the DocLayNet dataset:
 - into 3 datasets of different sizes:
   - [DocLayNet small](https://huggingface.co/datasets/pierreguillou/DocLayNet-small) < 1.000k document images
   - [DocLayNet base](https://huggingface.co/datasets/pierreguillou/DocLayNet-base) < 10.000k document images- with associated texts,
+  - DocLayNet large with full dataset (to be done)
 - and in a format facilitating their use by HF notebooks.
 
 *Note: the layout HF notebooks will greatly help participants of the IBM [ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents](https://ds4sd.github.io/icdar23-doclaynet/)!*
 
-### Download
+### Download & overview
 
 ```
+# !pip install -q datasets
+
+from datasets import load_dataset
+
+dataset_small = load_dataset("pierreguillou/DocLayNet-small")
+
+# overview of dataset_small
+
+DatasetDict({
+    train: Dataset({
+        features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'],
+        num_rows: 691
+    })
+    validation: Dataset({
+        features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'],
+        num_rows: 64
+    })
+    test: Dataset({
+        features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'],
+        num_rows: 49
+    })
+})
 ```
 
  ### HF notebooks
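An aside on the feature list in the overview above: each row carries both `original_width`/`original_height` (the source page size) and `coco_width`/`coco_height` (the image size the annotations were drawn at), so bounding boxes may need rescaling between the two coordinate spaces. A minimal sketch of such a conversion (hypothetical helper, not part of the dataset card; field semantics assumed from the feature names):

```python
def rescale_bbox(bbox, coco_size, original_size):
    """Rescale an [x, y, width, height] bbox from the annotated (COCO)
    image size back to the original page size.
    Hypothetical helper: assumes bboxes are in COCO [x, y, w, h] format."""
    sx = original_size[0] / coco_size[0]  # horizontal scale factor
    sy = original_size[1] / coco_size[1]  # vertical scale factor
    x, y, w, h = bbox
    return [x * sx, y * sy, w * sx, h * sy]

# Example: a box annotated on a 1025x1025 image, page originally 612x792
box = rescale_bbox([102.5, 205.0, 512.5, 410.0], (1025, 1025), (612, 792))
```

The same helper works in the other direction by swapping the two size arguments.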