---
annotations_creators:
- expert-generated
language: []
language_creators:
- expert-generated
license:
- cc-by-4.0
multilinguality: []
pretty_name: YALTAi Tabular Dataset
size_categories:
- n<1K
source_datasets: []
tags:
- manuscripts
- lam
task_categories:
- object-detection
task_ids: []
---

# YALTAi Tabular Dataset

## Table of Contents
- [YALTAi Tabular Dataset](#yaltai-tabular-dataset)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://doi.org/10.5281/zenodo.6827706](https://doi.org/10.5281/zenodo.6827706) 
- **Paper:** [https://arxiv.org/abs/2207.11230](https://arxiv.org/abs/2207.11230)

### Dataset Summary

This dataset contains a subset of the data used in the paper [You Actually Look Twice At it (YALTAi): using an object detection approach instead of region segmentation within the Kraken engine](https://arxiv.org/abs/2207.11230). This paper proposes treating page layout recognition on historical documents as an object detection task (compared to the usual pixel segmentation approach). This dataset covers pages with tabular information, annotated with the following object classes: "Header", "Col", "Marginal" and "Text".

### Supported Tasks and Leaderboards

- `object-detection`: This dataset can be used to train a model for object detection on historical document images.


## Dataset Structure

This dataset has two configurations. Both cover the same data and annotations but provide the annotations in different forms, to make it easier to integrate the data with existing processing pipelines.

- The first configuration, `YOLO`, uses the original format of the data.
- The second configuration converts the YOLO format into a format closer to the `COCO` annotation format. This is done in particular to make it easier to work with the feature extractors from `Transformers` object-detection models, which expect data in a COCO-style format. A loading sketch for both configurations follows below.
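
Both configurations can be loaded with the `datasets` library. A minimal loading sketch; the Hub repository id and the configuration names below are assumptions for illustration, so substitute the actual values if they differ:

```python
from datasets import load_dataset

# NOTE: the repository id here is an assumption for illustration; replace it
# with the actual Hub id of this dataset if it differs.
yolo_ds = load_dataset("biglam/yalta_ai_tabular_dataset", "YOLO")
coco_ds = load_dataset("biglam/yalta_ai_tabular_dataset", "COCO")

print(yolo_ds)  # DatasetDict with train/validation/test splits
```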

### Data Instances


An example instance from the COCO config:

```python
{'height': 2944,
 'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=2064x2944 at 0x7FA413CDA210>,
 'image_id': 0,
 'objects': [{'area': 435956,
   'bbox': [0.0, 244.0, 1493.0, 292.0],
   'category_id': 0,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 88234,
   'bbox': [305.0, 127.0, 562.0, 157.0],
   'category_id': 2,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 5244,
   'bbox': [1416.0, 196.0, 92.0, 57.0],
   'category_id': 2,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 5720,
   'bbox': [1681.0, 182.0, 88.0, 65.0],
   'category_id': 2,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 374085,
   'bbox': [0.0, 540.0, 163.0, 2295.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 577599,
   'bbox': [104.0, 537.0, 253.0, 2283.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 598670,
   'bbox': [304.0, 533.0, 262.0, 2285.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 56,
   'bbox': [284.0, 539.0, 8.0, 7.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 1868412,
   'bbox': [498.0, 513.0, 812.0, 2301.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 307800,
   'bbox': [1250.0, 512.0, 135.0, 2280.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 494109,
   'bbox': [1330.0, 503.0, 217.0, 2277.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 52,
   'bbox': [1734.0, 1013.0, 4.0, 13.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []},
  {'area': 90666,
   'bbox': [0.0, 1151.0, 54.0, 1679.0],
   'category_id': 1,
   'id': 0,
   'image_id': '0',
   'iscrowd': False,
   'segmentation': []}],
 'width': 2064}
```

An example instance from the YOLO config: 

```python
{'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=2064x2944 at 0x7FAA140F2450>,
 'objects': {'bbox': [[747, 390, 1493, 292],
   [586, 206, 562, 157],
   [1463, 225, 92, 57],
   [1725, 215, 88, 65],
   [80, 1688, 163, 2295],
   [231, 1678, 253, 2283],
   [435, 1675, 262, 2285],
   [288, 543, 8, 7],
   [905, 1663, 812, 2301],
   [1318, 1653, 135, 2280],
   [1439, 1642, 217, 2277],
   [1737, 1019, 4, 13],
   [26, 1991, 54, 1679]],
  'label': [0, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1]}}
```
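
For a quick visual sanity check, the COCO-style boxes can be drawn directly onto the image with Pillow. A minimal sketch, reusing `coco_ds` from the loading example above and the field names shown in the COCO instance:

```python
from PIL import ImageDraw

example = coco_ds["train"][0]
image = example["image"].convert("RGB")  # images are greyscale ("L" mode)
draw = ImageDraw.Draw(image)
for obj in example["objects"]:
    x, y, w, h = obj["bbox"]  # COCO boxes are (x_min, y_min, width, height)
    draw.rectangle([x, y, x + w, y + h], outline="red", width=3)
image.save("example_with_boxes.png")
```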




### Data Fields

The fields for the YOLO config:

- `image`: the image
- `objects`: the annotations, which consist of:
    - `bbox`: a list of bounding boxes for the image
    - `label`: a list of labels for this image

The fields for the COCO config:

- `height`: height of the image
- `width`: width of the image
- `image`: the image
- `image_id`: id for the image
- `objects`: annotations in COCO format, consisting of a list of dictionaries with the following keys:
  - `bbox`: the bounding box for the annotation
  - `category_id`: the label for the bounding box
  - `image_id`: id for the image
  - `iscrowd`: the COCO `iscrowd` flag
  - `segmentation`: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts)
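
Comparing the two example instances above, the YOLO config appears to store boxes as absolute-pixel `(x_center, y_center, width, height)`, while the COCO config stores `(x_min, y_min, width, height)`. A minimal conversion sketch under that assumption (small differences from the listed values come from rounding or clipping):

```python
def yolo_center_to_coco(bbox):
    """Convert an absolute-pixel (cx, cy, w, h) box to COCO (x_min, y_min, w, h)."""
    cx, cy, w, h = bbox
    return [max(cx - w / 2, 0), max(cy - h / 2, 0), w, h]

# The first box of the YOLO example maps onto the first COCO box:
print(yolo_center_to_coco([747, 390, 1493, 292]))  # ~ [0.5, 244.0, 1493, 292]
```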


 
### Data Splits

The dataset contains a train, validation and test split with the following numbers per split:


|          | train | validation | test |
|----------|-------|------------|------|
| examples | 196   | 22         | 135  |
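
These counts can be checked against a loaded copy of the dataset, reusing `yolo_ds` from the loading sketch above:

```python
for split in ("train", "validation", "test"):
    print(split, yolo_ds[split].num_rows)
```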


## Dataset Creation

> [this] dataset was produced using a single source, the Lectaurep Repertoires dataset [Rostaing et al., 2021], which served as a basis for only the training and development split. The test set is composed of original data, from various documents, from the 17th century up to the early 20th with a single soldier war report. The test set is voluntarily very different and out of domain, with column borders that are not drawn nor printed in certain cases, layout in some kind of masonry layout. p.8

### Curation Rationale

This dataset was created to produce a simplified version of the [Lectaurep Repertoires dataset](https://github.com/HTR-United/lectaurep-repertoires), which was found to contain:

> around 16 different ways to describe columns, from Col1 to Col7, the case-different col1-col7 and finally ColPair and ColOdd, which we all reduced to Col p.8

### Source Data

#### Initial Data Collection and Normalization

The LECTAUREP (LECTure Automatique de REPertoires) project, which began in 2018, is a joint initiative of the Minutier central des notaires de Paris of the National Archives, the [ALMAnaCH (Automatic Language Modeling and Analysis & Computational Humanities)](https://www.inria.fr/en/almanach) team at Inria and the EPHE (École Pratique des Hautes Études), in partnership with the Ministry of Culture.

> The lectaurep-bronod corpus brings together 100 pages from the repertoire of Maître Louis Bronod (1719-1765), notary in Paris from December 13, 1719 to July 23, 1765. The pages concerned were written during the years 1742 to 1745.

#### Who are the source language producers?

[More information needed]

### Annotations

|          | Train | Dev | Test | Total | Average area | Median area |
|----------|-------|-----|------|-------|--------------|-------------|
| Col      | 724   | 105 | 829  | 1658  | 9.32         | 6.33        |
| Header   | 103   | 15  | 42   | 160   | 6.78         | 7.10        |
| Marginal | 60    | 8   | 0    | 68    | 0.70         | 0.71        |
| Text     | 13    | 5   | 0    | 18    | 0.01         | 0.00        |
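
The per-split class counts can be reproduced from the YOLO config. A sketch reusing `yolo_ds` from the loading example above; the mapping of label ids to class names (0 = Header, 1 = Col, 2 = Marginal, 3 = Text) is an assumption inferred from the example instances:

```python
from collections import Counter

counts = Counter()
for example in yolo_ds["train"]:
    counts.update(example["objects"]["label"])
print(counts)  # e.g. 724 boxes with label 1 (Col) expected in train
```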


#### Annotation process

[More information needed]

#### Who are the annotators?

[More information needed]

### Personal and Sensitive Information

This data does not contain information relating to living individuals.

## Considerations for Using the Data

### Social Impact of Dataset

There are a growing number of datasets related to page layout for historical documents. This dataset offers a different approach to annotating such datasets (focusing on object detection rather than pixel-level annotations).

### Discussion of Biases

Historical documents contain a broad variety of page layouts, so it is not certain that models trained on this dataset will transfer to documents with very different layouts.

### Other Known Limitations

[More information needed]


## Additional Information

### Dataset Curators

[More information needed]

### Licensing Information

[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)

### Citation Information

```
@dataset{clerice_thibault_2022_6827706,
  author       = {Clérice, Thibault},
  title        = {YALTAi: Tabular Dataset},
  month        = jul,
  year         = 2022,
  publisher    = {Zenodo},
  version      = {1.0.0},
  doi          = {10.5281/zenodo.6827706},
  url          = {https://doi.org/10.5281/zenodo.6827706}
}
```



[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.6827706.svg)](https://doi.org/10.5281/zenodo.6827706)


    
### Contributions

Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.