davanstrien committed
Commit ab392dc
1 Parent(s): 1f5103d

add dataset fields

Files changed (1):
  1. README.md +26 -7

README.md CHANGED
Before:

@@ -57,17 +57,20 @@ task_ids: []

 ### Dataset Summary

- Briefly summarize the dataset, its intended use and the supported tasks. Give an overview of how and why the dataset was created. The summary should explicitly mention the languages present in the dataset (possibly in broad terms, e.g. *translations between several pairs of European languages*), and describe the domain, topic, or genre covered.

 ### Supported Tasks and Leaderboards

- For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (repace the `task-category-tag` with an appropriate `other:other-task-name`).
-
- - `task-category-tag`: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a *high/low* [metric name](https://huggingface.co/metrics/metric_name). The ([model name](https://huggingface.co/model_name) or [model class](https://huggingface.co/transformers/model_doc/model_class.html)) model currently achieves the following score. *[IF A LEADERBOARD IS AVAILABLE]:* This task has an active leaderboard which can be found at [leaderboard url]() and ranks models based on [metric name](https://huggingface.co/metrics/metric_name) while also reporting [other metric name](https://huggingface.co/metrics/other_metric_name).

 ## Dataset Structure

 ### Data Instances

 Provide an JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.

@@ -198,12 +201,28 @@ Provide any additional information that is not covered in the other sections abo

 ### Data Fields

- List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the datasets contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.

- - `example_field`: description of `example_field`

- Note that the descriptions can be initialized with the **Show Markdown Data Fields** output of the [Datasets Tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging), you will then only need to refine the generated descriptions.

 ### Data Splits

 Describe and name the splits in the dataset if there are more than one.
 
After:

 ### Dataset Summary

+ This dataset contains a subset of the data used in the paper [You Actually Look Twice At it (YALTAi): using an object detection approach instead of region segmentation within the Kraken engine](https://arxiv.org/abs/2207.11230). The paper proposes treating page layout recognition on historical documents as an object detection task (rather than the usual pixel segmentation approach). This dataset covers pages with tabular information, annotated with the following objects: "Header", "Col", "Marginal", "text".

 ### Supported Tasks and Leaderboards

+ - `object-detection`: This dataset can be used to train a model for object detection on historic document images.

 ## Dataset Structure

+ This dataset has two configurations. Both configurations cover the same data and annotations but provide the annotations in different forms, making it easier to integrate the data with existing processing pipelines.
+
+ - The first configuration, `YOLO`, uses the original format of the data.
+ - The second configuration converts the YOLO format into a format closer to the `COCO` annotation format. This is done in particular to make it easier to work with the `feature_extractor`s from the `Transformers` models for object detection, which expect data in a COCO-style format.
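
As a minimal sketch of how the two configurations might be loaded with the `datasets` library: the dataset id below is a placeholder for this repository's id on the Hub, and the configuration names and the `train` split name are assumptions, not statements from this card.

```python
from datasets import load_dataset

# Placeholder repository id -- replace with the actual id of this dataset on the Hub.
DATASET_ID = "user/yaltai-tables"

# Assumed configuration names (matching the names used in this card) and an assumed "train" split.
yolo_ds = load_dataset(DATASET_ID, "YOLO", split="train")
coco_ds = load_dataset(DATASET_ID, "COCO", split="train")

print(yolo_ds)           # image + objects {bbox, label}
print(coco_ds.features)  # height, width, image, image_id, objects (COCO-style)
```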

 ### Data Instances

 Provide an JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.
 
 ### Data Fields

+ The fields for the YOLO config:
+
+ - `image`: the image
+ - `objects`: the annotations, which consist of:
+   - `bbox`: a list of bounding boxes for the image
+   - `label`: a list of labels for this image
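
A small illustrative sketch of accessing the YOLO-config fields listed above, assuming `yolo_ds` was loaded as in the previous snippet; whether `image` decodes to a PIL image and whether `label` holds integer class ids are assumptions rather than statements from this card.

```python
# Illustrative only: look at one annotated page from the (assumed) YOLO configuration.
example = yolo_ds[0]

image = example["image"]      # the page image (assumed to decode to a PIL.Image)
objects = example["objects"]  # the annotations: parallel lists of boxes and labels
print(len(objects["bbox"]), "bounding boxes on this page")
print(objects["label"])       # one label per bounding box
```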

+ The fields for the COCO config:
+
+ - `height`: height of the image
+ - `width`: width of the image
+ - `image`: the image
+ - `image_id`: id for the image
+ - `objects`: annotations in COCO format, consisting of a list of dictionaries with the following keys:
+   - `bbox`: bounding boxes for the image
+   - `category_id`: label for the image
+   - `image_id`: id for the image
+   - `iscrowd`: COCO iscrowd flag
+   - `segmentation`: COCO segmentation annotations (empty in this case, but kept for compatibility with other processing scripts)
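
Since the COCO config exists to ease hand-off to the `feature_extractor`s of Transformers object detection models, here is a hedged sketch of that hand-off. The checkpoint is only an example, `coco_ds` is assumed to be loaded as in the earlier snippet, and the exact target format a given processor expects (including whether it requires an `area` field, which is not listed above) should be checked against the Transformers documentation.

```python
from transformers import DetrFeatureExtractor

# Example checkpoint only -- any object detection model with a COCO-style feature extractor could be used.
feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")

example = coco_ds[0]

# DETR-style feature extractors expect a COCO-style target:
#   {"image_id": ..., "annotations": [<COCO annotation dicts>]}
# Some versions also require an "area" key, which this config does not list, so derive it from the bbox.
annotations = []
for ann in example["objects"]:
    ann = dict(ann)  # copy, so the original example is left untouched
    ann.setdefault("area", ann["bbox"][2] * ann["bbox"][3])  # area = width * height (COCO bbox is [x, y, w, h])
    annotations.append(ann)

target = {"image_id": example["image_id"], "annotations": annotations}
encoding = feature_extractor(images=example["image"], annotations=target, return_tensors="pt")
print(encoding["pixel_values"].shape)  # batched, resized and normalised pixel values
```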

 ### Data Splits

 Describe and name the splits in the dataset if there are more than one.