        sequence: float64
  splits:
  - name: train
    num_bytes: 4525075938
    num_examples: 50016
  download_size: 4302364671
  dataset_size: 4525075938
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- image-to-text
- object-detection
language:
- en
size_categories:
- 10K<n<100K
---

## SynthDoG detection 🐕

OCR annotations for [`synthdog-en`](https://huggingface.co/datasets/naver-clova-ix/synthdog-en) using [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR).

![image](./syndog-boxes.jpg)

This dataset contains annotations in the following formats:
* `2_coord`: [(xmin, ymin), (xmax, ymax)]
* `2_coord_norm`: [(xmin/w, ymin/h), (xmax/w, ymax/h)], the normalized version of `2_coord`, where (h, w) are the image height and width
* `4_coord`: [(x1, y1), (x2, y2), (x3, y3), (x4, y4)], all four corners of the rectangle enclosing the text span
* `4_coord_norm`: [(x1/w, y1/h), (x2/w, y2/h), (x3/w, y3/h), (x4/w, y4/h)], the normalized version of `4_coord`

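The normalization above can be sketched in a few lines (a hypothetical helper, not part of the dataset; it assumes the [(xmin, ymin), (xmax, ymax)] layout of `2_coord`):

```python
def normalize_box(box, width, height):
    """Scale a [(xmin, ymin), (xmax, ymax)] box into [0, 1] relative coordinates."""
    (xmin, ymin), (xmax, ymax) = box
    return [(xmin / width, ymin / height), (xmax / width, ymax / height)]

# a 100x200 (w x h) image with a box covering its lower-right quadrant
print(normalize_box([(50, 100), (100, 200)], 100, 200))  # → [(0.5, 0.5), (1.0, 1.0)]
```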
## Usage

```python
from datasets import load_dataset

ds = load_dataset("nnethercott/synthdog-en-detection", split="train[:101]")
```

To visualize the boxes:
```python
from PIL import ImageDraw

sample = ds[-1]
img, boxes = sample['image'], sample['2_coord']

draw = ImageDraw.Draw(img)
for item in boxes:
    # each item holds a [(xmin, ymin), (xmax, ymax)] box under the 'coord' key
    draw.rectangle([tuple(xy) for xy in item['coord']], outline='red')

img.save('sample.jpg')
```
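When only axis-aligned boxes are needed, a `4_coord` quadrilateral can be collapsed into a `2_coord`-style bounding box before drawing (a minimal sketch; the helper name is hypothetical):

```python
def quad_to_bbox(quad):
    """Collapse [(x1, y1), ..., (x4, y4)] corners into [(xmin, ymin), (xmax, ymax)]."""
    xs = [x for x, _ in quad]
    ys = [y for _, y in quad]
    return [(min(xs), min(ys)), (max(xs), max(ys))]

print(quad_to_bbox([(10, 10), (90, 12), (88, 40), (12, 38)]))  # → [(10, 10), (90, 40)]
```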

## How to Cite
**Always cite the original authors!** This dataset is just an annotated version of [Clova AI's](https://github.com/clovaai) synthdog dataset. If you find this work useful, please cite them:
```bibtex
@inproceedings{kim2022donut,
  title     = {OCR-Free Document Understanding Transformer},
  author    = {Kim, Geewook and Hong, Teakgyu and Yim, Moonbin and Nam, JeongYeon and Park, Jinyoung and Yim, Jinyeong and Hwang, Wonseok and Yun, Sangdoo and Han, Dongyoon and Park, Seunghyun},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2022}
}
```