---
license: cc-by-4.0
tags:
  - yolov5
  - yolo
  - digital humanities
  - object detection
  - computer-vision
  - document layout analysis
  - pytorch
datasets:
  - datacatalogue
---

# What's YOLOv5

YOLOv5 is an open-source object detection model released by [Ultralytics](https://ultralytics.com/) on [GitHub](https://github.com/ultralytics/yolov5).

# DataCatalogue (or DataCat)

[DataCatalogue](https://github.com/DataCatalogue) is a research project jointly led by Inria, the Bibliothèque nationale de France (National Library of France), and the Institut national d'histoire de l'art (National Institute of Art History).

It aims to restructure OCR-ed auction sale catalogs held in French national collections into TEI-XML, using machine learning.

# DataCat Yolov5

We trained a YOLOv5 model on custom data to perform document layout analysis on auction sale catalogs. 

The training set consists of **581 images**, annotated with **two classes**:
* *title* (585 instances)
* *entry*, i.e. a catalog entry (5,017 instances)

59 images were used for validation.

We reached:

| Precision | Recall | mAP@0.5 | mAP@0.5:0.95 |
|---|---|---|---|
| 0.99 | 0.99 | 0.98 | 0.75 |
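
The trained weights can be used with YOLOv5's standard PyTorch Hub interface. Below is a minimal inference sketch, assuming the checkpoint has been downloaded locally as `best.pt` and that a scanned catalog page is available as `catalog_page.jpg` (both filenames are assumptions, not confirmed by this card):

```python
import torch

# Load the custom DataCat weights through the YOLOv5 hub entry point.
# 'best.pt' is an assumed local path to the downloaded checkpoint.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')
model.conf = 0.25  # confidence threshold, adjust as needed

# Run detection on a scanned catalog page (assumed local file).
results = model('catalog_page.jpg')
results.print()

# Bounding boxes as a pandas DataFrame:
# columns are xmin, ymin, xmax, ymax, confidence, class, name ('title' or 'entry').
boxes = results.pandas().xyxy[0]
print(boxes)
```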

# Dataset

The dataset has not been released yet.

## Demo

An interactive demo is available on the following Hugging Face Space: https://huggingface.co/spaces/HugoSchtr/DataCat_Yolov5

<img alt='detection example' src="https://huggingface.co/HugoSchtr/yolov5_datacat/resolve/main/eval/detection_example.png" width=30% height=30%>

## What's next

The model performs well on our data and now needs to be incorporated into a dedicated pipeline for the research project.

We also plan to train a new model on a larger training set in the near future.
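
As a rough, hypothetical illustration of how the detections could feed such a pipeline (this is not the project's actual code), the boxes returned by the inference sketch above can be ordered by vertical position and each *entry* grouped under the closest preceding *title*:

```python
# Hypothetical post-processing sketch, continuing from the inference example above:
# sort detections top-to-bottom and attach each 'entry' box to the most recent
# 'title' box detected above it on the page.
boxes = results.pandas().xyxy[0].sort_values('ymin')

layout = []
for _, box in boxes.iterrows():
    if box['name'] == 'title':
        layout.append({'title': box.to_dict(), 'entries': []})
    elif box['name'] == 'entry' and layout:
        layout[-1]['entries'].append(box.to_dict())

print(layout)
```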