pretty_name: DocGenome
size_categories:
- 1K<n<10K
---

# DocGenome: An Open Large-scale Scientific Document Benchmark for Training and Testing Multi-modal Large Language Models

**Paper link**: [DocGenome](https://huggingface.co/papers/2406.11633)

We present DocGenome, a structured document dataset constructed by annotating 500K scientific documents from 153 disciplines in the arXiv open-access community, using our custom auto-labeling pipeline, DocParser. DocGenome features four characteristics:

- 1) Completeness: It is the first dataset to structure data from all modalities, including 13 layout attributes along with their LaTeX source code.
- 2) Logicality: It provides 6 logical relationships between the different entities within each scientific document.
- 3) Diversity: It covers a wide range of document-oriented tasks, including document classification, visual grounding, document layout detection, document transformation, open-ended single-page QA, and multi-page QA.
- 4) Correctness: It undergoes rigorous quality control checks conducted by a specialized team.

## Highlights

- **Data quality rating** for each structured document in DocGenome, [shown here](https://huggingface.co/datasets/U4R/DocGenome/blob/main/tire_classification_train.json).
- **Tutorials** on how to use the [DocGenome dataset](https://github.com/UniModal4Reasoning/DocGenome/blob/main/tutorials/tutorial.ipynb).
- **TestSet** downloads from [Hugging Face](https://huggingface.co/datasets/U4R/DocGenome-Testset-DocQA/tree/main). If you want to evaluate your model on the test set, please refer to [Evaluation](https://github.com/UniModal4Reasoning/DocGenome/blob/main/docs/Evaluation_README.md). A minimal download sketch follows this list.
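
As a minimal sketch (assuming the `huggingface_hub` Python client; the repository ID is taken from the link above, and the exact file layout inside the repository may differ), the QA test set can be fetched like this:

```python
# Minimal sketch: fetch the DocGenome QA test set from the Hugging Face Hub.
# The repo ID comes from the link above; file names inside it may differ.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="U4R/DocGenome-Testset-DocQA",
    repo_type="dataset",  # this is a dataset repository, not a model
)
print("Test set downloaded to:", local_dir)
```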

## DocGenome Benchmark Introduction

| Datasets | \# Discipline | \# Category of Units | \# Pages in Train-set | \# Pages in Test-set | \# Task | \# Used Metric | Publication | Entity Relations |
|------------------|---------------|----------------------|-----------------------|----------------------|---------|----------------|-------------|------------------|
| DocVQA | - | N/A | 11K | 1K | 1 | 2 | 1960-2000 | ❎ |
| DocLayNet | - | 11 | 80K | 8K | 1 | 1 | - | ❎ |
| DocBank | - | 13 | 0.45M | **50K** | 3 | 1 | 2014-2018 | ❎ |
| PubLayNet | - | 5 | 0.34M | 12K | 1 | 1 | - | ❎ |
| VRDU | - | 10 | 7K | 3K | 3 | 1 | - | ❎ |
| DUDE | - | N/A | 20K | 6K | 3 | 3 | 1860-2022 | ❎ |
| D^4LA | - | **27** | 8K | 2K | 1 | 3 | - | ❎ |
| Fox Benchmark | - | 5 | N/A (no train-set) | 0.2K | 3 | 5 | - | ❎ |
| ArXivCap | 32 | N/A | 6.4M* | N/A | 4 | 3 | - | ❎ |
| DocGenome (ours) | **153** | 13 | **6.8M** | 9K | **7** | **7** | 2007-2022 | ✅ |

------------------------

### Definition of relationships between component units

DocGenome contains 4 hierarchy-level relation types and 2 citation relation types, as shown in the following table:

| **Name** | **Description** | **Example** |
|----------|-----------------|-------------|
| Identical | Two blocks share the same source code. | Cross-column text; cross-page text. |
| Title adjacent | The two titles are adjacent. | (\section\{introduction\}, \section\{method\}) |
| Subordinate | One block is a subclass of another block. | (\section\{introduction\}, paragraph within Introduction) |
| Non-title adjacent | The two text or equation blocks are adjacent. | (Paragraph 1, Paragraph 2) |
| Explicitly-referred | One block refers to another block via footnote, reference, etc. | (As shown in \ref\{Fig: 5\} ..., Figure 5) |
| Implicitly-referred | The caption block refers to the corresponding float environment. | (Table Caption 1, Table 1) |
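
As an illustration only (the field names below are hypothetical and not the official annotation schema, which is documented in the tutorial notebook linked above), these six relation types can be thought of as typed, directed edges between component units:

```python
# Hypothetical sketch: representing DocGenome-style relations as typed edges.
# The relation names mirror the table above; the data layout here is NOT the
# official annotation schema -- see the tutorial notebook for that.
from dataclasses import dataclass

RELATION_TYPES = {
    "identical",            # two blocks share the same source code
    "title_adjacent",       # two section titles are adjacent
    "subordinate",          # one block is a subclass of another
    "non_title_adjacent",   # two text/equation blocks are adjacent
    "explicitly_referred",  # e.g. "As shown in \\ref{Fig: 5}" -> Figure 5
    "implicitly_referred",  # e.g. a caption block -> its float environment
}

@dataclass
class Relation:
    source_unit: str   # identifier of the source component unit
    target_unit: str   # identifier of the target component unit
    relation: str      # one of RELATION_TYPES

    def __post_init__(self):
        if self.relation not in RELATION_TYPES:
            raise ValueError(f"unknown relation type: {self.relation}")

# Example: a caption implicitly refers to its table.
edge = Relation("caption_1", "table_1", "implicitly_referred")
```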

### Attribute of component units

DocGenome has 13 attributes of component units, which can be categorized into two classes:
- **1) Fixed-form units**, including Text, Title, Abstract, etc., which are characterized by sequential reading and hierarchical relationships readily discernible from the list obtained in Stage-two of the designed DocParser.
- **2) Floating-form units**, including Table, Figure, etc., which establish directional references to fixed-form units through commands such as `\ref` and `\label`.

| **Index** | **Category** | **Notes** |
|-----------|--------------|------------------------------------------|
| 0 | Algorithm | |
| 1 | Caption | Titles of Images, Tables, and Algorithms |
| 2 | Equation | |
| 3 | Figure | |
| 4 | Footnote | |
| 5 | List | |
| 7 | Table | |
| 8 | Text | |
| 9 | Text-EQ | Text block with inline equations |
| 10 | Title | Section titles |
| 12 | PaperTitle | |
| 13 | Code | |
| 14 | Abstract | |
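
For convenience, the index-to-category mapping above can be written as a small Python dictionary (transcribed directly from the table; indices 6 and 11 are likewise absent there, and the tutorial notebook remains the authoritative reference for the released annotations):

```python
# Index-to-category mapping, transcribed from the table above.
# Indices 6 and 11 are not listed in the table and are therefore omitted.
UNIT_CATEGORIES = {
    0: "Algorithm",
    1: "Caption",      # titles of images, tables, and algorithms
    2: "Equation",
    3: "Figure",
    4: "Footnote",
    5: "List",
    7: "Table",
    8: "Text",
    9: "Text-EQ",      # text block with inline equations
    10: "Title",       # section titles
    12: "PaperTitle",
    13: "Code",
    14: "Abstract",
}

def category_name(index: int) -> str:
    """Return the human-readable category for a unit index, or 'Unknown'."""
    return UNIT_CATEGORIES.get(index, "Unknown")
```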

## Citation

If you find our work useful in your research, please consider citing DocGenome:
```bibtex
@article{xia2024docgenome,
  title={DocGenome: An Open Large-scale Scientific Document Benchmark for Training and Testing Multi-modal Large Language Models},
  author={Xia, Renqiu and Mao, Song and Yan, Xiangchao and Zhou, Hongbin and Zhang, Bo and Peng, Haoyang and Pi, Jiahao and Fu, Daocheng and Wu, Wenjie and Ye, Hancheng and others},
  journal={arXiv preprint arXiv:2406.11633},
  year={2024}
}
```