cyk1337 committed
Commit aea82d9 · verified · 1 Parent(s): 4a895f6

Update README.md

Files changed (1):
  1. README.md +15 -7
README.md CHANGED
@@ -3,16 +3,24 @@
  # Doc / guide: https://huggingface.co/docs/hub/model-cards
  {}
  ---
- This is the official checkpoint for DualGPT, as featured in the paper [Dual Modalities of Text: Visual and Textual Generative Pre-training](https://arxiv.org/abs/2404.10710). For more details on how to use it, please visit the [GitHub page](https://github.com/ernie-research/pixelgpt).
+
+ <a href="https://2024.emnlp.org/" target="_blank"> <img alt="EMNLP 2024" src="https://img.shields.io/badge/Proceedings-EMNLP2024-red" /> </a>
+
+
+ This repository contains the official checkpoint for PixelGPT, as presented in the paper [Autoregressive Pre-Training on Pixels and Texts (EMNLP 2024)](https://arxiv.org/pdf/2404.10710). For detailed instructions on how to use the model, please visit our [GitHub page](https://github.com/ernie-research/pixelgpt/).
+
  ## Model Description
- [More Information Needed]
+ DualGPT is an autoregressive language model pre-trained on the dual modalities of pixels and text. By processing documents as visual data (pixels), the model learns to predict both the next token and the next image patch in a sequence, enabling it to handle visually complex tasks across modalities.
 
  ## Citation
  ```
- @article{chai2024dual,
-   title={Dual Modalities of Text: Visual and Textual Generative Pre-training},
-   author={Chai, Yekun and Liu, Qingyi and Xiao, Jingwu and Wang, Shuohuan and Sun, Yu and Wu, Hua},
-   journal={arXiv preprint arXiv:2404.10710},
-   year={2024}
+ @misc{chai2024autoregressivepretrainingpixelstexts,
+   title = {Autoregressive Pre-Training on Pixels and Texts},
+   author = {Chai, Yekun and Liu, Qingyi and Xiao, Jingwu and Wang, Shuohuan and Sun, Yu and Wu, Hua},
+   year = {2024},
+   eprint = {2404.10710},
+   archiveprefix = {arXiv},
+   primaryclass = {cs.CL},
+   url = {https://arxiv.org/abs/2404.10710},
  }
  ```
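
The updated card defers usage details to the GitHub repository. As a minimal, hedged starting point, the checkpoint files can be pulled from the Hub with `huggingface_hub`; the `repo_id` below is a placeholder, since the commit does not state this model card's Hub id.

```python
# Minimal sketch: fetch the checkpoint files locally before following the
# usage instructions at https://github.com/ernie-research/pixelgpt/.
# NOTE: repo_id is a placeholder/assumption, not given in this commit.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<org>/<this-model-card>",   # replace with the actual Hub repo id
    local_dir="./pixelgpt-checkpoint",
)
print("Checkpoint files downloaded to:", local_dir)
```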
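
The Model Description above frames pre-training as joint next-token and next-image-patch prediction over paired text and pixel views of a document. The PyTorch sketch below is illustrative only: the shared decoder, toy sizes, MSE patch loss, and equal loss weighting are assumptions, not the released DualGPT/PixelGPT implementation.

```python
# Illustrative sketch of a dual-modality autoregressive step: one shared decoder
# predicts the next text token on the token stream and the next image patch on
# the pixel stream. Module names, sizes, and losses are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model, patch_dim = 1000, 256, 16 * 16 * 3   # toy sizes (assumed)

token_embed = nn.Embedding(vocab_size, d_model)            # text-token embeddings
patch_embed = nn.Linear(patch_dim, d_model)                # flattened-patch embeddings
decoder = nn.TransformerEncoder(                           # stand-in for a causal decoder
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2
)
token_head = nn.Linear(d_model, vocab_size)                # next-token prediction head
patch_head = nn.Linear(d_model, patch_dim)                 # next-patch prediction head

# Toy batch: a token view and a rendered-pixel (patch) view of the same documents.
tokens = torch.randint(0, vocab_size, (2, 12))             # (batch, seq_len)
patches = torch.rand(2, 12, patch_dim)                     # (batch, seq_len, patch_dim)

# Causal mask: position t may only attend to positions <= t.
seq_len = tokens.size(1)
causal_mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)

# Text stream: the hidden state at position t predicts token t+1 (cross-entropy).
h_text = decoder(token_embed(tokens), mask=causal_mask)
text_loss = F.cross_entropy(
    token_head(h_text[:, :-1]).reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),
)

# Pixel stream: the hidden state at position t regresses patch t+1 (MSE).
h_patch = decoder(patch_embed(patches), mask=causal_mask)
patch_loss = F.mse_loss(patch_head(h_patch[:, :-1]), patches[:, 1:])

loss = text_loss + patch_loss   # equal weighting is an assumption
loss.backward()
print(f"text_loss={text_loss.item():.3f}  patch_loss={patch_loss.item():.3f}")
```

Sharing one decoder across both streams is what lets the same parameters learn from either modality; the linked repository is authoritative for the actual objective and training setup.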