Modalities: Image, Text
Formats: json
Libraries: Datasets, pandas
Commit ba8945d (verified) by oottyy · 1 parent: ec1a40d

Create README.md
#### Mind2Web training set for the paper: [Harnessing Webpage UIs for Text-Rich Visual Understanding]()

🌐 [Homepage](https://neulab.github.io/MultiUI/) | 🐍 [GitHub](https://github.com/neulab/multiui) | 📖 [arXiv]()

## Introduction
We introduce **MultiUI**, a dataset of 7.3 million samples drawn from 1 million websites, covering diverse multimodal tasks and UI layouts. Models trained on **MultiUI** not only excel at web UI tasks—achieving up to a 48% improvement on VisualWebBench and a 19.1% boost in action accuracy on the web agent dataset Mind2Web—but also generalize surprisingly well to non-web UI tasks and even to non-UI domains such as document understanding, OCR, and chart interpretation.

<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/65403d8781a8731a1c09a584/vk7yT4Y7ydBOHM6BojmlI.mp4"></video>

## Contact
* Junpeng Liu: jpliu@link.cuhk.edu.hk
* Xiang Yue: xyue2@andrew.cmu.edu

## Citation
If you find this work helpful, please cite our paper: