---
dataset_info:
  features:
  - name: image
    dtype: 'null'
  - name: image_url
    dtype: string
  - name: instruction
    sequence: string
  - name: bbox
    sequence:
      sequence: float64
  - name: point
    sequence:
      sequence: float64
  - name: type
    sequence: string
  splits:
  - name: train
    num_bytes: 59376321
    num_examples: 21988
  download_size: 8450810
  dataset_size: 59376321
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

[Github](https://github.com/showlab/ShowUI/tree/main) | [arXiv](https://arxiv.org/abs/2411.17465) | [HF Paper](https://huggingface.co/papers/2411.17465) | [Spaces](https://huggingface.co/spaces/showlab/ShowUI) | [Datasets](https://huggingface.co/datasets/showlab/ShowUI-desktop-8K) | [Quick Start](https://huggingface.co/showlab/ShowUI-2B)

**ShowUI-web** is a UI-grounding dataset focused on Web grounding, with screenshots and annotations originally sourced from [OmniAct](https://huggingface.co/datasets/Writer/omniact). We developed a parser and collected 22K screenshots, retaining only visually interactive elements, such as those tagged ‘Button’ or ‘Checkbox’, and removing static text.

If you find our work helpful, please consider citing our paper.

```
@misc{lin2024showui,
      title={ShowUI: One Vision-Language-Action Model for GUI Visual Agent},
      author={Kevin Qinghong Lin and Linjie Li and Difei Gao and Zhengyuan Yang and Shiwei Wu and Zechen Bai and Weixian Lei and Lijuan Wang and Mike Zheng Shou},
      year={2024},
      eprint={2411.17465},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.17465},
}
```
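Below is a minimal sketch of loading and inspecting the dataset with the 🤗 `datasets` library. The repo id `showlab/ShowUI-web` and the bbox/point coordinate conventions are assumptions not confirmed by this card; adjust them to the actual repository.

```python
from io import BytesIO

import requests
from PIL import Image
from datasets import load_dataset

# Assumed repo id (hypothetical); change if the dataset lives elsewhere.
ds = load_dataset("showlab/ShowUI-web", split="train")

example = ds[0]

# The `image` column is null; screenshots are fetched via `image_url`.
img = Image.open(BytesIO(requests.get(example["image_url"]).content))
print(img.size)

# Each row pairs one screenshot with parallel per-element annotation lists.
for instr, bbox, point, etype in zip(
    example["instruction"], example["bbox"], example["point"], example["type"]
):
    # Whether bbox/point are in pixels or normalized [0, 1] is an assumption;
    # inspect a few rows to confirm before using them.
    print(etype, instr, bbox, point)
```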