
juliet shen

juliets

AI & ML interests

None yet

Recent Activity

liked a Space 12 days ago
UNESCO/nllb
liked a model about 2 months ago
meta-llama/Llama-Guard-3-1B

Organizations

juliets's activity

Reacted to singhsidhukuldeep's post with 🔥 27 days ago
Good folks from @Microsoft have released an exciting breakthrough in GUI automation!

OmniParser – a game-changing approach for pure vision-based GUI agents that works across multiple platforms and applications.

Key technical innovations:
- Custom-trained interactable icon detection model using 67k screenshots from popular websites
- Specialized BLIP-v2 model fine-tuned on 7k icon-description pairs for extracting functional semantics
- Novel combination of icon detection, OCR, and semantic understanding to create structured UI representations
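The combination described above can be sketched roughly as follows. This is a hypothetical illustration, not OmniParser's actual code: `detect_icons`, `run_ocr`, and `caption_icon` are stand-ins for the fine-tuned icon detector, an OCR pass, and the BLIP-v2 captioner, and the `UIElement` structure is an assumption about what a "structured UI representation" might look like.

```python
from dataclasses import dataclass

@dataclass
class UIElement:
    bbox: tuple   # (x, y, w, h) in screenshot pixels
    kind: str     # "icon" or "text"
    content: str  # OCR text, or a functional icon description

def detect_icons(screenshot):
    # Stand-in for the custom-trained interactable-icon detector.
    return [(10, 10, 32, 32)]

def run_ocr(screenshot):
    # Stand-in for an OCR pass returning (bbox, text) pairs.
    return [((50, 10, 120, 20), "Submit")]

def caption_icon(screenshot, bbox):
    # Stand-in for the fine-tuned BLIP-v2 icon-description model.
    return "settings gear icon"

def parse_screenshot(screenshot):
    # Merge detected icons (with semantic captions) and OCR'd text
    # into one flat, structured list of UI elements.
    elements = [UIElement(b, "icon", caption_icon(screenshot, b))
                for b in detect_icons(screenshot)]
    elements += [UIElement(b, "text", t) for b, t in run_ocr(screenshot)]
    # Order top-to-bottom, left-to-right so the list reads like the screen.
    elements.sort(key=lambda e: (e.bbox[1], e.bbox[0]))
    return elements

for e in parse_screenshot(screenshot=None):
    print(e.kind, e.bbox, e.content)
```

The point of this shape is that a downstream agent (e.g. GPT-4V) receives a compact, text-only description of what is on screen and where, instead of raw pixels or HTML.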

The results are impressive:
- Outperforms GPT-4V baseline by significant margins on the ScreenSpot benchmark
- Achieves 73% accuracy on Mind2Web without requiring HTML data
- Demonstrates a 57.7% success rate on AITW mobile tasks

What makes OmniParser special is its ability to work across platforms (mobile, desktop, web) using only screenshot data – no HTML or view hierarchy needed. This opens up exciting possibilities for building truly universal GUI automation tools.

The team has open-sourced both the interactable region detection dataset and icon description dataset to accelerate research in this space.

Kudos to the Microsoft Research team for pushing the boundaries of what's possible with pure vision-based GUI understanding!

What are your thoughts on vision-based GUI automation?
updated a Space 4 months ago