INTRA: Interaction Relationship-aware Weakly Supervised Affordance Grounding
Abstract
Affordance denotes the potential interactions inherent in objects. Perceiving affordance enables intelligent agents to navigate and interact with new environments efficiently. Weakly supervised affordance grounding teaches agents the concept of affordance without costly pixel-level annotations, using exocentric images instead. Although recent advances in weakly supervised affordance grounding have yielded promising results, challenges remain, including the need for paired exocentric and egocentric image datasets and the complexity of grounding diverse affordances for a single object. To address these, we propose INTeraction Relationship-aware weakly supervised Affordance grounding (INTRA). Unlike prior arts, INTRA recasts the problem as representation learning, identifying the unique features of interactions through contrastive learning on exocentric images only, eliminating the need for paired datasets. Moreover, we leverage vision-language model embeddings to perform affordance grounding flexibly with any text, designing text-conditioned affordance map generation that reflects interaction relationships during contrastive learning, and enhancing robustness with our text synonym augmentation. Our method outperforms prior arts on diverse datasets such as AGD20K, IIT-AFF, CAD, and UMD. Experimental results further demonstrate that our method has remarkable domain scalability to synthesized images and illustrations, and can perform affordance grounding for novel interactions and objects.
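To make the idea of text-conditioned affordance map generation concrete, here is a minimal illustrative sketch (not the paper's actual pipeline): given per-patch image features from a vision-language model and the embedding of an interaction text such as "hold", an affordance heatmap can be read off as the normalized cosine similarity between each patch feature and the text embedding. The function name, feature dimensions, and random stand-in embeddings below are all hypothetical.

```python
import numpy as np

def affordance_map(patch_feats: np.ndarray, text_emb: np.ndarray, grid: int) -> np.ndarray:
    """Cosine similarity between each patch feature and a text embedding,
    min-max normalized into a [0, 1] heatmap of shape (grid, grid)."""
    p = patch_feats / np.linalg.norm(patch_feats, axis=-1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb)
    sim = p @ t                                            # (grid*grid,)
    sim = (sim - sim.min()) / (sim.max() - sim.min() + 1e-8)
    return sim.reshape(grid, grid)

# Toy example: random arrays stand in for VLM patch features and a text embedding.
rng = np.random.default_rng(0)
feats = rng.standard_normal((14 * 14, 512))  # hypothetical 14x14 patch grid, 512-d features
text = rng.standard_normal(512)              # hypothetical embedding of "hold"
heatmap = affordance_map(feats, text, grid=14)
print(heatmap.shape)
```

In practice the map would be conditioned on real VLM embeddings, which also makes it natural to query with free-form text or synonyms ("grasp", "grip") rather than a fixed label set.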
Community
We present INTRA (INTeraction Relationship-aware weakly supervised Affordance grounding), a novel framework for affordance grounding that trains without egocentric images, grounds different parts for different interactions on the same object, and accepts free-form text input. For animated explanations and more results, please visit our project page! https://jeeit17.github.io/INTRA
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Learning Precise Affordances from Egocentric Videos for Robotic Manipulation (2024)
- Learning 2D Invariant Affordance Knowledge for 3D Affordance Grounding (2024)
- Exploring Conditional Multi-Modal Prompts for Zero-shot HOI Detection (2024)
- FrozenSeg: Harmonizing Frozen Foundation Models for Open-Vocabulary Segmentation (2024)
- DISCO: Embodied Navigation and Interaction via Differentiable Scene Semantics and Dual-level Control (2024)