Orient Anything: Learning Robust Object Orientation Estimation from Rendering 3D Models
Abstract
Orientation is a key attribute of objects, crucial for understanding their spatial pose and arrangement in images. However, practical solutions for accurately estimating orientation from a single image remain underexplored. In this work, we introduce Orient Anything, the first expert and foundational model designed to estimate object orientation in a single, free-view image. Because labeled data are scarce, we propose extracting knowledge from the 3D world: by developing a pipeline that annotates the front face of 3D objects and renders images from random views, we collect 2M images with precise orientation annotations. To fully leverage this dataset, we design a robust training objective that models 3D orientation as probability distributions over three angles and predicts object orientation by fitting these distributions. In addition, we employ several strategies to improve synthetic-to-real transfer. Our model achieves state-of-the-art orientation estimation accuracy on both rendered and real images and exhibits impressive zero-shot ability across diverse scenarios. More importantly, it enhances many downstream applications, such as comprehension and generation of complex spatial concepts and 3D object pose adjustment.
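The abstract only outlines the training objective, so the following is a minimal sketch of one plausible reading, assuming PyTorch: each of the three orientation angles (azimuth, polar, in-plane rotation) is discretized into bins, the ground-truth angle is smoothed into a Gaussian target distribution, and the model's predicted distribution is fitted to it. The bin counts, angle ranges, Gaussian width, and KL-divergence loss here are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (not the authors' code) of a distribution-fitting
# orientation objective. All hyperparameters are assumed for illustration.
import torch
import torch.nn.functional as F

def gaussian_target(angle_deg, num_bins, span, sigma=5.0, circular=True):
    """Smooth a ground-truth angle (degrees) into a distribution over bins."""
    centers = torch.arange(num_bins, device=angle_deg.device) * (span / num_bins)
    diff = (angle_deg.unsqueeze(-1) - centers).abs()
    if circular:  # wrap-around distance for periodic angles
        diff = torch.minimum(diff, span - diff)
    target = torch.exp(-0.5 * (diff / sigma) ** 2)
    return target / target.sum(dim=-1, keepdim=True)

def orientation_loss(logits, gt_angles):
    """Fit predicted azimuth/polar/rotation distributions to Gaussian targets.

    logits: dict of (B, num_bins) tensors per angle head.
    gt_angles: dict of (B,) ground-truth angles in degrees.
    """
    specs = [  # (head name, span in degrees, periodic?) -- assumed ranges
        ("azimuth", 360.0, True),
        ("polar", 180.0, False),
        ("rotation", 360.0, True),
    ]
    loss = 0.0
    for name, span, circular in specs:
        target = gaussian_target(gt_angles[name], logits[name].shape[-1],
                                 span, circular=circular)
        log_probs = F.log_softmax(logits[name], dim=-1)
        loss = loss + F.kl_div(log_probs, target, reduction="batchmean")
    return loss
```

One motivation for fitting distributions rather than regressing a single angle is that it tolerates the front-face ambiguity of near-symmetric objects and the wrap-around of periodic angles, which make direct regression targets ill-posed.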
Community
A robust foundational model for estimating the 3D orientation of objects in images!
Project Page: https://orient-anything.github.io/
Code: https://github.com/SpatialVision/Orient-Anything
Demo: https://huggingface.co/spaces/Viglong/Orient-Anything
Thank you for sharing this nice work! I couldn't find the supplementary material. Could you please provide a link?
We will release our Ori-Bench on our GitHub page later (https://github.com/SpatialVision/Orient-Anything); please stay tuned.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Diorama: Unleashing Zero-shot Single-view 3D Scene Modeling (2024)
- Generative Zoo (2024)
- CameraHMR: Aligning People with Perspective (2024)
- GS2Pose: Two-stage 6D Object Pose Estimation Guided by Gaussian Splatting (2024)
- EasyHOI: Unleashing the Power of Large Models for Reconstructing Hand-Object Interactions in the Wild (2024)
- GAPartManip: A Large-scale Part-centric Dataset for Material-Agnostic Articulated Object Manipulation (2024)
- GSGTrack: Gaussian Splatting-Guided Object Pose Tracking from RGB Videos (2024)