
Depth-Sapiens-2B-Bfloat16

Model Details

Sapiens is a family of vision transformers pretrained on 300 million human images at 1024 x 1024 resolution. When finetuned for human-centric vision tasks, the pretrained models exhibit remarkable generalization to in-the-wild data, even when labeled data is scarce or entirely synthetic. Sapiens-2B natively supports 1K high-resolution inference.

  • Developed by: Meta
  • Model type: Vision Transformer
  • License: Creative Commons Attribution-NonCommercial 4.0
  • Task: depth
  • Format: bfloat16
  • File: sapiens_2b_render_people_epoch_25_bfloat16.pt2

Model Card

  • Image Size: 1024 x 768 (H x W)
  • Num Parameters: 2.163 B
  • FLOPs: 8.709 TFLOPs
  • Patch Size: 16 x 16
  • Embedding Dimensions: 1920
  • Num Layers: 48
  • Num Heads: 32
  • Feedforward Channels: 7680
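
The listed dimensions can be cross-checked with standard ViT parameter accounting. The sketch below is an approximation (assumed formulas; it ignores the patch embedding, biases, layer norms, and the decode head), but it lands close to the listed 2.163 B parameters:

```python
# Rough sanity check of the spec list above using standard ViT accounting.
# Formulas are assumptions, not the official parameter breakdown.
embed_dim = 1920
num_layers = 48
ffn_channels = 7680          # = 4 * embed_dim
patch = 16
height, width = 1024, 768

# Number of patch tokens the model processes per image.
tokens = (height // patch) * (width // patch)

# Per-layer parameters: Q/K/V/output projections plus the two MLP projections.
attn_params = 4 * embed_dim ** 2
mlp_params = 2 * embed_dim * ffn_channels
approx_params = num_layers * (attn_params + mlp_params)

print(tokens)                # 3072 patch tokens
print(approx_params / 1e9)   # ~2.12 B, close to the listed 2.163 B
```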

Uses

The Sapiens-2B depth model can be used to estimate relative depth in in-the-wild human images.
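
A minimal inference sketch follows. The normalization constants and the exact input layout are assumptions (check the official Sapiens repository for the canonical preprocessing); the `.pt2` file is an exported-program archive, loadable with `torch.export.load`:

```python
import torch
import torch.nn.functional as F

# ImageNet-style normalization constants, assumed here for illustration;
# the official Sapiens preprocessing may differ.
MEAN = torch.tensor([123.675, 116.28, 103.53]).view(3, 1, 1)
STD = torch.tensor([58.395, 57.12, 57.375]).view(3, 1, 1)

def preprocess(image_hwc_uint8: torch.Tensor) -> torch.Tensor:
    """Normalize an H x W x 3 uint8 image and resize to the model's
    1024 x 768 (H x W) input, returning a 1 x 3 x 1024 x 768 tensor."""
    x = image_hwc_uint8.permute(2, 0, 1).float()   # -> 3 x H x W
    x = (x - MEAN) / STD
    x = F.interpolate(x.unsqueeze(0), size=(1024, 768),
                      mode="bilinear", align_corners=False)
    return x

# Loading and running the exported checkpoint (not executed here;
# requires downloading the ~4 GB .pt2 file from this repository):
# model = torch.export.load(
#     "sapiens_2b_render_people_epoch_25_bfloat16.pt2").module()
# with torch.inference_mode():
#     depth = model(preprocess(img).to(torch.bfloat16))  # relative depth map
```

The output is a single-channel relative depth map; it is typically min-max normalized per image for visualization, since the model predicts relative rather than metric depth.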
