---
datasets:
  - Objaverse
tags:
  - 3d
extra_gated_fields:
  Name: text
  Email: text
  Country: text
  Organization or Affiliation: text
  I ALLOW Stability AI to email me about new model releases: checkbox
license: other
---

# Stable Zero123

## Model Description

Stable Zero123 is a model for view-conditioned image generation based on Zero123.

With enhanced data rendering and model-conditioning strategies, Stable Zero123 demonstrates improved performance compared to the original Zero123 and its subsequent iteration, Zero123-XL.
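
For context, "view-conditioned" here means the diffusion model is given a relative camera pose between the input view and the requested view, alongside the input-image embedding. The sketch below illustrates the pose encoding commonly used by Zero123-style models; the exact layout inside Stable Zero123 is an assumption for illustration, not taken from this card.

```python
import math

def camera_condition(delta_elevation_deg: float,
                     delta_azimuth_deg: float,
                     delta_radius: float) -> list[float]:
    """Relative-pose encoding in the Zero123 style:
    [d_elevation, sin(d_azimuth), cos(d_azimuth), d_radius].
    Assumed layout; check the Stable Zero123 code for the exact convention."""
    d_elev = math.radians(delta_elevation_deg)
    d_azim = math.radians(delta_azimuth_deg)
    return [d_elev, math.sin(d_azim), math.cos(d_azim), delta_radius]

# Example: request a view 30 degrees to the right of the input image,
# at the same elevation and camera distance.
print(camera_condition(0.0, 30.0, 0.0))
```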

## Usage

Using Score Distillation Sampling (SDS) with the Stable Zero123 model, we can produce high-quality 3D models from any input image. The process also extends to text-to-3D generation: first generate a single image with SDXL, then run SDS with Stable Zero123 to produce the 3D object.
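
As background, SDS optimizes a 3D representation by repeatedly rendering it, noising the render, and nudging the parameters so the render agrees with what the diffusion model denoises toward. The snippet below is a schematic, self-contained sketch of one SDS update with toy placeholders (`rendering`, `predict_noise`); it is not threestudio code and omits the timestep weighting and camera-conditioning details.

```python
import torch

torch.manual_seed(0)

# Placeholder "3D scene": a learnable image standing in for a differentiable render.
rendering = torch.rand(1, 3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([rendering], lr=1e-2)

def predict_noise(noisy_image, t):
    """Stand-in for the view-conditioned diffusion model's noise prediction."""
    return torch.tanh(noisy_image) * 0.1

for step in range(10):
    t = torch.randint(20, 980, (1,))                             # random diffusion timestep
    noise = torch.randn_like(rendering)
    alpha_bar = torch.cos(t.float() / 1000 * torch.pi / 2) ** 2  # toy noise schedule
    noisy = alpha_bar.sqrt() * rendering + (1 - alpha_bar).sqrt() * noise

    with torch.no_grad():
        eps_hat = predict_noise(noisy, t)

    # The SDS gradient w.r.t. the rendering is (eps_hat - noise); the surrogate loss
    # below reproduces exactly that gradient through autograd.
    grad = eps_hat - noise
    loss = (grad.detach() * rendering).sum()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```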

To use Stable Zero123 for 3D object mesh generation in threestudio, please follow the installation instructions and run:

```bash
python launch.py --config configs/stable_zero123.yaml --train --gpu 0 data.image_path=./load/images/dog1_rgba.png
```
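
For the text-to-3D path described above, one possible front end is to generate the input image with SDXL and strip its background before handing it to threestudio. This is a sketch under assumptions: the SDXL checkpoint id, the use of rembg for background removal, and the output filename are illustrative choices, not prescribed by this card.

```python
# Sketch: produce an RGBA input image for Stable Zero123 from a text prompt.
# Assumes diffusers, torch, and rembg are installed; model id and paths are illustrative.
import torch
from diffusers import StableDiffusionXLPipeline
from rembg import remove

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of a corgi toy, studio lighting, plain background").images[0]

# Remove the background so the object sits on transparency, as in the example input above.
rgba = remove(image)
rgba.save("./load/images/corgi_rgba.png")
```

The saved image can then be passed to the same `launch.py` command shown above via `data.image_path=./load/images/corgi_rgba.png`.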

## Model Details

### Training Dataset

We use renders from the Objaverse dataset, produced with our enhanced rendering method.

### Training Infrastructure

- **Hardware:** Stable Zero123 was trained on the Stability AI cluster, on a single node with 8 A100 80 GB GPUs.
- **Code Base:** We use our modified version of the original zero123 repository.

## Misuse, Malicious Use, and Out-of-Scope Use

The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.