Tags: Text-to-Image · Diffusers · Safetensors · PixArtAlphaPipeline · Pixart-α
Commit a810eba (parent: b8b78cf), committed by yujincheng08

Update README.md

Files changed (1):
1. README.md (+6, −6)
README.md CHANGED
@@ -10,11 +10,10 @@ tags:
 </p>
 
 <div style="display:flex;justify-content: center">
- <a href="https://huggingface.co/spaces/PixArt-alpha/PixArt-alpha"><img src="https://img.shields.io/static/v1?label=Demo&message=Huggingface%20Gradio&color=yellow"></a> &ensp;
- <a href="https://pixart-alpha.github.io/"><img src="https://img.shields.io/static/v1?label=Project&message=Github&color=blue"></a> &ensp;
- <a href="https://arxiv.org/abs/2310.00426"><img src="https://img.shields.io/badge/arXiv-2310.00426-b31b1b.svg?style=flat-square"></a> &ensp;
- <a href="https://colab.research.google.com/drive/1jZ5UZXk7tcpTfVwnX33dDuefNMcnW9ME?usp=sharing"><img src="https://img.shields.io/badge/Google-Free%20Colab-orange.svg?logo=google"></a> &ensp;
-
+ <a href="https://huggingface.co/spaces/PixArt-alpha/PixArt-alpha"><img src="https://img.shields.io/static/v1?label=Demo&message=Huggingface&color=yellow"></a> &ensp;
+ <a href="https://pixart-alpha.github.io/"><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Github&color=blue&logo=github"></a> &ensp;
+ <a href="https://arxiv.org/abs/2310.00426"><img src="https://img.shields.io/static/v1?label=Paper&message=Arxiv&color=red&logo=arxiv"></a> &ensp;
+ <a href="https://colab.research.google.com/drive/1jZ5UZXk7tcpTfVwnX33dDuefNMcnW9ME?usp=sharing"><img src="https://img.shields.io/static/v1?label=Free%20Trial&message=Google%20Colab&logo=google&color=orange"></a> &ensp;
 </div>
 
 # 🐱 Pixart-α Model Card
@@ -34,7 +33,8 @@ Source code is available at https://github.com/PixArt-alpha/PixArt-alpha.
 - **Model type:** Diffusion-Transformer-based text-to-image generative model
 - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md)
 - **Model Description:** This is a model that can be used to generate and modify images based on text prompts.
-It is a [Transformer Latent Diffusion Model](https://arxiv.org/abs/2310.00426) that uses one fixed, pretrained text encoders ([T5](https://huggingface.co/DeepFloyd/t5-v1_1-xxl))
+It is a [Transformer Latent Diffusion Model](https://arxiv.org/abs/2310.00426) that uses one fixed, pretrained text encoders ([T5](
+https://huggingface.co/DeepFloyd/t5-v1_1-xxl))
 and one latent feature encoder ([VAE](https://arxiv.org/abs/2112.10752)).
 - **Resources for more information:** Check out our [GitHub Repository](https://github.com/PixArt-alpha/PixArt-alpha) and the [Pixart-α report on arXiv](https://arxiv.org/abs/2310.00426).
 
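Since the card tags this checkpoint for Diffusers with the PixArtAlphaPipeline, a minimal usage sketch may help readers of the model card. The repo id `PixArt-alpha/PixArt-XL-2-1024-MS`, the prompt, and the output filename are assumptions for illustration, not taken from this commit; substitute the id of the checkpoint this card actually describes.

```python
# Minimal sketch: loading a Pixart-α checkpoint with Diffusers' PixArtAlphaPipeline.
# The repo id below is an assumption; swap in the id of this model card if it differs.
import torch
from diffusers import PixArtAlphaPipeline

pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",  # assumed checkpoint id
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The fixed T5 text encoder conditions the Diffusion Transformer;
# the VAE decodes the resulting latents into pixels.
prompt = "An astronaut riding a green horse"
image = pipe(prompt=prompt).images[0]
image.save("astronaut.png")
```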