Update README.md
README.md
CHANGED
@@ -40,6 +40,8 @@ AltDiffusion支持线上演示,点击 [这里](https://huggingface.co/spaces/B
We used [AltCLIP](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP/README.md) and trained a bilingual Diffusion model based on [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion), with training data from the [WuDao dataset](https://data.baai.ac.cn/details/WuDaoCorporaText) and [LAION](https://huggingface.co/datasets/laion/laion2B-en).

+Our model aligns Chinese and English well and is the strongest open-source version available today; it retains most of the original Stable Diffusion's capabilities and in some cases even outperforms the original model.
+
The AltDiffusion model is backed by a bilingual CLIP model named AltCLIP, which is also accessible in FlagAI. You can read [this tutorial](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP/README.md) for more information.

AltDiffusion now supports an online demo; try it out by clicking [here](https://huggingface.co/spaces/BAAI/FlagStudio)!
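For readers who want to try the bilingual model locally rather than through the online demo, here is a minimal text-to-image sketch. It uses the Hugging Face diffusers library rather than the FlagAI loader covered in the linked tutorial, and both the checkpoint id `BAAI/AltDiffusion` and the `AltDiffusionPipeline` class are assumptions on my part (they depend on your diffusers version), so treat this as a rough starting point rather than the project's official usage.

```python
# Minimal sketch, assuming the checkpoint id "BAAI/AltDiffusion" on the
# Hugging Face hub and a diffusers version that still ships AltDiffusionPipeline.
import torch
from diffusers import AltDiffusionPipeline

# Load the bilingual pipeline (AltCLIP text encoder + Stable Diffusion UNet/VAE)
# in half precision and move it to the GPU.
pipe = AltDiffusionPipeline.from_pretrained(
    "BAAI/AltDiffusion", torch_dtype=torch.float16
).to("cuda")

# The same pipeline accepts English and Chinese prompts.
prompts = [
    "a lighthouse on a cliff at sunset, oil painting",
    "日落时分悬崖上的灯塔,油画",  # the same prompt in Chinese
]
for i, prompt in enumerate(prompts):
    image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
    image.save(f"altdiffusion_{i}.png")
```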