Update README.md
#212
README.md
CHANGED
@@ -128,17 +128,7 @@ Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/bl
 which consists of images that are primarily limited to English descriptions.
 Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
 This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
-ability of the model to generate content with non-English prompts is significantly worse than with English-language
-
-### Safety Module
-
-The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
-This checker works by checking model outputs against known hard-coded NSFW concepts.
-The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
-Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
-The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
-
-
+ability of the model to generate content with non-English prompts is significantly worse than with English-language
 ## Training
 
 **Training Data**
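The removed "Safety Module" text describes a concept-based filter: after generation, the image is embedded, and the embedding is scored against a set of hidden NSFW concept embeddings, each with a hand-engineered threshold. Below is a minimal sketch of that general mechanism, not the actual Diffusers `safety_checker.py` code; the function name, the toy 3-dimensional vectors, and the thresholds are all illustrative stand-ins for real CLIP embeddings and the private concept weights.

```python
import numpy as np

def check_nsfw(image_embedding, concept_embeddings, concept_thresholds):
    """Illustrative concept-based safety filter (not the real Diffusers checker).

    Scores the image embedding against each concept embedding by cosine
    similarity; the image is flagged if any score exceeds that concept's
    hand-tuned threshold.
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    scores = [cosine(image_embedding, c) for c in concept_embeddings]
    flagged = any(s > t for s, t in zip(scores, concept_thresholds))
    return flagged, scores

# Toy unit vectors standing in for concept embeddings (real ones are
# high-dimensional CLIP vectors and are intentionally kept hidden).
concepts = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
thresholds = [0.7, 0.7]  # one hand-engineered threshold per concept

# An embedding closely aligned with concept 0 is flagged.
print(check_nsfw(np.array([0.9, 0.1, 0.0]), concepts, thresholds)[0])  # True

# An embedding orthogonal to every concept passes.
print(check_nsfw(np.array([0.0, 0.0, 1.0]), concepts, thresholds)[0])  # False
```

In the real pipeline the comparison happens after image generation, so the filter costs one extra embedding pass per image; the per-concept thresholds let individual concepts be made stricter or looser independently.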