mustafa committed on
Commit
9e91066
1 Parent(s): 39dbb07

removed images

README.md CHANGED
@@ -1,3 +1,117 @@
  ---
- license: apache-2.0
+ license: creativeml-openrail-m
+ thumbnail: "https://huggingface.co/cag/anything-v3-1/resolve/main/example-images/thumbnail.png"
+ language:
+ - en
+ pipeline_tag: text-to-image
+ tags:
+ - stable-diffusion
+ - stable-diffusion-diffusers
+ inference: true
+ widget:
+ - text: >-
+     masterpiece, best quality, 1girl, brown hair, green eyes, colorful, autumn,
+     cumulonimbus clouds, lighting, blue sky, falling leaves, garden
+   example_title: example 1girl
+ - text: >-
+     masterpiece, best quality, 1boy, medium hair, blonde hair, blue eyes,
+     bishounen, colorful, autumn, cumulonimbus clouds, lighting, blue sky,
+     falling leaves, garden
+   example_title: example 1boy
+ datasets:
+ - cag/anything-v3-1-dataset
+ library_name: diffusers
  ---
+
+ # Anything V3.1
+
+ ![Anime Girl](https://huggingface.co/cag/anything-v3-1/resolve/main/example-images/thumbnail.png)
+
+ Anything V3.1 is a third-party continuation of the latent diffusion model Anything V3.0. It is claimed to be an improved version of Anything V3.0, with a fixed VAE and a fixed CLIP position-id key; the CLIP reference was taken from Stable Diffusion V1.5. The VAE was swapped using Kohya's merge-vae script, and the CLIP was fixed using Arena's stable-diffusion-model-toolkit webui extension.
+
+ Anything V3.2 is intended to be a resumed training run from Anything V3.1. The current model has been fine-tuned with a learning rate of 2.0e-6 for 50 epochs at a batch size of 4, on datasets collected from many sources, about a quarter of which are synthetic. The dataset was preprocessed with the Aspect Ratio Bucketing Tool so that it could be converted to latents and trained at non-square resolutions. This model is meant as a test of how the CLIP fix affects training. Like other anime-style Stable Diffusion models, it also supports Danbooru tags for generating images.
+
+ e.g. **_1girl, white hair, golden eyes, beautiful eyes, detail, flower meadow, cumulonimbus clouds, lighting, detailed sky, garden_**
+
+ - Use it with [AUTOMATIC1111's Stable Diffusion WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui); see [How-to-Use](#how-to-use)
+ - Use it with 🧨 [`diffusers`](#🧨diffusers)
+
+
+ # Model Details
+
+ - **Currently maintained by:** Cagliostro Research Lab
+ - **Model type:** Diffusion-based text-to-image generation model
+ - **Model Description:** This model can be used to generate and modify anime-themed images based on text prompts.
+ - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
+ - **Finetuned from model:** Anything V3.1
+
+ ## How-to-Use
+ - Download `Anything V3.1` [here](https://huggingface.co/cag/anything-v3-1/resolve/main/anything-v3-1.safetensors) or `Anything V3.2` [here](https://huggingface.co/cag/anything-v3-1/resolve/main/anything-v3-2.safetensors); both models are provided in `.safetensors` format.
+ - Adjust your prompt with aesthetic tags to get better results. You can use any generic negative prompt, or use the following suggested negative prompt to guide the model towards high-aesthetic generations:
+ ```
+ lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
+ ```
+ - The following should also be prepended to prompts to get high-aesthetic results (a small sketch of combining both sets of tags follows this list):
+ ```
+ masterpiece, best quality, illustration, beautiful detailed, finely detailed, dramatic light, intricate details
+ ```
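
For reference, a minimal sketch of how the two sets of tags above might be combined with a subject prompt before passing it to any generation backend. The `build_prompt` helper is purely illustrative and not part of the model or any library:

```python
# Illustrative helper (hypothetical): prepend the suggested quality tags to a
# subject prompt and pair the result with the suggested negative prompt.
QUALITY_PREFIX = (
    "masterpiece, best quality, illustration, beautiful detailed, "
    "finely detailed, dramatic light, intricate details"
)
NEGATIVE_PROMPT = (
    "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, "
    "fewer digits, cropped, worst quality, low quality, normal quality, "
    "jpeg artifacts, signature, watermark, username, blurry"
)

def build_prompt(subject: str) -> tuple[str, str]:
    """Return (prompt, negative_prompt) with the aesthetic tags prepended."""
    return f"{QUALITY_PREFIX}, {subject}", NEGATIVE_PROMPT

prompt, negative_prompt = build_prompt("1girl, white hair, golden eyes, flower meadow")
```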
+ ## 🧨Diffusers
+
+ This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx) or run it on the [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and FLAX/JAX backends. The pretrained diffusers weights are currently based on Anything V3.1.
+
+ Install the dependencies below in order to run the pipeline:
+
+ ```bash
+ pip install diffusers transformers accelerate scipy safetensors
+ ```
+ Running the pipeline (if you don't swap the scheduler, it will run with the default DDIM scheduler; in this example we swap it to DPMSolverMultistepScheduler):
+
+ ```python
+ import torch
+ from torch import autocast
+ from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
+
+ model_id = "cag/anything-v3-1"
+
+ # Load the pipeline in half precision and use DPMSolverMultistepScheduler (DPM-Solver++) instead of the default DDIM
+ pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
+ pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
+ pipe = pipe.to("cuda")
+
+ prompt = "masterpiece, best quality, high quality, 1girl, solo, sitting, confident expression, long blonde hair, blue eyes, formal dress"
+ negative_prompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"
+
+ with autocast("cuda"):
+     image = pipe(prompt,
+                  negative_prompt=negative_prompt,
+                  width=512,
+                  height=728,
+                  guidance_scale=12,
+                  num_inference_steps=50).images[0]
+
+ image.save("anime_girl.png")
+ ```
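
The standalone `.safetensors` checkpoints linked in [How-to-Use](#how-to-use) can also be loaded directly. A minimal sketch, assuming a recent diffusers release where `StableDiffusionPipeline.from_single_file` is available (older releases exposed a similar `from_ckpt` method instead):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the single-file Anything V3.2 checkpoint; the diffusers folder layout in
# this repository is based on V3.1, so this is one way to try V3.2 with diffusers.
pipe = StableDiffusionPipeline.from_single_file(
    "https://huggingface.co/cag/anything-v3-1/resolve/main/anything-v3-2.safetensors",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA device is available

image = pipe("masterpiece, best quality, 1girl, brown hair, green eyes, garden").images[0]
image.save("anything_v3_2_sample.png")
```

If your installed diffusers version does not provide `from_single_file`, loading the repository with `from_pretrained` as shown above gives the V3.1 weights.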
+ ## Limitations
+ This model is overfitted and cannot follow prompts well, even after the text encoder has been fixed. This encourages lazy prompting, since you will get good results just by typing `1girl`. Additionally, this model is anime-based and biased towards anime female characters; it is difficult to generate masculine male characters without specific prompts. Furthermore, not much has changed compared to the Anything V3.0 base model, as this release only involved swapping the VAE and CLIP models and then fine-tuning for 50 epochs on small-scale datasets.
+
+ ## Examples
+
+ Here are some cherry-picked samples and a comparison between the available models:
+
+ ![Anime Girl](https://huggingface.co/cag/anything-v3-1/resolve/main/example-images/1girl.png)
+ ![Anime Boy](https://huggingface.co/cag/anything-v3-1/resolve/main/example-images/1boy.png)
+ ![Aesthetic](https://huggingface.co/cag/anything-v3-1/resolve/main/example-images/aesthetic.png)
+
+ ## License
+
+ This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
+ The CreativeML OpenRAIL License specifies:
+
+ 1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
+ 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
+ 3. You may redistribute the weights and use the model commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
+
+ [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
+
+
+ ## Credit
+ Public domain.
anything-v3-1.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d59c16c335fc778903fff4a66c0b6e7847ab352f24c294c157bb94a1d8e0596a
+ size 4265335622
anything-v3-1.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c8c497baf522e35b8a943d5dc18b63ce822031316d19f182f4b7d87e4fd82ac8
+ size 4265097179
anything-v3-2.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f4240fae2e1507308eecb0cdb39b74501f0e6dcc5d7786820fbd1ad918423c40
+ size 4265335302
anything-v3-2.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8bb4f2fde199c07dc96e56826ee120bad4e3d9068c2ce9bca7e1881c0c0a617b
+ size 4265096871
feature_extractor/preprocessor_config.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "crop_size": 224,
+   "do_center_crop": true,
+   "do_convert_rgb": true,
+   "do_normalize": true,
+   "do_resize": true,
+   "feature_extractor_type": "CLIPFeatureExtractor",
+   "image_mean": [
+     0.48145466,
+     0.4578275,
+     0.40821073
+   ],
+   "image_std": [
+     0.26862954,
+     0.26130258,
+     0.27577711
+   ],
+   "resample": 3,
+   "size": 224
+ }
model_index.json ADDED
@@ -0,0 +1,33 @@
+ {
+   "_class_name": "StableDiffusionPipeline",
+   "_diffusers_version": "0.10.2",
+   "feature_extractor": [
+     null,
+     null
+   ],
+   "requires_safety_checker": null,
+   "safety_checker": [
+     null,
+     null
+   ],
+   "scheduler": [
+     "diffusers",
+     "DDIMScheduler"
+   ],
+   "text_encoder": [
+     "transformers",
+     "CLIPTextModel"
+   ],
+   "tokenizer": [
+     "transformers",
+     "CLIPTokenizer"
+   ],
+   "unet": [
+     "diffusers",
+     "UNet2DConditionModel"
+   ],
+   "vae": [
+     "diffusers",
+     "AutoencoderKL"
+   ]
+ }
safety_checker/config.json ADDED
@@ -0,0 +1,171 @@
+ {
+   "_name_or_path": "clip-vit-large-patch14/",
+   "architectures": [
+     "SafetyChecker"
+   ],
+   "initializer_factor": 1.0,
+   "logit_scale_init_value": 2.6592,
+   "model_type": "clip",
+   "projection_dim": 768,
+   "text_config": {
+     "_name_or_path": "",
+     "add_cross_attention": false,
+     "architectures": null,
+     "attention_dropout": 0.0,
+     "bad_words_ids": null,
+     "bos_token_id": 0,
+     "chunk_size_feed_forward": 0,
+     "cross_attention_hidden_size": null,
+     "decoder_start_token_id": null,
+     "diversity_penalty": 0.0,
+     "do_sample": false,
+     "dropout": 0.0,
+     "early_stopping": false,
+     "encoder_no_repeat_ngram_size": 0,
+     "eos_token_id": 2,
+     "exponential_decay_length_penalty": null,
+     "finetuning_task": null,
+     "forced_bos_token_id": null,
+     "forced_eos_token_id": null,
+     "hidden_act": "quick_gelu",
+     "hidden_size": 768,
+     "id2label": {
+       "0": "LABEL_0",
+       "1": "LABEL_1"
+     },
+     "initializer_factor": 1.0,
+     "initializer_range": 0.02,
+     "intermediate_size": 3072,
+     "is_decoder": false,
+     "is_encoder_decoder": false,
+     "label2id": {
+       "LABEL_0": 0,
+       "LABEL_1": 1
+     },
+     "layer_norm_eps": 1e-05,
+     "length_penalty": 1.0,
+     "max_length": 20,
+     "max_position_embeddings": 77,
+     "min_length": 0,
+     "model_type": "clip_text_model",
+     "no_repeat_ngram_size": 0,
+     "num_attention_heads": 12,
+     "num_beam_groups": 1,
+     "num_beams": 1,
+     "num_hidden_layers": 12,
+     "num_return_sequences": 1,
+     "output_attentions": false,
+     "output_hidden_states": false,
+     "output_scores": false,
+     "pad_token_id": 1,
+     "prefix": null,
+     "problem_type": null,
+     "pruned_heads": {},
+     "remove_invalid_values": false,
+     "repetition_penalty": 1.0,
+     "return_dict": true,
+     "return_dict_in_generate": false,
+     "sep_token_id": null,
+     "task_specific_params": null,
+     "temperature": 1.0,
+     "tie_encoder_decoder": false,
+     "tie_word_embeddings": true,
+     "tokenizer_class": null,
+     "top_k": 50,
+     "top_p": 1.0,
+     "torch_dtype": null,
+     "torchscript": false,
+     "transformers_version": "4.21.0.dev0",
+     "typical_p": 1.0,
+     "use_bfloat16": false,
+     "vocab_size": 49408
+   },
+   "text_config_dict": {
+     "hidden_size": 768,
+     "intermediate_size": 3072,
+     "num_attention_heads": 12,
+     "num_hidden_layers": 12
+   },
+   "torch_dtype": "float32",
+   "transformers_version": null,
+   "vision_config": {
+     "_name_or_path": "",
+     "add_cross_attention": false,
+     "architectures": null,
+     "attention_dropout": 0.0,
+     "bad_words_ids": null,
+     "bos_token_id": null,
+     "chunk_size_feed_forward": 0,
+     "cross_attention_hidden_size": null,
+     "decoder_start_token_id": null,
+     "diversity_penalty": 0.0,
+     "do_sample": false,
+     "dropout": 0.0,
+     "early_stopping": false,
+     "encoder_no_repeat_ngram_size": 0,
+     "eos_token_id": null,
+     "exponential_decay_length_penalty": null,
+     "finetuning_task": null,
+     "forced_bos_token_id": null,
+     "forced_eos_token_id": null,
+     "hidden_act": "quick_gelu",
+     "hidden_size": 1024,
+     "id2label": {
+       "0": "LABEL_0",
+       "1": "LABEL_1"
+     },
+     "image_size": 224,
+     "initializer_factor": 1.0,
+     "initializer_range": 0.02,
+     "intermediate_size": 4096,
+     "is_decoder": false,
+     "is_encoder_decoder": false,
+     "label2id": {
+       "LABEL_0": 0,
+       "LABEL_1": 1
+     },
+     "layer_norm_eps": 1e-05,
+     "length_penalty": 1.0,
+     "max_length": 20,
+     "min_length": 0,
+     "model_type": "clip_vision_model",
+     "no_repeat_ngram_size": 0,
+     "num_attention_heads": 16,
+     "num_beam_groups": 1,
+     "num_beams": 1,
+     "num_hidden_layers": 24,
+     "num_return_sequences": 1,
+     "output_attentions": false,
+     "output_hidden_states": false,
+     "output_scores": false,
+     "pad_token_id": null,
+     "patch_size": 14,
+     "prefix": null,
+     "problem_type": null,
+     "pruned_heads": {},
+     "remove_invalid_values": false,
+     "repetition_penalty": 1.0,
+     "return_dict": true,
+     "return_dict_in_generate": false,
+     "sep_token_id": null,
+     "task_specific_params": null,
+     "temperature": 1.0,
+     "tie_encoder_decoder": false,
+     "tie_word_embeddings": true,
+     "tokenizer_class": null,
+     "top_k": 50,
+     "top_p": 1.0,
+     "torch_dtype": null,
+     "torchscript": false,
+     "transformers_version": "4.21.0.dev0",
+     "typical_p": 1.0,
+     "use_bfloat16": false
+   },
+   "vision_config_dict": {
+     "hidden_size": 1024,
+     "intermediate_size": 4096,
+     "num_attention_heads": 16,
+     "num_hidden_layers": 24,
+     "patch_size": 14
+   }
+ }
safety_checker/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:64b8393f1afd5a0c1ed2aa5f341fa7c08286839a48f3743162a76a2835c808bd
+ size 1216067303
scheduler/scheduler_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "_class_name": "DDIMScheduler",
+   "_diffusers_version": "0.10.2",
+   "beta_end": 0.012,
+   "beta_schedule": "scaled_linear",
+   "beta_start": 0.00085,
+   "clip_sample": false,
+   "num_train_timesteps": 1000,
+   "prediction_type": "epsilon",
+   "set_alpha_to_one": false,
+   "skip_prk_steps": true,
+   "steps_offset": 1,
+   "trained_betas": null
+ }
text_encoder/config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "_name_or_path": "openai/clip-vit-large-patch14",
+   "architectures": [
+     "CLIPTextModel"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 0,
+   "dropout": 0.0,
+   "eos_token_id": 2,
+   "hidden_act": "quick_gelu",
+   "hidden_size": 768,
+   "initializer_factor": 1.0,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 77,
+   "model_type": "clip_text_model",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "projection_dim": 768,
+   "torch_dtype": "float32",
+   "transformers_version": "4.26.0",
+   "vocab_size": 49408
+ }
text_encoder/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79f531155c765c22c89e23328793a2e91a1178070af961c57e2eae5f0509b65b
+ size 492265879
text_encoder/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:98124f3d5663b2f14ff08d4c29db93800622b4fcfa3d952bb6f9112f5d6dadd7
+ size 492307041
tokenizer/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer/special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "bos_token": {
+     "content": "<|startoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<|endoftext|>",
+   "unk_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer/tokenizer_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+   "add_prefix_space": false,
+   "bos_token": {
+     "__type": "AddedToken",
+     "content": "<|startoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "do_lower_case": true,
+   "eos_token": {
+     "__type": "AddedToken",
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "errors": "replace",
+   "model_max_length": 77,
+   "name_or_path": "runwayml/stable-diffusion-v1-5",
+   "pad_token": "<|endoftext|>",
+   "special_tokens_map_file": "./special_tokens_map.json",
+   "tokenizer_class": "CLIPTokenizer",
+   "unk_token": {
+     "__type": "AddedToken",
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
unet/config.json ADDED
@@ -0,0 +1,41 @@
+ {
+   "_class_name": "UNet2DConditionModel",
+   "_diffusers_version": "0.10.2",
+   "act_fn": "silu",
+   "attention_head_dim": 8,
+   "block_out_channels": [
+     320,
+     640,
+     1280,
+     1280
+   ],
+   "center_input_sample": false,
+   "cross_attention_dim": 768,
+   "down_block_types": [
+     "CrossAttnDownBlock2D",
+     "CrossAttnDownBlock2D",
+     "CrossAttnDownBlock2D",
+     "DownBlock2D"
+   ],
+   "downsample_padding": 1,
+   "dual_cross_attention": false,
+   "flip_sin_to_cos": true,
+   "freq_shift": 0,
+   "in_channels": 4,
+   "layers_per_block": 2,
+   "mid_block_scale_factor": 1,
+   "norm_eps": 1e-05,
+   "norm_num_groups": 32,
+   "num_class_embeds": null,
+   "only_cross_attention": false,
+   "out_channels": 4,
+   "sample_size": 64,
+   "up_block_types": [
+     "UpBlock2D",
+     "CrossAttnUpBlock2D",
+     "CrossAttnUpBlock2D",
+     "CrossAttnUpBlock2D"
+   ],
+   "upcast_attention": false,
+   "use_linear_projection": false
+ }
unet/diffusion_pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c727c1864179e43a3338a99e28e08f9c7b12cd46e56e60afb429e17c57644c14
+ size 3438366373
unet/diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3ccc116a8958b82f8e313ab3c78e1d74ad4e82ce9c245ae0bb494d97a993cba7
+ size 3438167503
vae/config.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "_class_name": "AutoencoderKL",
+   "_diffusers_version": "0.10.2",
+   "act_fn": "silu",
+   "block_out_channels": [
+     128,
+     256,
+     512,
+     512
+   ],
+   "down_block_types": [
+     "DownEncoderBlock2D",
+     "DownEncoderBlock2D",
+     "DownEncoderBlock2D",
+     "DownEncoderBlock2D"
+   ],
+   "in_channels": 3,
+   "latent_channels": 4,
+   "layers_per_block": 2,
+   "norm_num_groups": 32,
+   "out_channels": 3,
+   "sample_size": 256,
+   "up_block_types": [
+     "UpDecoderBlock2D",
+     "UpDecoderBlock2D",
+     "UpDecoderBlock2D",
+     "UpDecoderBlock2D"
+   ]
+ }
vae/diffusion_pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:876a906810a8b2470c3092f307742f6ad8b9dbf759fb7c0ff020d0c610c996da
+ size 334711857
vae/diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:586a3e1e239d17bbf1d9f41fe37bcfd8f62e488f7b23cf60b3db420ff9354efe
+ size 334643245