---
license: openrail++
---

# Terminus XL Gamma

## Model Details

### Model Description

Terminus XL Gamma is a state-of-the-art latent diffusion model that uses a zero-terminal SNR noise schedule and a velocity-prediction (v-prediction) objective at both training and inference time.

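For reference, "zero-terminal SNR" means the noise schedule is rescaled so that its final timestep carries no signal at all, which is what lets the model start sampling from pure noise. A minimal sketch of that rescaling, following the formulation popularised by "Common Diffusion Noise Schedules and Sample Steps are Flawed" (available in `diffusers` as `rescale_zero_terminal_snr`), is shown below; it is illustrative rather than the exact training code.

```python
import torch

def enforce_zero_terminal_snr(betas: torch.Tensor) -> torch.Tensor:
    """Rescale a beta schedule so the final timestep has exactly zero SNR."""
    alphas_bar_sqrt = torch.cumprod(1.0 - betas, dim=0).sqrt()

    # Shift so sqrt(alpha_bar) reaches 0 at the last step, then rescale so the
    # first step keeps its original value.
    a_first, a_last = alphas_bar_sqrt[0].clone(), alphas_bar_sqrt[-1].clone()
    alphas_bar_sqrt = (alphas_bar_sqrt - a_last) * a_first / (a_first - a_last)

    # Convert the rescaled cumulative product back into per-step betas.
    alphas_bar = alphas_bar_sqrt ** 2
    alphas = torch.cat([alphas_bar[0:1], alphas_bar[1:] / alphas_bar[:-1]])
    return 1.0 - alphas
```
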
Terminus shares SDXL's architecture and layout. It was trained for fewer steps, on very high quality captioned data from COCO and Midjourney.

This model does not cover as many concepts as SDXL, and some subjects will simply look very bad.

The goal of this project was to use min-SNR gamma loss weighting to efficiently train a full model on a single A100-80G.

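Min-SNR gamma weighting clamps each timestep's contribution to the loss by its signal-to-noise ratio, so easy low-noise steps stop dominating training. A rough sketch of how that weighting is commonly applied to a v-prediction objective is below; `gamma = 5.0` and the shape handling are assumptions, not the exact SimpleTuner configuration.

```python
import torch
import torch.nn.functional as F

def min_snr_v_loss(model_pred, target, snr, gamma: float = 5.0):
    """MSE weighted by min(SNR, gamma) / (SNR + 1), the usual form for v-prediction.

    `snr` holds alpha_bar_t / (1 - alpha_bar_t) for each sample's timestep.
    gamma = 5.0 is the value suggested in the min-SNR paper, assumed here.
    """
    weights = torch.minimum(snr, torch.full_like(snr, gamma)) / (snr + 1.0)
    loss = F.mse_loss(model_pred.float(), target.float(), reduction="none")
    loss = loss.mean(dim=list(range(1, loss.dim())))  # per-sample mean
    return (weights * loss).mean()
```
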
- **Developed by:** pseudoterminal X (@bghira)
- **Funded by:** pseudoterminal X (@bghira)
- **Model type:** Latent Diffusion
- **License:** openrail++
- **Architecture:** SDXL

### Model Sources

- **Repository:** https://github.com/bghira/SimpleTuner

## Uses

### Direct Use

Terminus XL Gamma can be used to generate high-quality images from text prompts. It should particularly excel at inpainting tasks, where the zero-terminal SNR noise schedule allows it to retain contrast more effectively.

The model can be used in creative industries such as art, advertising, and entertainment to create visually appealing content.

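A hedged example of loading the model with `diffusers` follows. Because the checkpoint uses v-prediction and a zero-terminal SNR schedule, the scheduler should be configured to match (`prediction_type="v_prediction"`, `rescale_betas_zero_snr=True`, `timestep_spacing="trailing"`), and a non-zero `guidance_rescale` helps avoid the washed-out images that zero-SNR sampling can otherwise produce. The repository id and sampling values below are illustrative assumptions.

```python
import torch
from diffusers import StableDiffusionXLPipeline, DDIMScheduler

# Hypothetical repository id; substitute this model's actual Hugging Face repo.
model_id = "ptx0/terminus-xl-gamma"

pipe = StableDiffusionXLPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Match the training objective: v-prediction with a zero-terminal SNR schedule.
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",
    rescale_betas_zero_snr=True,
    timestep_spacing="trailing",
)
pipe.to("cuda")

image = pipe(
    prompt="a photograph of a lighthouse at dusk",
    num_inference_steps=30,
    guidance_scale=6.0,
    guidance_rescale=0.7,  # counteracts over-exposure with zero-SNR schedules
).images[0]
image.save("output.png")
```
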
### Downstream Use

Terminus XL Gamma can be fine-tuned for specific tasks such as image super-resolution, style transfer, and more.

### Out-of-Scope Use

The model is not designed for tasks outside of image generation. It should not be used to produce harmful content or to deceive others. Please use common sense.

## Bias, Risks, and Limitations

The model may exhibit biases present in its training data. Generated images should be carefully reviewed to ensure they meet ethical and societal standards.

### Recommendations

Users should be cautious of potential biases in the generated images and thoroughly review them before use.

## Training Details

### Training Data

This model's success largely depended on a relatively small collection of very high quality data samples.

* LAION-HD, filtered down to samples with EXIF data and without watermarks. The luminance of samples was capped at 100 (.5); a rough sketch of such a filter follows this list.
* Midjourney 5.2 dataset `ptx0/mj-general` with zero filtration.
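Purely as an illustration of the luminance cap mentioned above, the mean luma of a sample could be computed with Rec. 601 weights as in the sketch below; the exact scale and threshold handling used during dataset filtering are assumptions here.

```python
import numpy as np
from PIL import Image

def mean_luminance(path: str) -> float:
    """Mean Rec. 601 luma of an image, on a 0-255 scale."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return float(luma.mean())

# Hypothetical usage: keep only samples whose mean luminance stays under the cap.
candidate_paths = ["sample_0001.jpg", "sample_0002.jpg"]  # placeholder file list
kept = [p for p in candidate_paths if mean_luminance(p) <= 100.0]
```
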
### Training Procedure

#### Preprocessing

Preprocessing followed SDXL's pretraining procedure, using crop-coordinate conditioning inputs and centre-cropped images, with each image's full original size supplied as conditioning.

Training ran at 512x512, then 768x768, and finally at ~1 megapixel multi-aspect resolution for the remainder of the training time.

Images were downsampled while maintaining aspect ratio and cropped to 64-pixel increments. Many aspect ratios were trained, but only a few are likely to work fully.

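As a rough sketch of the downsample-and-crop step described above (the target pixel area and helper name are assumptions, not SimpleTuner's exact code):

```python
from PIL import Image

def resize_and_crop_to_64(img: Image.Image, target_pixels: int = 1024 * 1024) -> Image.Image:
    """Downsample to roughly `target_pixels` while keeping aspect ratio,
    then centre-crop each side down to the nearest multiple of 64."""
    w, h = img.size
    scale = (target_pixels / (w * h)) ** 0.5
    if scale < 1.0:  # only downsample, never upsample
        w, h = round(w * scale), round(h * scale)
        img = img.resize((w, h), Image.LANCZOS)

    new_w, new_h = (w // 64) * 64, (h // 64) * 64
    left, top = (w - new_w) // 2, (h - new_h) // 2
    return img.crop((left, top, left + new_w, top + new_h))
```
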
#### Training Hyperparameters

- **Training regime:** bf16 mixed precision
- **Learning rate:** 4e-7 to 8e-7, cosine schedule (one possible setup is sketched below)
- **Epochs:** 60
- **Batch size:** 24 × 15 = 360
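One plausible reading of the learning-rate line above is a cosine schedule annealing from the 8e-7 peak down to a 4e-7 floor. The sketch below shows that setup with a stand-in module; the optimizer choice and step count are assumptions rather than the recorded configuration.

```python
import torch

# Stand-in module; in practice this would be the SDXL UNet being trained.
model = torch.nn.Linear(4, 4)
total_steps = 10_000  # hypothetical step count

optimizer = torch.optim.AdamW(model.parameters(), lr=8e-7, weight_decay=1e-2)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=total_steps, eta_min=4e-7
)
```
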
#### Speeds, Sizes, Times

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

[More Information Needed]

## Environmental Impact

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications

### Model Architecture and Objective

The model uses an SDXL-compatible latent diffusion architecture with a min-SNR-weighted velocity-prediction (v-prediction) objective.

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary

[More Information Needed]

## More Information

[More Information Needed]

## Model Card Authors

[More Information Needed]

## Model Card Contact

[More Information Needed]