Commit b65b4c9
Parent: c70c1a8

Update d-adaptation/notes.md

d-adaptation/notes.md CHANGED (+3 -0)
@@ -9,6 +9,9 @@ I have tried dim 8 alpha 1 with success as well as failure. Both Amber and Casto
 UMP ends up with image generations that look like a single brown square; still testing whether alpha has a relationship to this issue.
 As noted in the same GitHub issue, alpha/rank scaling makes the gradient update smaller and thus causes d-adaptation to boost the learning rate. This could be the reason why it goes bad.
 
+UMP redone at dim 8 alpha 8 showed a recognizable character but still significantly degraded aesthetics and prompt coherence.
+
+
 ## Dim
 128 dim shows some local noisy patterns. Re-ranking the model to a lower dim from 128 doesn't get rid of it. Converting the weights of the last up block in the unet does, but it also causes a noticeable change in the generated character. Obviously you could reduce the last up block by a smaller amount.
 Lower dims show good performance. Need a much larger test to check accuracy between them.
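
To make the alpha/rank point in the note above concrete: the LoRA delta is applied with an alpha/rank scale, so the gradients on the two LoRA factors carry that same factor, and D-Adaptation, which sizes its step from an estimated distance d, tends to push the effective learning rate up when gradients are systematically small. The snippet below is a toy PyTorch sketch with a hypothetical linear LoRA pair and a non-standard small random init, not the trainer's actual code:

```python
# Toy sketch (hypothetical module, not the trainer's code): the LoRA delta is
# applied as (alpha / rank) * B @ A, so the gradients on A and B carry the same
# alpha/rank factor; smaller gradients are what give D-Adaptation room to grow
# its distance estimate d and hence the effective learning rate.
import torch

d_in, d_out, rank = 16, 16, 8
x = torch.randn(4, d_in)
target = torch.randn(4, d_out)

def lora_grad_norm(alpha: float) -> float:
    """Gradient norm over the LoRA factors for a given alpha (rank fixed at 8)."""
    torch.manual_seed(0)  # identical init for every alpha so only the scale differs
    A = (0.01 * torch.randn(rank, d_in)).requires_grad_()   # toy init, not standard LoRA init
    B = (0.01 * torch.randn(d_out, rank)).requires_grad_()
    scale = alpha / rank                      # the alpha/rank scaling in question
    y = x @ (scale * (B @ A)).T               # LoRA delta applied to the input
    loss = ((y - target) ** 2).mean()
    loss.backward()
    return torch.cat([A.grad.flatten(), B.grad.flatten()]).norm().item()

print(lora_grad_norm(alpha=8.0))  # scale = 1.0
print(lora_grad_norm(alpha=1.0))  # scale = 1/8 -> gradient norm roughly 8x smaller
```

With rank 8, the alpha 1 run reports a gradient norm roughly 8x smaller than the alpha 8 run, which is the shrinkage that d-adaptation ends up compensating for with a larger learning rate.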
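
On the re-ranking point: reducing a trained LoRA from dim 128 to a lower dim is typically done per module by reconstructing the delta weight and keeping only the top singular components. The helper below is a minimal sketch of that core step under those assumptions (hypothetical function name and toy shapes; real resize tools also rescale alpha and iterate over every module):

```python
# Minimal sketch (hypothetical helper, not an actual resize script) of
# re-ranking one LoRA module from rank 128 down to a lower rank via
# truncated SVD of the reconstructed delta weight.
import torch

def resize_lora_pair(up: torch.Tensor, down: torch.Tensor, new_rank: int):
    """up: (out, r), down: (r, in) -> pair of shapes (out, new_rank), (new_rank, in)."""
    delta = up @ down                                        # full delta weight, (out, in)
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    U, S, Vh = U[:, :new_rank], S[:new_rank], Vh[:new_rank]  # keep top singular components
    new_up = U * S.sqrt()                                    # split singular values between factors
    new_down = S.sqrt().unsqueeze(1) * Vh
    return new_up, new_down

# toy check: a rank-128 pair reduced to rank 8
up = torch.randn(320, 128) * 0.01
down = torch.randn(128, 768) * 0.01
new_up, new_down = resize_lora_pair(up, down, new_rank=8)
print(new_up.shape, new_down.shape)   # torch.Size([320, 8]) torch.Size([8, 768])
```

The sqrt split of the singular values just keeps the two factors at similar magnitude; folding all of S into either factor gives the same product. Applying a milder rank cut only to the last up block's modules would be the "reduce the last up block by a smaller amount" variant mentioned in the note.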