---
license: cc-by-4.0
pipeline_tag: image-to-image
tags:
- pytorch
- super-resolution
---

[Link to Github Release](https://github.com/Phhofm/models/releases/tag/4xHFA2k_ludvae_realplksr_dysample)

# 4xFFHQLDAT

Since the 4xFFHQDAT model cannot handle the noise present in low quality input images, I made a small variant/finetune of it: the 4xFFHQLDAT model. This model may come in handy if your input image is of poor quality or otherwise not suited for 4xFFHQDAT. I made it in response to an input image posted in the upscaling-results channel as a request for such an upscaling model (since 4xFFHQDAT could not handle the noise); see the Imgsli1 example below for the result.

Name: 4xFFHQLDAT
Author: Philip Hofmann
Release Date: 25.08.2023
License: CC BY 4.0
Network: DAT
Scale: 4
Purpose: 4x upscaling model for low quality input photos of faces
Iterations: 44000
batch_size: 4
HR_size: 128
Dataset: FFHQ - full dataset up to 50k images, then the first 10k images multiscaled (resulting in ~260k images, 126 GB)
Number of train images: 259990
OTF Training: Yes
Pretrained_Model_G: 4xFFHQDAT
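
To run the released checkpoint outside of GUI tools such as chaiNNer, the sketch below shows one possible PyTorch inference path. It is a minimal illustration, not part of the official release: it assumes the weights were saved locally as `4xFFHQLDAT.pth` and uses the spandrel library to auto-detect the DAT architecture; the input and output file names are placeholders.

```python
# Minimal inference sketch (assumptions: weights saved locally as 4xFFHQLDAT.pth,
# architecture loading handled by the spandrel library).
import numpy as np
import torch
from PIL import Image
from spandrel import ImageModelDescriptor, ModelLoader

# Load the checkpoint; spandrel detects the DAT architecture from the state dict.
model = ModelLoader().load_from_file("4xFFHQLDAT.pth")
assert isinstance(model, ImageModelDescriptor) and model.scale == 4
model.eval()  # CPU inference; add model.cuda() and move the tensors if a GPU is available

# Read the low quality face photo as a (1, 3, H, W) float tensor in [0, 1].
img = Image.open("input.png").convert("RGB")
lr = torch.from_numpy(np.array(img)).permute(2, 0, 1).float().div(255).unsqueeze(0)

with torch.no_grad():
    sr = model(lr)  # (1, 3, 4H, 4W)

# Convert back to an 8-bit image and save.
out = sr.squeeze(0).clamp(0, 1).mul(255).round().byte().permute(1, 2, 0).numpy()
Image.fromarray(out).save("output_x4.png")
```

The model expects an RGB image and returns an output at four times the input resolution, matching the Scale: 4 entry above.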

Examples 4xFFHQLDAT:
[Imgsli1](https://imgsli.com/MjAwNjYx)
[Imgsli2](https://imgsli.com/MjAwNjYy)
[Imgsli3](https://imgsli.com/MjAwNjYz)


![Example6](https://github.com/Phhofm/models/assets/14755670/61b3cff7-117b-4510-bdcf-cd49a1494227)
![Example7](https://github.com/Phhofm/models/assets/14755670/de8e63a4-3b7b-4583-b638-720bb6423b2d)