---
license: cc-by-4.0
pipeline_tag: image-to-image
tags:
- pytorch
- super-resolution
- pretrain
---

[Link to Github Release](https://github.com/Phhofm/models/releases/tag/4xmssim_realplksr_dysample_pretrain)

# 4xmssim_realplksr_dysample_pretrain
Scale: 4
Architecture: [RealPLKSR with DySample](https://github.com/muslll/neosr/?tab=readme-ov-file#supported-archs)
Architecture Option: [realplksr](https://github.com/muslll/neosr/blob/master/neosr/archs/realplksr_arch.py)

Author: Philip Hofmann
License: CC-BY-4.0
Purpose: Pretrained
Subject: Photography
Input Type: Images
Release Date: 27.06.2024

Dataset: [nomosv2](https://github.com/muslll/neosr/?tab=readme-ov-file#-datasets)
Dataset Size: 6000
OTF (on-the-fly augmentations): No
Pretrained Model: None (= from scratch)
Iterations: 200'000
Batch Size: 8
GT Size: 192, 512

Description: [DySample](https://arxiv.org/pdf/2308.15085) was recently added to RealPLKSR, which, from what I have seen, can resolve or help avoid the checkerboard / grid pattern on inference outputs. So with the [commits from three days ago, the 24.06.24, on neosr](https://github.com/muslll/neosr/commits/master/?since=2024-06-24&until=2024-06-24), I wanted to create a 4x photo pretrain that I can then use to train more RealPLKSR models with DySample, specifically to stabilize the beginning of training.
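As a rough illustration of why DySample can avoid the checkerboard pattern (this is a hypothetical simplification, not the actual neosr/RealPLKSR implementation): instead of a fixed transposed-convolution or pixel-shuffle head, a small conv predicts per-position sampling offsets and the features are resampled with `grid_sample`, so the upsampler is content-adaptive rather than a fixed kernel. A minimal PyTorch sketch, with made-up names and a simplified offset scheme:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DySampleSketch(nn.Module):
    """Toy sketch of DySample-style dynamic upsampling (simplified)."""

    def __init__(self, channels: int, scale: int = 4):
        super().__init__()
        self.scale = scale
        # Predict an (x, y) offset for each of the scale**2 sub-positions.
        self.offset = nn.Conv2d(channels, 2 * scale * scale, 3, padding=1)
        # Zero-init so the module starts as plain bilinear upsampling.
        nn.init.zeros_(self.offset.weight)
        nn.init.zeros_(self.offset.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        s = self.scale
        # (B, 2*s*s, H, W) -> (B, 2, s*H, s*W): one offset pair per output pixel.
        offset = F.pixel_shuffle(self.offset(x), s) * 0.25
        hh, ww = h * s, w * s
        # Base sampling grid in [-1, 1], matching grid_sample conventions.
        ys = torch.linspace(-1, 1, hh, device=x.device)
        xs = torch.linspace(-1, 1, ww, device=x.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        grid = torch.stack((gx, gy), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        # Shift each sample point by its learned offset and resample.
        grid = grid + offset.permute(0, 2, 3, 1)
        return F.grid_sample(x, grid, mode="bilinear", align_corners=True)
```

Because each output pixel samples the feature map at a learned, smoothly varying position instead of through a fixed stride pattern, there is no repeating kernel footprint to imprint a grid onto the output.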

Showcase:
[Imgsli](https://imgsli.com/Mjc0OTA1)
[Slowpics](https://slow.pics/c/I9grkcqM)

![Example1](https://github.com/Phhofm/models/assets/14755670/25406570-3388-4d22-ae68-68560c8bd917)
![Example2](https://github.com/Phhofm/models/assets/14755670/bf3e946b-9646-441e-a15e-9bbc290a8885)
![Example3](https://github.com/Phhofm/models/assets/14755670/7e5ccec4-f485-4e02-a76b-9ec8827ee663)
![Example4](https://github.com/Phhofm/models/assets/14755670/9c665c12-c30f-4b7f-a0a6-46a3645633fe)
![Example5](https://github.com/Phhofm/models/assets/14755670/4868ba82-fe8a-468c-bfd4-1f471d3ba361)
![Example6](https://github.com/Phhofm/models/assets/14755670/3ee42167-721f-4866-8529-d0a19f121ff1)
![Example7](https://github.com/Phhofm/models/assets/14755670/a3a23246-8f58-4810-823b-10b701bdbced)
![Example8](https://github.com/Phhofm/models/assets/14755670/f6665676-0845-4019-a476-ed253306f838)
![Example9](https://github.com/Phhofm/models/assets/14755670/3da3a0d9-ef5e-4e66-a1d1-03dae2bf437b)
![Example10](https://github.com/Phhofm/models/assets/14755670/3468d468-09a0-42e4-945b-f63e48d9744b)
![Example11](https://github.com/Phhofm/models/assets/14755670/83cfa6d2-3fc9-4099-834a-129a6e87e4de)