---
license: cc-by-4.0
pipeline_tag: image-to-image
tags:
- pytorch
- super-resolution
---

[Link to Github Release](https://github.com/Phhofm/models/releases/tag/4xNomos2_hq_drct-l)

# 4xNomos2_hq_dat2
Scale: 4
Architecture: [DAT](https://github.com/zhengchen1999/dat)
Architecture Option: [dat2](https://github.com/muslll/neosr/blob/5fba7f162d36052010169e6517dec3b406c569ab/neosr/archs/dat_arch.py#L1111)

Author: Philip Hofmann
License: CC-BY-4.0
Purpose: Upscaler
Subject: Photography
Input Type: Images
Release Date: 29.08.2024

Dataset: [nomosv2](https://github.com/muslll/neosr/?tab=readme-ov-file#-datasets)
Dataset Size: 6000
OTF (on the fly augmentations): No
Pretrained Model: DAT_2_x4
Iterations: 140'000
Batch Size: 4
Patch Size: 48

Description:
A dat2 4x upscaling model, similar to the [4xNomos2_hq_mosr](https://github.com/Phhofm/models/releases/tag/4xNomos2_hq_mosr) model, trained for use on non-degraded input to produce good-quality output.

I scored 7 validation outputs of each of the 21 checkpoints (10k-210k) of this model training with 68 metrics.
[The metric scores can be found in this google sheet](https://docs.google.com/spreadsheets/d/1NL-by7WvZyDMHj5XN8UeDALVSSwH70IKvwV65ATWqrA/edit?usp=sharing).
The corresponding image files for this scoring can be [found here](https://drive.google.com/file/d/1ZTp9fBMeawftNqzg4RN9_zIvHtul5jVc/view?usp=sharing).
Screenshot of the google sheet:
![image](https://github.com/user-attachments/assets/bc6ff9e5-d012-4b15-9e7b-766896cf3d2f)

The release checkpoint was selected by looking at the scores, inspecting the outputs manually, and then asking for feedback on Discord with a quick visual comparison of three different checkpoints (A, B, and C): https://slow.pics/c/8Akzj6rR. The responses favored checkpoint B.

Checkpoint B is 140k, which became 4xNomos2_hq_dat2. I additionally included the model files for checkpoint A (4xNomos2_hq_dat2_150000) and checkpoint C (4xNomos2_hq_dat2_10000) here, in case people want to try them out.

## Model Showcase:
[Slowpics](https://slow.pics/c/yuue9WpF)

(Click on image for better view)
![Example1](https://github.com/user-attachments/assets/151d3f10-ea2d-4466-a4ed-595f164ec025)
![Example2](https://github.com/user-attachments/assets/9ac764ff-42a7-4a50-89a8-dbde3ca4407e)
![Example3](https://github.com/user-attachments/assets/62fc5c91-1320-4561-bb1d-c7c5c740ca7d)
![Example4](https://github.com/user-attachments/assets/5f44ff1a-a2d3-4942-9f73-c6a3b41fe15b)
![Example5](https://github.com/user-attachments/assets/46baa7c5-5a75-4971-8f3c-01657efd566f)
![Example6](https://github.com/user-attachments/assets/1dff06a4-0870-4d57-bd50-22409023da64)
![Example7](https://github.com/user-attachments/assets/b7681172-c560-4d93-96f3-07b206ad699b)