---
license: cc-by-4.0
pipeline_tag: image-to-image
tags:
- pytorch
- super-resolution
- pretrain
---

[Link to Github Release](https://github.com/Phhofm/models/releases/tag/4xSPAN_pretrains)

[Neosr](https://github.com/muslll/neosr)'s latest update from yesterday included a [new adaptation of the multi-scale SSIM loss](https://github.com/muslll/neosr/wiki/Losses#mssim_opt). This was an experiment to test the difference between training a SPAN pretrain with pixel loss using the L1 criterion (as often used in research) versus mssim loss as its only loss. Both models are provided so they can be used for tests or as a pretrain for another SPAN model.

---

## 4xpix_span_pretrain

Scale: 4
Architecture: SPAN
Author: Philip Hofmann
License: CC-BY-4.0
Purpose: Pretrain
Subject: Realistic, Anime
Date: 10.04.2024
Dataset: [nomos_uni](https://github.com/muslll/neosr)
Dataset Size: 2989
OTF (on the fly augmentations): No
Pretrained Model: None
Iterations: 80'000
Batch Size: 12
GT Size: 128

Description: 4x SPAN pretrain trained with pixel loss using the L1 criterion (as often used in research) on the nomos_uni dataset, downsampled with Kim's [dataset destroyer](https://github.com/Kim2091/helpful-scripts/tree/main/Dataset%20Destroyer) using down_up, linear, cubic_mitchell, lanczos, gauss and box (down_up used the same algorithms with range = 0.15,1.5). The new augmentations except CutBlur were also used, since CutBlur is meant for real-world SR and may cause undesired effects when applied to bicubic-only degradations. Config and training log are provided for more details.

---

## 4xmssim_span_pretrain

Scale: 4
Architecture: SPAN
Author: Philip Hofmann
License: CC-BY-4.0
Purpose: Pretrain
Subject: Realistic, Anime
Date: 10.04.2024
Dataset: [nomos_uni](https://github.com/muslll/neosr)
Dataset Size: 2989
OTF (on the fly augmentations): No
Pretrained Model: None
Iterations: 80'000
Batch Size: 12
GT Size: 128

Description: 4x SPAN pretrain trained with [neosr](https://github.com/muslll/neosr)'s [new adaptation of the multi-scale SSIM loss](https://github.com/muslll/neosr/wiki/Losses#mssim_opt) from yesterday's update on the nomos_uni dataset, downsampled with Kim's [dataset destroyer](https://github.com/Kim2091/helpful-scripts/tree/main/Dataset%20Destroyer) using down_up, linear, cubic_mitchell, lanczos, gauss and box (down_up used the same algorithms with range = 0.15,1.5). The new augmentations except CutBlur were also used, since CutBlur is meant for real-world SR and may cause undesired effects when applied to bicubic-only degradations. Config and training log are provided for more details.

---

Showcase: [7 Slowpics Examples](https://slow.pics/c/zyilXhKU)

![Example1](https://github.com/Phhofm/models/assets/14755670/009a554c-e642-40e0-a12d-41e85c3ff618)
![Example2](https://github.com/Phhofm/models/assets/14755670/1e81ca78-6122-4e23-bd25-1b654c09bfce)
![Example3](https://github.com/Phhofm/models/assets/14755670/a654503c-3ce3-46d6-a724-e5c43e5292c5)
![Example4](https://github.com/Phhofm/models/assets/14755670/15be1785-705d-4584-bae3-9ff5fdcbb8a6)
![Example5](https://github.com/Phhofm/models/assets/14755670/7539f74f-8f47-4b05-aed8-7f41b4e8c8f7)
![Example6](https://github.com/Phhofm/models/assets/14755670/05c4c383-b5ac-4403-93c5-1ac5d59b4875)
![Example7](https://github.com/Phhofm/models/assets/14755670/22272b73-c340-471a-9cba-defcddf5b9f7)
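
---

For reference, here is a minimal sketch of the two objectives being compared. It uses the generic `pytorch_msssim` package as a stand-in for neosr's `mssim_opt` adaptation, which differs in its implementation, so treat this only as an illustration of the idea rather than the exact training losses used here.

```python
# Sketch of the two objectives compared in this release: pixel loss with the
# L1 criterion vs. a multi-scale SSIM loss. Uses the generic `pytorch_msssim`
# package as a stand-in for neosr's own `mssim_opt` adaptation.
import torch
import torch.nn as nn
from pytorch_msssim import MS_SSIM

l1_loss = nn.L1Loss()
ms_ssim = MS_SSIM(data_range=1.0, channel=3)  # expects NCHW images in [0, 1]

sr = torch.rand(1, 3, 512, 512)  # placeholder network output
gt = torch.rand(1, 3, 512, 512)  # placeholder ground truth

pixel_l1 = l1_loss(sr, gt)           # objective of 4xpix_span_pretrain
mssim_loss = 1.0 - ms_ssim(sr, gt)   # objective of 4xmssim_span_pretrain
                                     # (higher MS-SSIM is better, so minimize 1 - MS-SSIM)

print(f"L1: {pixel_l1.item():.4f}  1 - MS-SSIM: {mssim_loss.item():.4f}")
```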
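
A minimal inference sketch for trying either pretrain, assuming the released `.pth` checkpoint and the `spandrel`, `torch`, `numpy` and `Pillow` packages (checkpoint and image filenames below are placeholders, adjust them to the actual files):

```python
# Sketch: load one of the released SPAN pretrains with spandrel (which
# supports the SPAN architecture) and upscale a single image 4x.
import numpy as np
import torch
from PIL import Image
from spandrel import ModelLoader

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder filename; use 4xpix_span_pretrain.pth or 4xmssim_span_pretrain.pth from the release.
model = ModelLoader().load_from_file("4xmssim_span_pretrain.pth")
model.to(device)
model.eval()

img = Image.open("input.png").convert("RGB")
lr = torch.from_numpy(np.array(img)).permute(2, 0, 1).float().div_(255).unsqueeze(0).to(device)

with torch.inference_mode():
    sr = model(lr).squeeze(0).clamp_(0, 1)

out = (sr.permute(1, 2, 0).cpu().numpy() * 255.0).round().astype(np.uint8)
Image.fromarray(out).save("output_4x.png")
```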