---
license: cc-by-4.0
pipeline_tag: image-to-image
tags:
  - pytorch
  - super-resolution
---

Link to Github Release

# 4xFaceUpLDAT

This is the FaceUpDAT variant trained for 80k iterations for lower-quality input images, released here as a single Hugging Face model card with a single safetensors file in the Files tab.
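Since the release is a single safetensors file, its raw state dict can be inspected with the `safetensors` library. This is a hypothetical sketch, not part of the release: the filename is assumed, and `safetensors`/`torch` are third-party dependencies you would install separately.

```python
from pathlib import Path

# Assumed filename; check the repo's Files tab for the actual name.
MODEL_PATH = Path("4xFaceUpLDAT.safetensors")

try:
    # pip install safetensors torch
    from safetensors.torch import load_file

    if MODEL_PATH.exists():
        state_dict = load_file(str(MODEL_PATH))
        print(f"Loaded {len(state_dict)} tensors from {MODEL_PATH.name}")
    else:
        print(f"{MODEL_PATH} not found; download it from the repo first")
except ImportError:
    print("safetensors/torch not installed")
```

To actually run inference you would still need a DAT network definition matching these weights; tools such as chaiNNer handle that wiring for you.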
The original release text follows below.

Name: 4xFaceUpDAT
Author: Philip Hofmann
Release Date: 02.09.2023
License: CC BY 4.0
Network: DAT
Scale: 4
Purpose: 4x upscaling model for faces
Iterations: 140000
epoch: 54
batch_size: 4
HR_size: 128
Dataset: FaceUp (see dataset-releases channel)
Number of train images: 10'000
OTF Training: Yes
Pretrained_Model_G: DAT_x4.pth
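For context, the Scale and HR_size entries above imply the low-resolution patch size the network saw during training. A quick arithmetic sanity check (the helper name here is illustrative, not from the release):

```python
def output_size(width: int, height: int, scale: int = 4) -> tuple[int, int]:
    """Resolution produced by an SR model with the given scale factor."""
    return width * scale, height * scale

# HR crops of 128 px with a 4x scale mean the network trained on 32x32 LR patches.
HR_SIZE = 128
LR_SIZE = HR_SIZE // 4
print(f"LR patch: {LR_SIZE}x{LR_SIZE}")
print(output_size(32, 32))  # a 32x32 LR patch maps back to (128, 128)
```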

Description: A 4x photo upscaler for faces, trained on the FaceUp dataset. These models improve on the previously released 4xFFHQDAT and succeed it. They are released together with the FaceUp dataset and an accompanying YouTube video.

This model comes in 4 different versions:
- 4xFaceUpDAT (for good-quality input)
- 4xFaceUpLDAT (for lower-quality input; can additionally denoise)
- 4xFaceUpSharpDAT (for good-quality input; produces sharper output, trained without USM but with sharpened input images)
- 4xFaceUpSharpLDAT (for lower-quality input; produces sharper output, trained without USM but with sharpened input images; can additionally denoise)

Web Examples (Slowpoke pics):

High quality input:
https://slow.pics/c/rsHKvfv3
https://slow.pics/c/XMAcyBVV
https://slow.pics/c/yWQKYSea

Low quality input:
https://slow.pics/c/QboAlS0t

-- additional info about the other model versions, since I keep a single entry here --
- 4xFaceUpLDAT - iters: 80k, pretrain: 4xFaceUpDAT
- 4xFaceUpSharpDAT - iters: 100k, pretrain: 4xFaceUpDAT
- 4xFaceUpSharpLDAT - iters: 80k, pretrain: 4xFaceUpSharpDAT