---
license: mit
language:
- en
tags:
- code
pretty_name: upscaler
size_categories:
- 100K<n<1M
task_categories:
- image-to-image
---
|
|
|
![Thumbnail](thumbnail.jpg) |
|
|
|
# Dataset Card for Latent Diffusion Super Sampling |
|
|
|
Image datasets for building image/video upscaling networks. |
|
|
|
This repository contains training and inference code for models based on the following works:
|
|
|
## Part 1: Sub-Pixel Convolutional Network for Upscaling, Trained on 5,000 Individual 720p-4K and 1080p-4K Image Pairs
|
|
|
### Reference: Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network, Shi et al.
|
|
|
Shi, W., Caballero, J., Huszár, F., Totz, J., Aitken, A.P., Bishop, R., Rueckert, D. and Wang, Z., 2016. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
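For orientation, here is a minimal PyTorch sketch of an ESPCN-style network built around sub-pixel convolution (`nn.PixelShuffle`), the core idea of the paper above. It is illustrative only; the layer sizes and the upscale factor are assumptions and do not necessarily match the checkpoints shipped in this repository.

```python
import torch
import torch.nn as nn

class ESPCN(nn.Module):
    """Illustrative ESPCN-style network: feature extraction in low-resolution
    space, then a sub-pixel convolution (PixelShuffle) layer for upscaling."""

    def __init__(self, upscale_factor: int = 3, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=5, padding=2),
            nn.Tanh(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1),
            nn.Tanh(),
            # Produce channels * r^2 feature maps, then rearrange them into
            # an image that is r times larger in each spatial dimension.
            nn.Conv2d(32, channels * upscale_factor ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(upscale_factor),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

# Example: a 3x model maps a 720p frame (1280x720) to 3840x2160 (4K).
model = ESPCN(upscale_factor=3)
with torch.no_grad():
    sr = model(torch.rand(1, 3, 720, 1280))  # -> (1, 3, 2160, 3840)
```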
|
|
|
### Results: |
|
|
|
720p images tested: 100 <br> |
|
Average PSNR: 40.44 dB <br> |
|
|
|
1080p images tested: 100 <br> |
|
Average PSNR: 43.05 dB <br> |
|
|
|
This outperforms the average PSNR reported for the architecture proposed in the original paper (28.09 dB). <br>
|
|
|
Refer to points 5, 6 and 7 in the "Dataset Description" section for further information. <br>
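For reference, the average PSNR figures above can be reproduced with a loop of the following form (a sketch only; image loading and the exact test split are left to the repository's ESPCN.ipynb):

```python
import numpy as np

def psnr(reference: np.ndarray, upscaled: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images of identical shape."""
    mse = np.mean((reference.astype(np.float64) - upscaled.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)

def average_psnr(pairs) -> float:
    """Mean PSNR over an iterable of (ground-truth 4K, upscaled) array pairs."""
    scores = [psnr(gt, sr) for gt, sr in pairs]
    return sum(scores) / len(scores)
```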
|
|
|
## Part 2: Convolutional Neural Network for Video Frame Interpolation via Spatially-Adaptive Separable Convolution, Trained for Real-Time 4K, 1080p and 720p Video
|
|
|
### Reference: Video Frame Interpolation via Adaptive Separable Convolution, Niklaus et al.
|
|
|
Niklaus, S., Mai, L. and Liu, F., 2017. Video Frame Interpolation via Adaptive Separable Convolution. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
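The core operation of that paper, filtering every pixel with its own separable kernel, can be sketched as follows. This is an illustration of the technique only; it assumes an odd kernel size K and omits the kernel-prediction sub-network that the trained FIASC models contain.

```python
import torch
import torch.nn.functional as F

def sep_conv(frame: torch.Tensor, k_v: torch.Tensor, k_h: torch.Tensor) -> torch.Tensor:
    """Filter each pixel of `frame` with its own KxK kernel, formed as the
    outer product of a vertical and a horizontal 1D kernel (K assumed odd).

    frame:    (B, C, H, W) input frame
    k_v, k_h: (B, K, H, W) per-pixel 1D kernels predicted by the network
    returns:  (B, C, H, W) filtered frame
    """
    B, C, H, W = frame.shape
    K = k_v.shape[1]
    pad = K // 2
    # Gather the KxK neighbourhood of every output pixel.
    patches = F.unfold(F.pad(frame, [pad] * 4), kernel_size=K)  # (B, C*K*K, H*W)
    patches = patches.view(B, C, K, K, H, W)
    # Outer product of the two 1D kernels gives the full KxK kernel per pixel.
    kernel = k_v.view(B, 1, K, 1, H, W) * k_h.view(B, 1, 1, K, H, W)
    return (patches * kernel).sum(dim=(2, 3))

# The interpolated frame is the sum of both neighbouring frames, each filtered
# with its own predicted kernel pair:
#   frame_mid = sep_conv(frame_1, k1_v, k1_h) + sep_conv(frame_2, k2_v, k2_h)
```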
|
|
|
### Results: |
|
|
|
720p Model:
- PSNR: 28.35 dB
- SSIM: 0.78

1080p Model:
- PSNR: 29.67 dB
- SSIM: 0.84

4K Model:
- PSNR: 33.74 dB
- SSIM: 0.83
|
|
|
Refer to points 8, 9 and 10 in the "Dataset Description" section for further information. <br>
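The PSNR and SSIM numbers above can be checked with scikit-image's reference implementations; a minimal per-frame sketch (frame loading and pairing are left out):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_frame(reference: np.ndarray, interpolated: np.ndarray):
    """PSNR (dB) and SSIM between a ground-truth frame and an interpolated
    frame, both given as HxWx3 uint8 arrays."""
    psnr = peak_signal_noise_ratio(reference, interpolated, data_range=255)
    # channel_axis=2 marks the images as RGB (scikit-image >= 0.19).
    ssim = structural_similarity(reference, interpolated, channel_axis=2, data_range=255)
    return psnr, ssim
```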
|
|
|
## Part 3: Latent Diffusion Super Sampling (coming soon)
|
|
|
Stay tuned!
|
|
|
## Dataset Details |
|
|
|
The dataset consists of 300,000 frames: ground-truth 720p and 1080p frames together with their corresponding 4K output frames.
|
|
|
### Dataset Description |
|
|
|
|
|
1. 4K_part1: Contains the first part of the 4K images.

2. 4K_part2: Contains the second part of the 4K images.

3. 720p: Contains 100,000 ground-truth 720p images.

4. 1080p: Contains 100,000 ground-truth 1080p images.
|
5. Additionally, you will find two ESPCN (Efficient Sub-Pixel Convolution Network) PyTorch models and a Jupyter Notebook (ESPCN.ipynb), which you can use for retraining or inference (a minimal checkpoint-loading sketch follows this list).
|
6. Selected Super Resolution 5000 contains 5,000 randomly picked 4K, 1080p and 720p image triplets.
|
7. Super Resolution Test 100 serves as the test dataset for the above training set. |
|
8. In the latest update, three FIASC (Frame Interpolation via Adaptive Separable Convolution) PyTorch models and a Jupyter Notebook (FIASC.ipynb) have been added for retraining or inference.
|
9. Frame Interpolation Training contains 6416 frames used for training the 4K, 1080p and 720p models.

10. Frame Interpolation Testing contains 1309 frames used for evaluating the 4K, 1080p and 720p models.
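As a starting point for inference with the bundled checkpoints (see points 5 and 8 above), something along these lines should work. The file name below is a placeholder, and whether a given .pth file stores a pickled module or a state dict depends on how it was saved, so treat this as a sketch and follow the notebooks for the authoritative workflow.

```python
import torch

# Placeholder file name: substitute one of the .pth files in this repository.
checkpoint = torch.load("espcn_1080p_to_4k.pth", map_location="cpu")

# If the file stores a full pickled nn.Module it can be used directly;
# if it stores a state dict, rebuild the matching architecture first
# (e.g. the ESPCN-style sketch in Part 1) and call load_state_dict on it.
model = checkpoint if isinstance(checkpoint, torch.nn.Module) else None
assert model is not None, "state dict checkpoint: rebuild the network and load the weights"

model.eval()
with torch.no_grad():
    # Upscale one low-resolution tensor of shape (1, 3, H, W) with values in [0, 1].
    sr = model(torch.rand(1, 3, 1080, 1920))
```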
|
|
|
### Dataset Sources |
|
|
|
YouTube |
|
|
|
## Uses |
|
|
|
Training and evaluating diffusion networks, CNNs, optical flow accelerators, etc.
|
|
|
## Dataset Structure |
|
|
|
1. All images are in .jpg format.

2. Images are named in the following format: *resolution_globalframenumber.jpg*

3. *Resolution* is one of three values: *720p*, *1080p* or *4K*.

4. *Globalframenumber* is the frame number of the image within its resolution, e.g. *4K_10090.jpg* (a small pairing sketch follows this list).
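Given that naming scheme, low-resolution frames can be matched to 4K frames by global frame number. The sketch below assumes the directory names listed under "Dataset Description" and assumes that equal frame numbers refer to the same content across resolutions; verify both against the actual files.

```python
from pathlib import Path

def index_frames(directory: str) -> dict:
    """Map global frame number -> path for every .jpg that follows the
    resolution_globalframenumber.jpg naming scheme."""
    frames = {}
    for path in Path(directory).glob("*.jpg"):
        _resolution, frame_number = path.stem.rsplit("_", 1)
        frames[int(frame_number)] = path
    return frames

# Pair 720p inputs with 4K targets that share a global frame number.
lr_frames = index_frames("720p")
hr_frames = {**index_frames("4K_part1"), **index_frames("4K_part2")}
pairs = [(lr_frames[n], hr_frames[n]) for n in sorted(lr_frames) if n in hr_frames]
```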
|
|
|
### Curation Rationale |
|
|
|
1. To build a real-time upscaling network using latent diffusion supersampling. |
|
2. To design algorithms for increasing the temporal resolution (framerate up-conversion) of videos in real time.
|
|
|
## Dataset Card Authors |
|
|
|
Alosh Denny |
|
|
|
## Dataset Card Contact |
|
|
|
aloshdenny@gmail.com |