PKUWilliamYang committed
Commit 81ffa96
1 Parent(s): 2edfe58

Update README.md

Files changed (1):
  README.md +39 -1
README.md CHANGED
@@ -1,3 +1,41 @@
  ---
- license: other
+ library_name: pytorch
+ tags:
+ - style-transfer
+ - face-stylization
  ---
+
+ ## Model Details
+
+ This system provides a web demo for the following paper:
+
+ **VToonify: Controllable High-Resolution Portrait Video Style Transfer (TOG/SIGGRAPH Asia 2022)**
+
+ - Developed by: Shuai Yang, Liming Jiang, Ziwei Liu and Chen Change Loy
+ - Resources for more information:
+   - [Project Page](https://www.mmlab-ntu.com/project/vtoonify/)
+   - [Research Paper](https://arxiv.org/abs/2209.11224)
+   - [GitHub Repo](https://github.com/williamyang1991/VToonify)
+
+ **Abstract**
+ > Generating high-quality artistic portrait videos is an important and desirable task in computer graphics and vision. Although a series of successful portrait image toonification models built upon the powerful StyleGAN have been proposed, these image-oriented methods have obvious limitations when applied to videos, such as the fixed frame size, the requirement of face alignment, missing non-facial details and temporal inconsistency. In this work, we investigate the challenging controllable high-resolution portrait video style transfer by introducing a novel **VToonify** framework. Specifically, VToonify leverages the mid- and high-resolution layers of StyleGAN to render high-quality artistic portraits based on the multi-scale content features extracted by an encoder to better preserve the frame details. The resulting fully convolutional architecture accepts non-aligned faces in videos of variable size as input, contributing to complete face regions with natural motions in the output. Our framework is compatible with existing StyleGAN-based image toonification models to extend them to video toonification, and inherits appealing features of these models for flexible style control on color and intensity. This work presents two instantiations of VToonify built upon Toonify and DualStyleGAN for collection-based and exemplar-based portrait video style transfer, respectively. Extensive experimental results demonstrate the effectiveness of our proposed VToonify framework over existing methods in generating high-quality and temporally-coherent artistic portrait videos with flexible style controls.
+
+
+ ## Citation Information
+ ```bibtex
+ @article{yang2022Vtoonify,
+   title={VToonify: Controllable High-Resolution Portrait Video Style Transfer},
+   author={Yang, Shuai and Jiang, Liming and Liu, Ziwei and Loy, Chen Change},
+   journal={ACM Transactions on Graphics (TOG)},
+   volume={41},
+   number={6},
+   articleno={203},
+   pages={1--15},
+   year={2022},
+   publisher={ACM New York, NY, USA},
+   doi={10.1145/3550454.3555437},
+ }
+ ```
+
+ ## License
+ [S-Lab License 1.0](https://github.com/williamyang1991/VToonify/blob/main/LICENSE.md)