# AniDoc: Animation Creation Made Easier
https://github.com/user-attachments/assets/99e1e52a-f0e1-49f5-b81f-e787857901e4
> **AniDoc: Animation Creation Made Easier**
>
[Yihao Meng](https://yihao-meng.github.io/)1,2, [Hao Ouyang](https://ken-ouyang.github.io/)2, [Hanlin Wang](https://openreview.net/profile?id=~Hanlin_Wang2)3,2, [Qiuyu Wang](https://github.com/qiuyu96)2, [Wen Wang](https://github.com/encounter1997)4,2, [Ka Leong Cheng](https://felixcheng97.github.io/)1,2 , [Zhiheng Liu](https://johanan528.github.io/)5, [Yujun Shen](https://shenyujun.github.io/)2, [Huamin Qu](http://www.huamin.org/index.htm/)†,2
1HKUST 2Ant Group 3NJU 4ZJU 5HKU †corresponding author
> AniDoc colorizes a sequence of sketches based on a character design reference with high fidelity, even when the sketches significantly differ in pose and scale.
**We strongly recommend visiting our [demo page](https://yihao-meng.github.io/AniDoc_demo).**
## Showcases:
## Flexible Usage:
### Same Reference with Varying Sketches
### Same Sketch with Different References
## TODO List
- [x] Release the paper and demo page. Visit [https://yihao-meng.github.io/AniDoc_demo/](https://yihao-meng.github.io/AniDoc_demo/)
- [x] Release the inference code.
- [ ] Build Gradio Demo
- [ ] Release the training code.
- [ ] Release the sparse sketch setting interpolation code.
## Requirements:
Training is conducted on 8 A100 GPUs (80 GB VRAM each); inference is tested on an RTX 5000 (32 GB VRAM). In our tests, inference requires about 14 GB of VRAM.
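If you are unsure whether your GPU has enough memory, a quick check (assuming an NVIDIA GPU with `nvidia-smi` available) is:
```
nvidia-smi --query-gpu=name,memory.total --format=csv
```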
## Setup
```
git clone https://github.com/yihao-meng/AniDoc.git
cd AniDoc
```
## Environment
All tests were conducted on Linux, and we recommend running our code on Linux. To set up the environment, run:
```
conda create -n anidoc python=3.8 -y
conda activate anidoc
bash install.sh
```
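Once `install.sh` finishes, a quick sanity check (assuming the script installs PyTorch with CUDA support, which inference requires) is to confirm that the environment can see your GPU:
```
conda activate anidoc
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```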
## Checkpoints
1. Download the pre-trained Stable Video Diffusion (SVD) checkpoints from [here](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid/tree/main) and put the whole folder under `pretrained_weights`, so that it looks like `./pretrained_weights/stable-video-diffusion-img2vid-xt`.
2. Download the checkpoints for our UNet and ControlNet from [here](https://huggingface.co/Yhmeng1106/anidoc/tree/main) and put the whole folder at `./pretrained_weights/anidoc`.
3. Download the CoTracker checkpoint from [here](https://huggingface.co/facebook/cotracker/blob/main/cotracker2.pth) and put it at `./pretrained_weights/cotracker2.pth`. A command-line alternative for all three downloads is sketched below.
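If you prefer downloading from the command line, a minimal sketch using `huggingface-cli` (from the `huggingface_hub` package) is shown below. The repository IDs are taken from the links above; double-check that the resulting folder names match the paths expected in steps 1-3:
```
pip install -U "huggingface_hub[cli]"
# Step 1: SVD backbone, placed where the README expects it
huggingface-cli download stabilityai/stable-video-diffusion-img2vid --local-dir pretrained_weights/stable-video-diffusion-img2vid-xt
# Step 2: AniDoc UNet + ControlNet
huggingface-cli download Yhmeng1106/anidoc --local-dir pretrained_weights/anidoc
# Step 3: CoTracker checkpoint (saved as pretrained_weights/cotracker2.pth)
huggingface-cli download facebook/cotracker cotracker2.pth --local-dir pretrained_weights
```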
## Generate Your Animation!
To colorize the target lineart sequence with a specific character design, you can run the following command:
```
bash scripts_infer/anidoc_inference.sh
```
We provide some test cases in the `data_test` folder, and you can also try the model with your own data: edit the script `scripts_infer/anidoc_inference.sh` to change the lineart sequence and the corresponding character design, where `--control_image` refers to the lineart sequence and `--ref_image` refers to the character design.
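Inside `scripts_infer/anidoc_inference.sh`, the two arguments you will typically edit look roughly like the fragment below (the paths are hypothetical placeholders; leave the rest of the shipped command unchanged):
```
# Hypothetical paths, for illustration only -- replace them with your own data:
#   --control_image : the lineart (sketch) sequence to colorize
#   --ref_image     : the character design reference image
    --control_image data_test/your_lineart_sequence \
    --ref_image data_test/your_character_design.png
```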
## Citation:
If you find AniDoc useful in your research, please cite our paper:
```bibtex
@article{meng2024anidoc,
title={AniDoc: Animation Creation Made Easier},
author={Yihao Meng and Hao Ouyang and Hanlin Wang and Qiuyu Wang and Wen Wang and Ka Leong Cheng and Zhiheng Liu and Yujun Shen and Huamin Qu},
journal={arXiv preprint arXiv:2412.14173},
year={2024}
}
```