# Intelligent Grimm - Open-ended Visual Storytelling via Latent Diffusion Models (CVPR 2024)
This is the StorySalon dataset proposed in StoryGen.
For the open-source PDF data, you can directly download the frames, corresponding masks, descriptions and original story narratives.
For the data extracted from YouTube videos, we also provide the corresponding masks, descriptions, and original story narratives in this repository. However, you need to download the source videos yourself according to the metadata in `./Image_Inpainted/Video/metadata.json`, and then use the provided data processing pipeline to obtain the frames.
## Video Meta Data Preparation
We provide the metadata of our StorySalon dataset in `./Image_Inpainted/Video/metadata.json`. For each video, it includes the ID, name, URL, duration, and the keyframe list obtained after filtering.
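A minimal sketch for inspecting this metadata is shown below; the top-level layout and the exact key names are assumptions based on the fields listed above, so check `metadata.json` for the actual schema.
```python
import json

# Load the video metadata; the schema details below are assumptions,
# inspect metadata.json for the actual structure.
with open("./Image_Inpainted/Video/metadata.json") as f:
    metadata = json.load(f)

# Handle either a top-level list of records or a dict keyed by video id.
videos = metadata if isinstance(metadata, list) else list(metadata.values())
for video in videos[:5]:  # preview the first few records
    print(video)
```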
To download these videos, we recommend using [youtube-dl](https://github.com/yt-dlp/yt-dlp) via:
```
youtube-dl --write-auto-sub -o 'file\%(title)s.%(ext)s' -f 135 [url]
```
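To avoid downloading each video by hand, the command above can be driven from the metadata, as in the rough sketch below; the `url` key name is an assumption taken from the fields described in this README, and the output template simply mirrors the command above.
```python
import json
import subprocess

with open("./Image_Inpainted/Video/metadata.json") as f:
    metadata = json.load(f)

videos = metadata if isinstance(metadata, list) else list(metadata.values())
for video in videos:
    url = video["url"]  # assumed key name
    subprocess.run(
        ["youtube-dl", "--write-auto-sub",
         "-o", "file/%(title)s.%(ext)s",  # adjust the output path/template as needed
         "-f", "135", url],
        check=False,  # keep going if a single video is unavailable
    )
```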
The keyframes extracted with the following data processing pipeline (step 1) can be filtered according to the keyframe list provided in the metadata to avoid manual selection.
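As an illustration of that filtering step, the sketch below keeps only the extracted frames listed in the metadata; the `id`/`keyframes` key names and the frame directory layout are assumptions, not the pipeline's actual conventions.
```python
import json
from pathlib import Path

with open("./Image_Inpainted/Video/metadata.json") as f:
    metadata = json.load(f)

videos = metadata if isinstance(metadata, list) else list(metadata.values())
for video in videos:
    keep = set(video.get("keyframes", []))         # assumed key name
    frame_dir = Path("frames") / str(video["id"])  # assumed directory layout
    for frame in frame_dir.glob("*.jpg"):
        if frame.name not in keep:
            frame.unlink()  # drop frames not in the filtered keyframe list
```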
The corresponding masks, story-level description and visual description can be extracted with the following data processing pipeline or downloaded from [here](https://huggingface.co/datasets/haoningwu/StorySalon).
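If you prefer to download the released masks and descriptions rather than re-extract them, a minimal sketch using `huggingface_hub` is given below; the local directory is just an example.
```python
from huggingface_hub import snapshot_download

# Fetch the StorySalon dataset repo from the Hugging Face Hub;
# local_dir is an arbitrary example path.
snapshot_download(
    repo_id="haoningwu/StorySalon",
    repo_type="dataset",
    local_dir="./StorySalon",
)
```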
## Data Processing Pipeline
The data processing pipeline includes several necessary steps:
- Extract the keyframes and their corresponding subtitles;
- Detect and remove duplicate frames (an illustrative sketch of this step follows the list);
- Segment text, people, and headshots in the images, and remove frames that only contain real people;
- Inpaint the text, headshots and real hands in the frames according to the segmentation mask;
- (Optional) Use a caption model combined with the subtitles to generate a description for each image.
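As an example of the duplicate-removal step, the sketch below uses perceptual hashing; this is one reasonable implementation, not necessarily the method used in the StoryGen pipeline, and the frame directory and distance threshold are assumptions.
```python
from pathlib import Path

import imagehash  # pip install ImageHash
from PIL import Image

def dedup_frames(frame_dir: str, threshold: int = 5) -> None:
    """Remove frames that are near-duplicates of an earlier kept frame."""
    kept_hashes = []
    for frame in sorted(Path(frame_dir).glob("*.jpg")):
        h = imagehash.phash(Image.open(frame))
        if any(h - prev <= threshold for prev in kept_hashes):
            frame.unlink()  # near-duplicate: drop it
        else:
            kept_hashes.append(h)

dedup_frames("frames/some_video_id")  # hypothetical frame directory
```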
For a more detailed introduction to the data processing pipeline, please refer to [StoryGen](https://github.com/haoningwu3639/StoryGen) and our paper.
## Citation
If you use this dataset for your research or project, please cite:
```
@inproceedings{liu2024intelligent,
    title     = {Intelligent Grimm -- Open-ended Visual Storytelling via Latent Diffusion Models},
    author    = {Chang Liu and Haoning Wu and Yujie Zhong and Xiaoyun Zhang and Yanfeng Wang and Weidi Xie},
    booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2024},
}
```
## Contact
If you have any questions, please feel free to contact haoningwu3639@gmail.com or liuchang666@sjtu.edu.cn.