---
license: apache-2.0
task_categories:
  - text-to-video
  - image-to-video
language:
  - en
size_categories:
  - n<1K
---

Read in Chinese

Information

You can use this information as a reference for your data mixing ratio. For the same character, we used 46 training samples with detailed descriptions and 166 training samples with scene and general-information descriptions, together with a learning rate of 1e-3 and 1000-2000 training steps, to reproduce the character's appearance.

  • The length of each video is 6 seconds.
  • The frame rate of the videos is 12 frames per second.
  • The video resolution is w=720, h=960.
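
The figures above imply a fixed size for every clip; a minimal sketch (constants copied from the list, names chosen for illustration) of what each sample contains:

import numpy as np  # only used to illustrate the tensor shape

# Per-clip properties from the list above.
DURATION_S = 6        # seconds per video
FPS = 12              # frames per second
WIDTH, HEIGHT = 720, 960

frames_per_clip = DURATION_S * FPS   # 6 s x 12 fps = 72 frames
print(frames_per_clip)               # 72

# A decoded clip would therefore be a (frames, height, width, channels) array:
clip_shape = (frames_per_clip, HEIGHT, WIDTH, 3)
print(clip_shape)                    # (72, 960, 720, 3)

This is only a shape check, not part of the loading code below.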

Dataset Format

.
├── README.md
├── captions.txt
├── videos
└── videos.txt
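
Before loading, it can help to confirm a local copy matches this layout. A small sketch (the helper name and the demo directory are assumptions, not part of the dataset):

import os
import tempfile

# Entries the tree above says the dataset root should contain.
EXPECTED = ["README.md", "captions.txt", "videos", "videos.txt"]

def check_layout(dataset_dir):
    """Return the expected entries that are missing from dataset_dir."""
    return [name for name in EXPECTED
            if not os.path.exists(os.path.join(dataset_dir, name))]

# Demo on a scratch directory that mimics the layout above.
with tempfile.TemporaryDirectory() as d:
    for name in ["README.md", "captions.txt", "videos.txt"]:
        open(os.path.join(d, name), "w").close()
    os.makedirs(os.path.join(d, "videos"))
    print(check_layout(d))  # [] -> nothing missing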

Usage


import os
from datasets import Dataset, DatasetDict

dataset_dir = 'lora_dataset/Dance-VideoGeneration-Dataset'
captions_file = os.path.join(dataset_dir, 'captions.txt')
videos_file = os.path.join(dataset_dir, 'videos.txt')

with open(captions_file, 'r', encoding='utf-8') as f:
    captions = f.readlines()

with open(videos_file, 'r', encoding='utf-8') as f:
    video_paths = f.readlines()

captions = [caption.strip() for caption in captions]
video_paths = [video_path.strip() for video_path in video_paths]

assert len(captions) == len(video_paths), \
    f"line count mismatch: captions.txt has {len(captions)} lines, videos.txt has {len(video_paths)}"

data = {
    'text': captions,
    'video': video_paths
}

dataset = Dataset.from_dict(data)

dataset_dict = DatasetDict({
    'train': dataset
})
print(dataset_dict)

Here are a few key differences between the diffusers framework and our publicly released SAT fine-tuning code:

  • LoRA weights have a rank parameter, with the 2B transformer model defaulting to a rank of 128, and the 5B transformer model defaulting to a rank of 256.
  • The lora_scale is calculated as alpha / lora_r, where alpha is typically set to 1 during SAT training to ensure stability and prevent underflow.
  • Higher rank offers better expressiveness, but it also demands more memory and results in longer training times.
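
The scale relationship above can be sketched numerically (the function name is illustrative; the defaults are the ones stated in the notes):

# lora_scale = alpha / rank, as described above.
def lora_scale(alpha: float, rank: int) -> float:
    return alpha / rank

# SAT defaults per the notes: alpha = 1, rank 128 (2B) or 256 (5B).
print(lora_scale(1, 128))  # 0.0078125
print(lora_scale(1, 256))  # 0.00390625

With alpha fixed at 1, doubling the rank halves the effective scale of the LoRA update, which is part of why SAT training stays stable at higher ranks.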

License

This dataset is released under the Apache-2.0 license.