sayakpaul committed
Commit a9277c4
1 Parent(s): ba67793

End of training
README.md ADDED
@@ -0,0 +1,77 @@

---
base_model: THUDM/CogVideoX-5b
library_name: diffusers
license: other
tags:
- text-to-video
- diffusers-training
- diffusers
- lora
- cogvideox
- cogvideox-diffusers
- template:sd-lora
widget: []
---

<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->

# CogVideoX LoRA Finetune

<Gallery />

## Model description

This is a LoRA fine-tune of the CogVideoX model `THUDM/CogVideoX-5b`.

The model was trained using [CogVideoX Factory](https://github.com/a-r-r-o-w/cogvideox-factory), a repository of memory-optimized training scripts for the CogVideoX family of models built on [TorchAO](https://github.com/pytorch/ao) and [DeepSpeed](https://github.com/microsoft/DeepSpeed). The scripts were adapted from the [CogVideoX Diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/cogvideo/train_cogvideox_lora.py).

## Download model

[Download the LoRA](https://huggingface.co/sayakpaul/optimizer_adamw_steps_1000_lr-schedule_cosine_with_restarts_learning-rate_1e-4/tree/main) in the Files & Versions tab.

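The weights can also be fetched programmatically. A minimal sketch using `huggingface_hub` (the repository and file names below are the ones published in this repo; the rest is illustrative):

```py
from huggingface_hub import hf_hub_download

# Download the final LoRA weights from this repository into the local HF cache.
lora_path = hf_hub_download(
    repo_id="sayakpaul/optimizer_adamw_steps_1000_lr-schedule_cosine_with_restarts_learning-rate_1e-4",
    filename="pytorch_lora_weights.safetensors",
)
print(lora_path)  # local path to the downloaded .safetensors file
```
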
## Usage

Requires the [🧨 Diffusers library](https://github.com/huggingface/diffusers) to be installed.

```py
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("sayakpaul/optimizer_adamw_steps_1000_lr-schedule_cosine_with_restarts_learning-rate_1e-4", weight_name="pytorch_lora_weights.safetensors", adapter_name="cogvideox-lora")

# The LoRA scale is `lora_alpha / rank`, matching the values used during training.
# Here we assume `--lora_alpha` was 32 and `--rank` was 64.
# The scale can be lowered or raised relative to the training value to weaken or strengthen
# the effect of the LoRA, up to a tolerance beyond which the output may show no effect or degrade.
pipe.set_adapters(["cogvideox-lora"], [32 / 64])

# Replace the prompt below with one in the style used during training.
video = pipe("None", guidance_scale=6, use_dynamic_cfg=True).frames[0]
export_to_video(video, "output.mp4", fps=8)
```

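If GPU memory is tight at inference time, standard diffusers offloading and VAE tiling/slicing options can be enabled before generation. A minimal variant of the snippet above, assuming those generic diffusers features rather than anything specific to this LoRA (note that the explicit `.to("cuda")` is skipped because CPU offload manages device placement itself):

```py
import torch
from diffusers import CogVideoXPipeline

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipe.load_lora_weights(
    "sayakpaul/optimizer_adamw_steps_1000_lr-schedule_cosine_with_restarts_learning-rate_1e-4",
    weight_name="pytorch_lora_weights.safetensors",
    adapter_name="cogvideox-lora",
)

# Optional memory savings (trade inference speed for lower peak VRAM usage):
pipe.enable_model_cpu_offload()  # keep submodules on CPU, moving each to GPU only when used
pipe.vae.enable_tiling()         # decode latents in tiles
pipe.vae.enable_slicing()        # decode latents one slice of frames at a time
```
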
For more details, including weighting, merging, and fusing LoRAs, check the [documentation](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) on loading LoRAs in diffusers.

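For example, the LoRA can be fused into the base transformer weights so no adapter bookkeeping is needed at inference time. A minimal sketch using the standard diffusers LoRA helpers; the scale mirrors the `32 / 64` assumption from the Usage snippet:

```py
# Assumes `pipe` has the LoRA loaded, as in the Usage snippet above.
# Fuse the LoRA into the base weights at a chosen strength.
pipe.fuse_lora(lora_scale=32 / 64)

# ...run inference as usual...

# Undo the fusion to restore the original base weights (e.g., to try another scale).
pipe.unfuse_lora()
```
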
## License

Please adhere to the licensing terms as described [here](https://huggingface.co/THUDM/CogVideoX-5b/blob/main/LICENSE) and [here](https://huggingface.co/THUDM/CogVideoX-2b/blob/main/LICENSE).

## Intended uses & limitations

#### How to use

See the [Usage](#usage) section above for a complete example of loading this LoRA and generating a video.

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

As indicated by the repository name, the run used the AdamW optimizer, 1000 training steps, a cosine-with-restarts learning-rate schedule, and a learning rate of 1e-4. Intermediate checkpoints were saved at steps 500 and 1000.

[TODO: describe the data used to train the model]

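For reference, a minimal sketch of how such a learning-rate schedule could be constructed with diffusers' `get_scheduler`; this is an illustration of the named hyperparameters, not the actual training script, and the warmup value is an assumption:

```py
import torch
from diffusers.optimization import get_scheduler

# Reconstruct the optimizer/schedule implied by the repository name:
# AdamW, lr=1e-4, 1000 steps, cosine_with_restarts.
params = [torch.nn.Parameter(torch.zeros(1))]  # placeholder parameters for illustration
optimizer = torch.optim.AdamW(params, lr=1e-4)
lr_scheduler = get_scheduler(
    "cosine_with_restarts",
    optimizer=optimizer,
    num_warmup_steps=0,        # warmup steps unknown; assumed 0 here
    num_training_steps=1000,
)
```
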
checkpoint-1000/optimizer.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6fddf850b85fd9d5267a6be614ac77e6ba76865bfda1ded7b36b8c221eb3c40a
size 528765442

checkpoint-1000/pytorch_lora_weights.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a028a6a69d866b29538b4207a71ce7ed6f17991b04c32f6ca1f7e2777f2d2e81
size 264286184

checkpoint-1000/random_states_0.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5e56b9a93e95f00a8f005254a3fdedae9b28c328c0ed3c7b98f93c42c63d5a17
size 16036

checkpoint-1000/scheduler.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e0bc1fe143c02d21d27124d90adf3489a9e8b4b6d2a4986acb9972587ce8cb00
size 1000

checkpoint-500/optimizer.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa2894fb65376aa9505dc692715a69abbcb8695c552e42a5697a9140a6c32ddd
size 528765442

checkpoint-500/pytorch_lora_weights.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f68059da95e58ac29f9b059b426d43b8eaf8a9e32458fe1c2bc18ae09786652d
size 264286184

checkpoint-500/random_states_0.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0a5be9851df666641eeced0c9fe90edfdecc803a7f5a2955cb062f8c343a24b5
size 16100

checkpoint-500/scheduler.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:539cb5f724b2cf4616ffb148aca917f8c9eee2a610cf41ff7e83141c776ce27e
size 1000

pytorch_lora_weights.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a028a6a69d866b29538b4207a71ce7ed6f17991b04c32f6ca1f7e2777f2d2e81
size 264286184

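Note that the top-level `pytorch_lora_weights.safetensors` has the same SHA-256 as `checkpoint-1000/pytorch_lora_weights.safetensors`, so the published weights are the step-1000 checkpoint. If the intermediate step-500 weights are of interest instead, a minimal sketch of loading them (the local-directory loading shown is one possible approach; the adapter name is illustrative):

```py
import os

import torch
from diffusers import CogVideoXPipeline
from huggingface_hub import hf_hub_download

repo_id = "sayakpaul/optimizer_adamw_steps_1000_lr-schedule_cosine_with_restarts_learning-rate_1e-4"

# Fetch the intermediate (step-500) LoRA weights from the checkpoint-500 subfolder.
ckpt_file = hf_hub_download(repo_id=repo_id, subfolder="checkpoint-500", filename="pytorch_lora_weights.safetensors")

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16).to("cuda")
# Point load_lora_weights at the local directory containing the downloaded file.
pipe.load_lora_weights(os.path.dirname(ckpt_file), weight_name="pytorch_lora_weights.safetensors", adapter_name="cogvideox-lora-step-500")
```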