Error trying to duplicate

#3
by johnblues - opened

When trying to duplicate, I get the following error:
ValueError: Invalid model path: ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt

Has anyone successfully duplicated this Space?

Hi,

I have changed the path to ckpts. You can retry in one of three ways:

  • Synchronize your Space from this one
  • Replace tencent_HunyuanVideo with ckpts in app.py (see the sketch after this list)
  • Or duplicate your Space a second time
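
For the second option, the edit in app.py is roughly the one-line change sketched below. This is only a sketch: `create_demo` is the function already defined in app.py, and the exact line may differ in your copy.

```python
# In app.py, point the demo at the new checkpoint directory.
# Hypothetical before/after; the surrounding code stays unchanged.
# demo = create_demo("tencent_HunyuanVideo")   # old checkpoint folder
demo = create_demo("ckpts")                    # new checkpoint folder
```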

I duplicated the Space again and got this error:
ValueError: Invalid model path: ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt

So the same error.

I have added some logs. Do you see these lines in your logs?
initialize_model: ...
models_root exists: ...
Model initialized: ...

And also this one and the lines that follow it?
What is dit_weight: ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt
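
For reference, the prints behind those log lines look roughly like the sketch below; the names are taken from the quoted log lines themselves, and the real code in app.py and hyvideo/inference.py may differ.

```python
from pathlib import Path

# Hypothetical reconstruction of the added debug prints; the names come
# from the log lines quoted above, not from the actual source files.
models_root = Path("ckpts")
print(f"initialize_model: {models_root}")
print(f"models_root exists: {models_root}")

dit_weight = models_root / "hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt"
print(f"What is dit_weight: {dit_weight}")
print(f"dit_weight.exists(): {dit_weight.exists()}")
```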

PS: I have slightly changed the code; that may fix the Space.

This is the output from my latest attempt to duplicate. It is different from the previous error.

runtime error
Exit code: 1. Reason: A

mp_rank_00_model_states_fp8.pt: 90%|████████▉ | 11.9G/13.2G [00:09<00:01, 1.31GB/s]
mp_rank_00_model_states_fp8.pt: 100%|█████████▉| 13.2G/13.2G [00:10<00:00, 1.30GB/s]

mp_rank_00_model_states_fp8_map.pt: 0%| | 0.00/104k [00:00<?, ?B/s]
mp_rank_00_model_states_fp8_map.pt: 100%|██████████| 104k/104k [00:00<00:00, 39.7MB/s]

hunyuan-video-t2v-720p/vae/config.json: 0%| | 0.00/785 [00:00<?, ?B/s]
hunyuan-video-t2v-720p/vae/config.json: 100%|██████████| 785/785 [00:00<00:00, 8.40MB/s]

pytorch_model.pt: 0%| | 0.00/986M [00:00<?, ?B/s]

pytorch_model.pt: 100%|█████████▉| 986M/986M [00:01<00:00, 918MB/s]
pytorch_model.pt: 100%|█████████▉| 986M/986M [00:02<00:00, 460MB/s]
initialize_model: ckpts
models_root exists: ckpts
2025-01-03 07:23:31.750 | INFO | hyvideo.inference:from_pretrained:154 - Got text-to-video model root path: ckpts
2025-01-03 07:23:31.974 | INFO | hyvideo.inference:from_pretrained:189 - Building model...
What is dit_weight: ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt
dit_weight.exists(): False
dit_weight.is_file(): False
dit_weight.is_dir(): False
dit_weight.is_symlink(): False
Traceback (most recent call last):
File "/home/user/app/app.py", line 170, in
demo = create_demo("ckpts")
File "/home/user/app/app.py", line 94, in create_demo
model = initialize_model(model_path)
File "/home/user/app/app.py", line 40, in initialize_model
hunyuan_video_sampler = HunyuanVideoSampler.from_pretrained(models_root_path, args=args)
File "/home/user/app/hyvideo/inference.py", line 203, in from_pretrained
model = Inference.load_state_dict(args, model, pretrained_model_path)
File "/home/user/app/hyvideo/inference.py", line 314, in load_state_dict
print('dit_weight.is_junction(): ' + str(dit_weight.is_junction()))
AttributeError: 'PosixPath' object has no attribute 'is_junction'
Container logs:

===== Application Startup at 2025-01-03 06:20:03 =====

The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling transformers.utils.move_cache().

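
As an aside, the final AttributeError in that traceback comes from `Path.is_junction()`, which only exists in Python 3.12 and later, so on an older interpreter the debug print itself crashes before the real path problem is reported. A guarded version of that single print could look like this sketch:

```python
from pathlib import Path

# Guard the junction check so this debug print does not raise on Python
# versions before 3.12, where Path.is_junction() does not exist.
dit_weight = Path(
    "ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt"
)
if hasattr(dit_weight, "is_junction"):
    print("dit_weight.is_junction(): " + str(dit_weight.is_junction()))
else:
    print("dit_weight.is_junction(): unavailable before Python 3.12")
```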

OK, you can retry. (It now downloads a snapshot instead of fetching the files one by one.)
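
"Download with snapshot" presumably refers to fetching the whole model repository in one call rather than file by file, along the lines of the sketch below; `huggingface_hub.snapshot_download` is a real API, but the repo id and target directory here are assumptions, and the Space's actual code may differ.

```python
from huggingface_hub import snapshot_download

# Fetch the entire model repository into ckpts/ in a single call instead
# of downloading each weight file individually. Repo id is an assumption.
snapshot_download(repo_id="tencent/HunyuanVideo", local_dir="ckpts")
```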
