runtime error
Exit code: 1. Reason:

[download progress omitted — preprocessor_config.json, tokenizer_config.json, tokenizer.json, special_tokens_map.json, config.json, and model.safetensors.index.json all downloaded successfully]

Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.

`rope_scaling`'s original_max_position_embeddings field must be less than max_position_embeddings, got 8192 and max_position_embeddings=2048

Traceback (most recent call last):
  File "/home/user/app/app.py", line 11, in <module>
    model = LlavaForConditionalGeneration.from_pretrained(model_id).to("cuda")
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3944, in from_pretrained
    resolved_archive_file, sharded_metadata = get_checkpoint_shard_files(
  File "/usr/local/lib/python3.10/site-packages/transformers/utils/hub.py", line 1077, in get_checkpoint_shard_files
    shard_filenames = sorted(set(index["weight_map"].values()))
KeyError: 'weight_map'
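The `KeyError: 'weight_map'` at the bottom of the traceback means the downloaded `model.safetensors.index.json` did not contain the top-level `"weight_map"` key that transformers' shard-loading code indexes into. A minimal self-contained sketch of what that code expects — the index dict and shard filenames below are hypothetical stand-ins for a real repo's index file:

```python
import json

# A well-formed sharded-checkpoint index (hypothetical contents):
# "weight_map" maps each parameter name to the shard file that holds it.
index_text = """
{
  "metadata": {"total_size": 123456},
  "weight_map": {
    "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
    "lm_head.weight": "model-00002-of-00002.safetensors"
  }
}
"""
index = json.loads(index_text)

# transformers derives the list of shard files exactly like this
# (hub.py line 1077 in the traceback); if "weight_map" is absent,
# this line raises the KeyError seen above.
if "weight_map" not in index:
    raise KeyError("weight_map")  # what the broken index triggers
shard_filenames = sorted(set(index["weight_map"].values()))
print(shard_filenames)
```

If a locally cached or custom model hits this error, inspecting the index JSON for a missing or renamed `"weight_map"` key (e.g. a corrupted download or an index written by an incompatible tool) is a reasonable first diagnostic step.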