runtime error
config.json: 100%|██████████| 1.64k/1.64k [00:00<00:00, 7.83MB/s]
tf_model.h5: 100%|█████████▉| 1.63G/1.63G [00:08<00:00, 186MB/s]

All TF 2.0 model weights were used when initializing BartForConditionalGeneration.

Some weights of BartForConditionalGeneration were not initialized from the TF 2.0 model and are newly initialized: ['model.encoder.embed_tokens.weight', 'model.decoder.embed_tokens.weight', 'lm_head.weight', 'lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

generation_config.json: 100%|██████████| 358/358 [00:00<00:00, 3.06MB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 15, in <module>
    training_args = Seq2SeqTrainingArguments(
  File "<string>", line 126, in __init__
  File "/home/user/.local/lib/python3.10/site-packages/transformers/training_args.py", line 1493, in __post_init__
    and (self.device.type != "cuda")
  File "/home/user/.local/lib/python3.10/site-packages/transformers/training_args.py", line 1941, in device
    return self._setup_devices
  File "/home/user/.local/lib/python3.10/site-packages/transformers/utils/generic.py", line 54, in __get__
    cached = self.fget(obj)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/training_args.py", line 1841, in _setup_devices
    raise ImportError(
ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`
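The crash happens when app.py constructs Seq2SeqTrainingArguments: recent transformers releases require accelerate>=0.20.1 for the PyTorch Trainer backend, and the ImportError itself spells out the fix. Below is a minimal sketch of the failing setup and the fix, assuming a BART checkpoint loaded from its TensorFlow weights; the checkpoint id and the training arguments are placeholders, since the actual contents of app.py are not shown in the log.

# Fix suggested by the ImportError: make accelerate>=0.20.1 available in the
# Space's environment, e.g. by adding these lines to requirements.txt:
#
#     transformers[torch]
#     accelerate>=0.20.1
#
# (or by running `pip install accelerate -U` during the build).

from transformers import BartForConditionalGeneration, Seq2SeqTrainingArguments

# The log shows tf_model.h5 being downloaded, so the checkpoint is presumably
# loaded from TensorFlow weights; the "newly initialized" warning about the
# embed_tokens/lm_head weights is printed during that TF-to-PyTorch load.
model = BartForConditionalGeneration.from_pretrained(
    "facebook/bart-large-cnn",  # placeholder id; the log does not name the checkpoint
    from_tf=True,
)

# This is the constructor that raises the ImportError when accelerate is
# missing; the arguments below are illustrative, not the ones from app.py.
training_args = Seq2SeqTrainingArguments(
    output_dir="./outputs",
    per_device_train_batch_size=4,
    num_train_epochs=1,
)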