runtime error
Exit code: 1. Reason:
plit: 100%|██████████| 11596446/11596446 [12:42<00:00, 15209.92 examples/s]
Map:   0%|          | 0/10000 [00:00<?, ? examples/s]
Map:  40%|████      | 4000/10000 [00:01<00:02, 2893.25 examples/s]
Map:  70%|███████   | 7000/10000 [00:02<00:01, 2925.20 examples/s]
Map: 100%|██████████| 10000/10000 [00:03<00:00, 2878.10 examples/s]
/usr/local/lib/python3.10/site-packages/transformers/training_args.py:1609: FutureWarning: using `no_cuda` is deprecated and will be removed in version 5.0 of 🤗 Transformers. Use `use_cpu` instead
  warnings.warn(
No label_names provided for model class `PeftModel`. Since `PeftModel` hides base models input arguments, if label_names is not given, label_names can't be set automatically within `Trainer`. Note that empty label_names list will be used instead.
  0%|          | 0/10000 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 66, in <module>
    train_model()  # Start training
  File "/home/user/app/app.py", line 64, in train_model
    trainer.train()
  File "/usr/local/lib/python3.10/site-packages/transformers/trainer.py", line 2241, in train
    return inner_training_loop(
  File "/usr/local/lib/python3.10/site-packages/transformers/trainer.py", line 2548, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
  File "/usr/local/lib/python3.10/site-packages/transformers/trainer.py", line 3698, in training_step
    loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
  File "/usr/local/lib/python3.10/site-packages/transformers/trainer.py", line 3780, in compute_loss
    raise ValueError(
ValueError: The model did not return a loss from the inputs, only the following keys: logits,past_key_values. For reference, the inputs it received are input_ids,attention_mask.
  0%|          | 0/10000 [00:08<?, ?it/s]
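The ValueError at the bottom is the actual failure: the batches reaching the model contain only input_ids and attention_mask, so the model returns logits without computing a loss, and `Trainer` aborts. For causal-LM fine-tuning, the usual remedy is to make sure each example also carries a `labels` key; since the model shifts labels internally, copying `input_ids` is enough. A minimal sketch of such a tokenization step, assuming a `text` column in the dataset (the function name `tokenize_fn` and the `max_length` value are illustrative, not from the original app.py):

```python
def tokenize_fn(batch, tokenizer):
    # Tokenize the raw text; truncation keeps sequences within the model limit.
    enc = tokenizer(batch["text"], truncation=True, max_length=512)
    # The Trainer only gets a loss if a `labels` key is present. For causal LM
    # the model shifts labels internally, so a copy of input_ids is sufficient.
    enc["labels"] = [ids.copy() for ids in enc["input_ids"]]
    return enc
```

Alternatively, passing `DataCollatorForLanguageModeling(tokenizer, mlm=False)` as the `Trainer`'s `data_collator` fills in `labels` from `input_ids` at batch time, which avoids touching the map step at all.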