Latest commit: db69427 (verified): "Upload tokenizer.json"
Size        Last commit message
1.52 kB     initial commit
2.33 kB     Update README.md
23 Bytes    Upload 8 files
706 Bytes   Updated model, better eval loss, actually is 0.7166 on the same dataset
1.74 GB     Adding `safetensors` variant of this model (#1)
1.74 GB     Updated model, better eval loss, actually is 0.7166 on the same dataset
280 Bytes   Upload 8 files
8.59 MB     Upload tokenizer.json
379 Bytes   Upload 8 files
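The commit "Adding `safetensors` variant of this model (#1)" above swaps the 1.74 GB checkpoint to the safetensors container. A minimal pure-stdlib sketch of that container's layout (tensor names and values here are made up for illustration) shows why it is safer than a pickled checkpoint: the header is plain JSON and the payload is raw bytes, so reading it never executes code.

```python
import json
import struct

def write_safetensors(tensors: dict) -> bytes:
    """tensors maps name -> (dtype, shape, raw little-endian bytes)."""
    header, data, offset = {}, b"", 0
    for name, (dtype, shape, raw) in tensors.items():
        header[name] = {"dtype": dtype, "shape": shape,
                        "data_offsets": [offset, offset + len(raw)]}
        data += raw
        offset += len(raw)
    blob = json.dumps(header).encode("utf-8")
    # Layout: 8-byte little-endian header length, JSON header, raw tensor data.
    return struct.pack("<Q", len(blob)) + blob + data

def read_header(buf: bytes) -> dict:
    # Only the JSON header is parsed; the byte payload is never interpreted
    # as code, unlike a pickle stream.
    (n,) = struct.unpack_from("<Q", buf, 0)
    return json.loads(buf[8:8 + n])

# Hypothetical single-tensor file: one F32 vector of length 2.
blob = write_safetensors({
    "weight": ("F32", [2], struct.pack("<2f", 1.0, 2.0)),
})
print(read_header(blob)["weight"]["shape"])  # [2]
```

The real `safetensors` library implements exactly this framing (plus validation), which is why the `.safetensors` variant can be loaded without the pickle trust issues flagged for `training_args.bin` below.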
training_args.bin
4.03 kB     Updated model, better eval loss, actually is 0.7166 on the same dataset

Detected Pickle imports (8):
- "transformers.trainer_utils.IntervalStrategy"
- "transformers.trainer_utils.HubStrategy"
- "transformers.trainer_utils.SchedulerType"
- "torch.device"
- "transformers.training_args.TrainingArguments"
- "accelerate.utils.dataclasses.DistributedType"
- "transformers.training_args.OptimizerNames"
- "accelerate.state.PartialState"
7.56 MB     Upload 8 files