---
base_model: ai-forever/ruT5-large
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: skilltext
  results: []
---

# skilltext

This model is a fine-tuned version of [ai-forever/ruT5-large](https://huggingface.co/ai-forever/ruT5-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7356
- Rouge1: 35.3935
- Rouge2: 23.0684
- Rougel: 33.9649
- Rougelsum: 34.1374
- Gen Len: 18.75

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.6129  | 50   | 1.1474          | 27.4595 | 14.7139 | 24.8443 | 24.9381   | 18.75   |
| No log        | 3.2258  | 100  | 0.8913          | 28.9798 | 19.2443 | 28.6604 | 28.4206   | 18.75   |
| No log        | 4.8387  | 150  | 0.8489          | 35.3714 | 27.2748 | 34.0979 | 34.3062   | 18.8125 |
| No log        | 6.4516  | 200  | 0.7617          | 32.5432 | 19.8669 | 31.8295 | 32.0548   | 18.8125 |
| No log        | 8.0645  | 250  | 0.7298          | 28.0087 | 20.6388 | 27.8426 | 27.5811   | 18.8125 |
| No log        | 9.6774  | 300  | 0.7358          | 32.2062 | 23.9565 | 32.1268 | 31.9668   | 18.6875 |
| No log        | 11.2903 | 350  | 0.7243          | 31.7139 | 20.0396 | 30.9833 | 31.1632   | 18.75   |
| No log        | 12.9032 | 400  | 0.7151          | 31.9717 | 20.5954 | 31.2615 | 31.3954   | 18.875  |
| No log        | 14.5161 | 450  | 0.7560          | 35.3832 | 22.6591 | 34.7474 | 34.8568   | 18.8125 |
| 0.984         | 16.1290 | 500  | 0.7420          | 34.5807 | 22.2544 | 33.2432 | 33.2304   | 18.8125 |
| 0.984         | 17.7419 | 550  | 0.7399          | 34.7020 | 23.2371 | 34.1402 | 34.3177   | 18.75   |
| 0.984         | 19.3548 | 600  | 0.7356          | 35.3935 | 23.0684 | 33.9649 | 34.1374   | 18.75   |

### Framework versions

- Transformers 4.40.0
- PyTorch 2.2.2
- Datasets 2.12.0
- Tokenizers 0.19.1
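
## How to use

Since the card does not yet document usage, below is a minimal inference sketch. It assumes the checkpoint is published on the Hugging Face Hub and loadable with the standard seq2seq classes; the repository id `your-username/skilltext` and the input text are placeholders, not confirmed by this card.

```python
# Minimal inference sketch for a ruT5-large fine-tune.
# Assumptions: the model is on the Hub under a hypothetical repo id,
# and inputs are Russian text (the base model is ai-forever/ruT5-large).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "your-username/skilltext"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Пример входного текста."  # placeholder Russian input
inputs = tokenizer(text, return_tensors="pt")

# Evaluation Gen Len was ~18.75 tokens, so ~20 new tokens is a reasonable cap.
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The `max_new_tokens=20` value is only a guess derived from the reported Gen Len; adjust it to the target length of your task.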