---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
model-index:
- name: bengali_news_article_summarization_mt5
  results: []
---

# bengali_news_article_summarization_mt5

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2111

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 20
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 15

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 0.99  | 83   | 0.8963          |
| No log        | 2.0   | 167  | 0.3201          |
| 9.149         | 2.99  | 250  | 0.2583          |
| 9.149         | 3.99  | 334  | 0.2372          |
| 0.3009        | 5.0   | 418  | 0.2298          |
| 0.3009        | 5.99  | 501  | 0.2244          |
| 0.3009        | 7.0   | 585  | 0.2213          |
| 0.2524        | 8.0   | 669  | 0.2163          |
| 0.2524        | 8.99  | 752  | 0.2136          |
| 0.2306        | 10.0  | 836  | 0.2126          |
| 0.2306        | 10.99 | 919  | 0.2117          |
| 0.2176        | 11.99 | 1003 | 0.2120          |
| 0.2176        | 13.0  | 1087 | 0.2116          |
| 0.2176        | 13.99 | 1170 | 0.2111          |
| 0.2119        | 14.89 | 1245 | 0.2111          |

### Framework versions

- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
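
### Hyperparameters as code

A minimal sketch of how the hyperparameters listed above map onto `Seq2SeqTrainingArguments` in Transformers 4.39. The Adam betas and epsilon shown in the card are the library defaults, so they need no explicit arguments; `output_dir` and anything else not stated in the card are placeholder assumptions.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bengali_news_article_summarization_mt5",  # assumed, not stated in the card
    learning_rate=1e-3,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=8,  # 20 * 8 = 160 effective train batch size
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    num_train_epochs=15,
)
```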
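
## How to use

A minimal inference sketch. The exact Hub repo id is not stated in this card, so `model_id` below is a placeholder; whether the checkpoint expects a task prefix (e.g. `"summarize: "`) is also an assumption to verify, since the training data section above is empty.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id; replace with the actual Hub path of this checkpoint.
model_id = "bengali_news_article_summarization_mt5"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # a Bengali news article as plain text

# Truncate long articles to a fixed encoder budget and beam-search a summary.
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```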