# Text2Text Precedents Finetuned Model
This model is a fine-tuned version of `google/mt5-small` on the `shay681/Precedents` dataset.
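For quick experimentation, here is a minimal usage sketch, assuming the fine-tuned checkpoint is published on the Hub as `shay681/Text2Text_Precedents_finetuned_model`; the input string is a placeholder, not a real sample from the dataset:

```python
# Minimal inference sketch; the input text below is illustrative only.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "shay681/Text2Text_Precedents_finetuned_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Encode a placeholder input and generate a prediction.
inputs = tokenizer("Example precedent text goes here.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```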
## Training and evaluation data
| Dataset    | Split      | # Samples |
|------------|------------|-----------|
| Precedents | train      | 473,204   |
| Precedents | validation | 118,302   |
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch of how they map onto `Seq2SeqTrainingArguments` follows the list):
- evaluation_strategy: "epoch"
- learning_rate: 5e-5
- train_batch_size: 4
- eval_batch_size: 4
- num_train_epochs: 5
- weight_decay: 0.01
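As a reference, here is a minimal, hedged training sketch wiring the hyperparameters above into `Seq2SeqTrainingArguments` from Transformers 4.17. This is not the original training script: the dataset column names (`text`, `summary`), the tokenization lengths, and the output directory are assumptions for illustration.

```python
# Hedged training sketch; column names and max lengths are assumptions.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

dataset = load_dataset("shay681/Precedents")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")

def preprocess(batch):
    # Hypothetical column names; adjust to the actual dataset schema.
    model_inputs = tokenizer(batch["text"], max_length=512, truncation=True)
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True)

# Hyperparameters taken directly from the list above.
args = Seq2SeqTrainingArguments(
    output_dir="text2text_precedents",
    evaluation_strategy="epoch",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=5,
    weight_decay=0.01,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```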
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
## Results
| Metric   | Value |
|----------|-------|
| Accuracy | 0.075 |
| F1       | 0.024 |
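The card does not show the evaluation script, so as a reference only, here is a minimal sketch of one way such metrics could be computed over decoded model outputs; treating predictions as exact-match strings scored with scikit-learn is an assumption, not the author's confirmed method:

```python
# Hedged sketch: exact-match accuracy and macro F1 over decoded outputs.
# The predictions and references below are hypothetical placeholders.
from sklearn.metrics import accuracy_score, f1_score

predictions = ["appeal granted", "appeal denied"]  # hypothetical decoded outputs
references = ["appeal granted", "appeal granted"]  # hypothetical gold targets

print("Accuracy:", accuracy_score(references, predictions))
print("F1 (macro):", f1_score(references, predictions, average="macro"))
```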
## About Me
Created by Shay Doner. This is my final project for my M.Sc. in Intelligent Systems at Afeka College in Tel Aviv. For collaboration inquiries, please contact me at shay681@gmail.com.