Note: these are not self-reported values for the leaderboard; I have no clue why it's broken. Check the pull request.
Use this model for summarization without adding "summarize: " to the start of the input string.
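A minimal inference sketch with the transformers library; the example dialogue and generation settings are illustrative, not part of the original card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Samuel-Fipps/t5-efficient-large-nl36_fine_tune_sum_V2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Note: the input is passed as-is, without a "summarize: " prefix.
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)

inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=100, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```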
Trained on the SAMSum train split.
Parameters for training:
```python
import torch
from transformers import get_scheduler

# "model" is the T5 model being fine-tuned (e.g. loaded with AutoModelForSeq2SeqLM).
# Parameters matching the names below are grouped separately from the rest;
# note that both groups use weight_decay=0.0 in this configuration.
no_decay = ["bias", "LayerNorm.weight", "layer_norm.weight"]
optimizer_grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
    {
        "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
]

lr = 0.00005
optimizer = torch.optim.RAdam(optimizer_grouped_parameters, lr=lr)

lr_scheduler = get_scheduler(
    name="linear",
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=50005,
)
```
Training ran for only 10K steps with a batch size of 10.
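A sketch of how the pieces above could fit together in a training loop; `train_dataloader` and the use of the model's built-in seq2seq loss are assumptions, not the exact script that was used:

```python
# Hypothetical loop over a DataLoader of tokenized SAMSum batches (batch size 10);
# "model", "optimizer", and "lr_scheduler" are the objects defined above.
model.train()
for step, batch in enumerate(train_dataloader):
    outputs = model(**batch)   # seq2seq forward pass; returns a loss when "labels" are in the batch
    outputs.loss.backward()
    optimizer.step()
    lr_scheduler.step()
    optimizer.zero_grad()
    if step + 1 >= 10_000:     # training stopped after 10K steps
        break
```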
If you want more info, feel free to message me or email me at: samuelfipps@gmail.com
Evaluation results

| Metric | Dataset (test split) | Value (self-reported) |
|---|---|---|
| ROUGE-1 | samsum | 50.505 |
| ROUGE-2 | samsum | 25.647 |
| ROUGE-L | samsum | 41.754 |
| ROUGE-LSUM | samsum | 46.206 |
| loss | samsum | 1.516 |
| gen_len | samsum | 24.034 |
| ROUGE-1 | cnn_dailymail | 34.406 |
| ROUGE-2 | cnn_dailymail | 14.127 |
| ROUGE-L | cnn_dailymail | 24.335 |
| ROUGE-LSUM | cnn_dailymail | 31.658 |
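A rough sketch of how ROUGE scores like these could be computed with the datasets and evaluate libraries; the generation settings here are assumptions, not the exact evaluation setup behind the numbers above:

```python
import torch
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Samuel-Fipps/t5-efficient-large-nl36_fine_tune_sum_V2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
model.eval()

test_set = load_dataset("samsum", split="test")  # may require the py7zr package
rouge = evaluate.load("rouge")

predictions = []
for example in test_set:
    inputs = tokenizer(example["dialogue"], return_tensors="pt", truncation=True)
    with torch.no_grad():
        ids = model.generate(**inputs, max_length=100)
    predictions.append(tokenizer.decode(ids[0], skip_special_tokens=True))

scores = rouge.compute(predictions=predictions, references=test_set["summary"])
print({k: round(v * 100, 3) for k, v in scores.items()})  # ROUGE-1/2/L/Lsum as percentages
```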