
KGQA-1

This model is a fine-tuned version of google/flan-t5-large on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 2.0784
  • Rouge1: 72.8963
  • Rouge2: 60.8929
  • RougeL: 69.6657
  • RougeLsum: 72.9329
  • Gen Len: 4.8819
  • F1: 0.7593
  • Recall: 0.7681
  • Precision: 0.7508
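
For reference, here is a minimal inference sketch using the Transformers seq2seq API. It assumes the checkpoint is published under the repo id wepolyu/KGQA-1; the example question is hypothetical.

```python
# Minimal inference sketch for this checkpoint (a seq2seq fine-tune of
# google/flan-t5-large). The example question is hypothetical.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("wepolyu/KGQA-1")
model = AutoModelForSeq2SeqLM.from_pretrained("wepolyu/KGQA-1")

question = "What is the capital of France?"  # hypothetical input
inputs = tokenizer(question, return_tensors="pt")
# Gen Len above averages ~5 tokens, so a small generation budget suffices.
output_ids = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```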

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 0.001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine_with_restarts
  • num_epochs: 8
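
As a rough guide, these hyperparameters map onto a Seq2SeqTrainingArguments configuration as sketched below. This is a reconstruction under the assumption that the Hugging Face Seq2SeqTrainer was used, not the original training script; the output directory is a placeholder.

```python
# Hedged reconstruction of the training configuration from the list above;
# this is not the original training script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="kgqa-1",  # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine_with_restarts",
    num_train_epochs=8,
    predict_with_generate=True,  # generate text at eval time so ROUGE can be scored
)
# The Adam betas=(0.9, 0.999) and epsilon=1e-08 listed above are the
# Trainer's optimizer defaults, so they need no explicit arguments here.
```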

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len | F1     | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:------:|:------:|:---------:|
| 2.8587        | 1.0   | 598  | 2.2931          | 49.5203 | 26.8249 | 43.3252 | 49.5005   | 4.6943  | 0.5633 | 0.5546 | 0.5723    |
| 1.7685        | 2.0   | 1196 | 1.6857          | 52.6345 | 31.7615 | 46.5617 | 52.5831   | 4.7965  | 0.619  | 0.6295 | 0.6088    |
| 0.8979        | 3.0   | 1794 | 1.3095          | 65.3839 | 49.1969 | 60.9907 | 65.2835   | 4.8928  | 0.6898 | 0.6806 | 0.6992    |
| 0.4881        | 4.0   | 2392 | 1.4524          | 68.0576 | 53.7819 | 64.3964 | 67.9986   | 4.835   | 0.7239 | 0.7106 | 0.7378    |
| 1.2094        | 5.0   | 2990 | 3.2070          | 18.934  | 4.1916  | 14.7003 | 18.9198   | 6.0159  | 0.0005 | 0.001  | 0.0003    |
| 0.7018        | 6.0   | 3588 | 1.3772          | 68.1255 | 54.2242 | 64.3339 | 68.1513   | 4.7588  | 0.7125 | 0.69   | 0.7366    |
| 0.3275        | 7.0   | 4186 | 1.5585          | 72.2516 | 60.2665 | 68.9117 | 72.2482   | 4.9246  | 0.7643 | 0.7827 | 0.7468    |
| 0.112         | 8.0   | 4784 | 2.0784          | 72.8963 | 60.8929 | 69.6657 | 72.9329   | 4.8819  | 0.7593 | 0.7681 | 0.7508    |
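
For context on the metric columns, the sketch below shows how ROUGE scores of this kind are typically computed with the Hugging Face evaluate library. The predictions and references are hypothetical, and it is assumed that the card's ROUGE values are the usual scores scaled by 100.

```python
# Sketch of a ROUGE computation in the style of the table above, using the
# `evaluate` library. Predictions and references here are hypothetical.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["Paris"]  # hypothetical model outputs
references = ["Paris"]   # hypothetical gold answers
scores = rouge.compute(predictions=predictions, references=references)
# `compute` returns rouge1/rouge2/rougeL/rougeLsum as floats in [0, 1];
# the table reports them scaled by 100.
print({name: round(value * 100, 4) for name, value in scores.items()})
```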

Framework versions

  • Transformers 4.43.3
  • Pytorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
