QA2D-t5-small

This model is a fine-tuned version of t5-small on QA2D. It achieves the following results on the evaluation set:

  • Loss: 0.3236
  • Rouge1: 89.8753
  • Rouge2: 81.8104
  • RougeL: 85.4253
  • RougeLsum: 85.4236
  • Bleu: 72.1080
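
For reference, scores like these can be computed with the Hugging Face evaluate library. A minimal sketch (not the exact evaluation script, which lives in the repo linked below; the exact shape of compute's output varies across evaluate versions):

import evaluate

rouge = evaluate.load('rouge')
bleu = evaluate.load('sacrebleu')

predictions = ['carmen sandiego is in abruzzo']  # model outputs
references = ['carmen sandiego is in abruzzo']   # gold declarative statements

# ROUGE takes flat reference strings; sacreBLEU expects one list of references per prediction.
print(rouge.compute(predictions=predictions, references=references))
print(bleu.compute(predictions=predictions, references=[[r] for r in references]))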

See https://wandb.ai/domenicrosati/huggingface/runs/n1yallpe for training and evaluation stats, and https://github.com/domenicrosati/qa2d-models for the code!

Model description

A t5-small model that converts question-answer pairs into declarative statements.

Due to the way it was trained, the input should be all lowercase with punctuation removed. Use "." as the separator between the question and the answer.

Input: "where in the world is carmen. abruzzo"
Output: "carmen is in abruzzo"

That said, punctuated and uppercase input also works.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained('domenicrosati/QA2D-t5-small')
model = AutoModelForSeq2SeqLM.from_pretrained('domenicrosati/QA2D-t5-small')

# Lowercased, punctuation-free question and answer, joined with the "." separator.
question = "where in the world is carmen sandiego"
answer = "she is in abruzzo"
SEP = ". "

prompt = f'{question}{SEP}{answer}'
input_ids = tokenizer(prompt, return_tensors='pt').input_ids
output_ids = model.generate(input_ids)
responses = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
# ['carmen sandiego is in abruzzo']
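
Since the model expects lowercased text with punctuation stripped, arbitrary inputs may need normalizing first. A minimal sketch, assuming simple ASCII punctuation (the normalize helper is hypothetical, not part of this repo):

import string

def normalize(text: str) -> str:
    # Hypothetical helper: lowercase and drop punctuation to match the training format.
    return text.lower().translate(str.maketrans('', '', string.punctuation))

prompt = f"{normalize('Where in the world is Carmen Sandiego?')}{SEP}{normalize('She is in Abruzzo.')}"
# 'where in the world is carmen sandiego. she is in abruzzo'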

Intended uses & limitations

Converts question-answer pairs into declarative statements. As noted above, it works best on lowercased input with punctuation removed; a batched sketch for converting many pairs at once follows.
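
For converting many pairs at once, batched generation works as usual with the tokenizer and model loaded above (a sketch; the padding settings and the second example pair are assumptions):

# Batch several question-answer pairs into one generate() call.
pairs = [
    ("where in the world is carmen sandiego", "she is in abruzzo"),
    ("what is the capital of italy", "rome"),  # hypothetical example pair
]
prompts = [f"{q}. {a}" for q, a in pairs]
inputs = tokenizer(prompts, return_tensors='pt', padding=True)
output_ids = model.generate(**inputs)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))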

Training and evaluation data

Uses the QA2D dataset, which pairs questions and answers with equivalent declarative statements.

See https://github.com/domenicrosati/qa2d-models
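
The data can be loaded with the datasets library. A sketch, assuming the dataset is published on the Hub under the same namespace as this model (check the repo above for the exact identifier and preprocessing):

from datasets import load_dataset

# Assumed dataset id; see the qa2d-models repo for the actual data preparation.
dataset = load_dataset('domenicrosati/QA2D')
print(dataset)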

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a rough Seq2SeqTrainingArguments sketch follows the list):

  • learning_rate: 5.6e-05
  • train_batch_size: 12
  • eval_batch_size: 12
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
  • mixed_precision_training: Native AMP
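
These settings map onto transformers Seq2SeqTrainingArguments roughly as follows (a hedged sketch, not the exact training script; the output directory and evaluation strategy are assumptions, and the Adam betas/epsilon above are the optimizer defaults):

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir='qa2d-t5-small',          # assumed path
    learning_rate=5.6e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    lr_scheduler_type='linear',
    num_train_epochs=20,
    fp16=True,                           # mixed precision via native AMP
    evaluation_strategy='epoch',         # assumed; the table below reports per-epoch eval
)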

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Bleu |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.3177 | 1.0 | 5060 | 0.3144 | 89.6379 | 81.3168 | 85.2036 | 85.1904 | 71.4255 |
| 0.2479 | 2.0 | 10120 | 0.3035 | 89.7816 | 81.6556 | 85.3541 | 85.3406 | 71.7248 |
| 0.2268 | 3.0 | 15180 | 0.3015 | 89.8287 | 81.698 | 85.3434 | 85.3387 | 71.8344 |
| 0.2111 | 4.0 | 20240 | 0.3014 | 89.8082 | 81.7192 | 85.4094 | 85.406 | 71.9172 |
| 0.1991 | 5.0 | 25300 | 0.3023 | 89.8776 | 81.7607 | 85.3912 | 85.3842 | 71.9417 |
| 0.1886 | 6.0 | 30360 | 0.3012 | 89.901 | 81.7614 | 85.3345 | 85.3315 | 72.0218 |
| 0.1803 | 7.0 | 35420 | 0.3010 | 89.8776 | 81.8189 | 85.4154 | 85.4097 | 72.0533 |
| 0.1724 | 8.0 | 40480 | 0.3041 | 89.9168 | 81.8663 | 85.4457 | 85.4447 | 72.1470 |
| 0.1654 | 9.0 | 45540 | 0.3076 | 89.8901 | 81.8536 | 85.4857 | 85.4863 | 72.0830 |
| 0.1601 | 10.0 | 50600 | 0.3083 | 89.9186 | 81.881 | 85.4653 | 85.4594 | 72.1048 |
| 0.1546 | 11.0 | 55660 | 0.3136 | 89.8958 | 81.8533 | 85.4217 | 85.4238 | 72.0752 |
| 0.1502 | 12.0 | 60720 | 0.3138 | 89.903 | 81.8604 | 85.4301 | 85.4267 | 72.1373 |
| 0.1461 | 13.0 | 65780 | 0.3140 | 89.8867 | 81.7945 | 85.3698 | 85.3662 | 72.0718 |
| 0.1423 | 14.0 | 70840 | 0.3171 | 89.8985 | 81.8221 | 85.4348 | 85.4331 | 72.1168 |
| 0.1392 | 15.0 | 75900 | 0.3186 | 89.8938 | 81.8246 | 85.402 | 85.3991 | 72.0858 |
| 0.1366 | 16.0 | 80960 | 0.3208 | 89.859 | 81.8133 | 85.4194 | 85.4182 | 72.1014 |
| 0.1344 | 17.0 | 86020 | 0.3222 | 89.8909 | 81.828 | 85.4392 | 85.435 | 72.1380 |
| 0.1324 | 18.0 | 91080 | 0.3226 | 89.8906 | 81.8351 | 85.4506 | 85.4441 | 72.1622 |
| 0.1309 | 19.0 | 96140 | 0.3231 | 89.8925 | 81.8369 | 85.4375 | 85.4366 | 72.1552 |
| 0.1305 | 20.0 | 101200 | 0.3236 | 89.8753 | 81.8104 | 85.4253 | 85.4236 | 72.1080 |

Framework versions

  • Transformers 4.18.0
  • Pytorch 1.11.0+cu113
  • Datasets 2.1.0
  • Tokenizers 0.12.1