
# QA2D-t5-base

This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the [QA2D](https://huggingface.co/datasets/domenicrosati/QA2D) dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2563
  • Rouge1: 90.1064
  • Rouge2: 82.378
  • Rougel: 85.7963
  • Rougelsum: 85.8004
  • Bleu: 72.7328

See https://wandb.ai/domenicrosati/huggingface/runs/nqf7gsws for training and eval stats, and https://github.com/domenicrosati/qa2d-models for the code.

## Model description

A t5-base model fine-tuned to convert question-answer pairs into statements.

Due to the way it was trained, the input should be all lower case with punctuation removed. Use `. ` as the separator between question and answer.

Input: `where in the world is carmen. abruzzo`
Output: `carmen is in abruzzo`

Though punctuation and upper case also work.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained('domenicrosati/QA2D-t5-base')
model = AutoModelForSeq2SeqLM.from_pretrained('domenicrosati/QA2D-t5-base')

question = "where in the world is carmen sandiego"
answer = "she is in abruzzo"
SEP = ". "

prompt = f'{question}{SEP}{answer}'
input_ids = tokenizer(prompt, return_tensors='pt').input_ids
output_ids = model.generate(input_ids)
responses = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
# ['carmen sandiego is in abruzzo']
```
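The lowercasing and punctuation stripping described above can be wrapped in a small pre-processing helper. This is a hypothetical convenience function (not part of the model repo), using only the standard library:

```python
import string

def to_prompt(question: str, answer: str, sep: str = ". ") -> str:
    """Lowercase a question/answer pair, strip punctuation, and join with the separator."""
    strip_punct = str.maketrans("", "", string.punctuation)

    def normalize(text: str) -> str:
        return text.lower().translate(strip_punct).strip()

    return f"{normalize(question)}{sep}{normalize(answer)}"

print(to_prompt("Where in the world is Carmen?", "Abruzzo"))
# where in the world is carmen. abruzzo
```

The resulting string can be passed directly to the tokenizer as the `prompt` in the example above.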

## Intended uses & limitations

To convert question-answer pairs into declarative statements. Note the input formatting limitations described above (lower case, punctuation removed).

## Training and evaluation data

Uses [QA2D](https://huggingface.co/datasets/domenicrosati/QA2D).

See [https://github.com/domenicrosati/qa2d-models](https://github.com/domenicrosati/qa2d-models)


## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
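The hyperparameters above correspond roughly to a `Seq2SeqTrainingArguments` configuration like the following. This is a sketch, not the exact training script; argument names follow the Transformers 4.18 API, and the output path is illustrative:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="qa2d-t5-base",       # illustrative path
    learning_rate=5.6e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    fp16=True,                       # native AMP mixed precision
    predict_with_generate=True,      # generate during eval for ROUGE/BLEU
)
```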

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Bleu    |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.2304        | 1.0   | 5060  | 0.2512          | 90.044  | 82.2922 | 85.8021 | 85.8056   | 72.6252 |
| 0.1746        | 2.0   | 10120 | 0.2525          | 90.097  | 82.3468 | 85.8191 | 85.8197   | 72.7480 |
| 0.1512        | 3.0   | 15180 | 0.2563          | 90.1064 | 82.378  | 85.7963 | 85.8004   | 72.7328 |


### Framework versions

- Transformers 4.18.0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.1.0
- Tokenizers 0.12.1