|
--- |
|
tags: |
|
- Question Answering |
|
metrics: |
|
- rouge |
|
model-index: |
|
- name: question-answering-generative-t5-v1-base-s-q-c |
|
results: [] |
|
--- |
|
|
|
# Question Answering Generative |
|
This model is intended for question answering: given a question and a context, it attempts to infer the answer text.<br>
|
The model is generative (t5-v1-base), fine-tuned from [question-generation-auto-hints-t5-v1-base-s-q-c](https://huggingface.co/consciousAI/question-generation-auto-hints-t5-v1-base-s-q-c), reaching a validation **Loss** of 0.6751 and **RougeL** of 0.8022.
|
|
|
[Live Demo: Question Answering Encoders vs Generative](https://huggingface.co/spaces/consciousAI/question_answering) |
|
|
|
[Encoder based Question Answering V1](https://huggingface.co/consciousAI/question-answering-roberta-base-s/) |
|
<br>[Encoder based Question Answering V2](https://huggingface.co/consciousAI/question-answering-roberta-base-s-v2/) |
|
|
|
Example code: |
|
|
|
``` |
|
import torch
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer
)

def _generate(query, context, model, device):
    # Load the fine-tuned model and its tokenizer.
    FT_MODEL = AutoModelForSeq2SeqLM.from_pretrained(model).to(device)
    FT_MODEL_TOKENIZER = AutoTokenizer.from_pretrained(model)

    # Prompt format the model was fine-tuned on.
    input_text = "question: " + query + "</s> question_context: " + context

    # Padded/truncated encoding fed to the model (1024-token input window).
    input_tokenized = FT_MODEL_TOKENIZER.encode(input_text, return_tensors='pt', truncation=True, padding='max_length', max_length=1024).to(device)
    # Unpadded encoding, useful only to check how many tokens the prompt actually needs.
    _tok_count_assessment = FT_MODEL_TOKENIZER.encode(input_text, return_tensors='pt', truncation=True).to(device)

    # Generate the answer with beam search.
    summary_ids = FT_MODEL.generate(input_tokenized,
                                    max_length=30,
                                    min_length=5,
                                    num_beams=2,
                                    early_stopping=True,
                                    )
    output = [FT_MODEL_TOKENIZER.decode(token_ids, clean_up_tokenization_spaces=True, skip_special_tokens=True) for token_ids in summary_ids]

    return str(output[0])

device = "cuda" if torch.cuda.is_available() else "cpu"

# Illustrative inputs; replace with your own question and context.
query = "Who discovered penicillin?"
context = "Penicillin was discovered in 1928 by Alexander Fleming."
print(_generate(query, context, model="consciousAI/t5-v1-base-s-q-c-multi-task-qgen-v2", device=device))
|
``` |
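The same model can also be called through the high-level `text2text-generation` pipeline. The snippet below is a minimal sketch, not part of the original card: it assumes the same model id and prompt format used in `_generate` above, with placeholder question/context strings.

```
from transformers import pipeline

# Sketch: text2text-generation pipeline with the same model and prompt format as above.
qa = pipeline("text2text-generation", model="consciousAI/t5-v1-base-s-q-c-multi-task-qgen-v2")

query = "Who discovered penicillin?"                                   # illustrative question
context = "Penicillin was discovered in 1928 by Alexander Fleming."   # illustrative context

result = qa("question: " + query + "</s> question_context: " + context,
            max_length=30, num_beams=2, early_stopping=True)
print(result[0]["generated_text"])
```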
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training (a configuration sketch follows the list):
|
- learning_rate: 0.0003 |
|
- train_batch_size: 3 |
|
- eval_batch_size: 3 |
|
- seed: 42 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- num_epochs: 5 |
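As a rough illustration of how these settings map onto the `transformers` Trainer API, here is a minimal sketch; the `output_dir` and the model/dataset wiring are assumptions, not taken from the original training script.

```
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="./qa-generative-t5",   # assumed path, not from the card
    learning_rate=3e-4,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=3,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    predict_with_generate=True,        # needed to score generated answers with ROUGE
)
```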
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step  | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum |
|
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:| |
|
| 0.5479 | 1.0 | 14600 | 0.5104 | 0.7672 | 0.4898 | 0.7666 | 0.7666 | |
|
| 0.3647 | 2.0 | 29200 | 0.5180 | 0.7862 | 0.4995 | 0.7855 | 0.7858 | |
|
| 0.2458 | 3.0 | 43800 | 0.5302 | 0.7938 | 0.5039 | 0.7932 | 0.7935 | |
|
| 0.1532 | 4.0 | 58400 | 0.6024 | 0.7989 | 0.514 | 0.7984 | 0.7984 | |
|
| 0.0911 | 5.0 | 73000 | 0.6751 | 0.8028 | 0.5168 | 0.8022 | 0.8022 | |
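The Rouge columns above are ROUGE F-scores between generated and reference answers. The snippet below is a hedged sketch of how such scores can be reproduced with the `evaluate` library; the predictions and references shown are placeholders, not the actual validation set.

```
import evaluate

rouge = evaluate.load("rouge")

# Placeholder strings; in practice these would be the model's generated answers
# and the gold answers from the validation split.
predictions = ["alexander fleming"]
references = ["Alexander Fleming"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores["rouge1"], scores["rouge2"], scores["rougeL"], scores["rougeLsum"])
```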
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.23.0.dev0 |
|
- Pytorch 1.12.1+cu113 |
|
- Datasets 2.5.2 |
|
- Tokenizers 0.13.0 |
|
|