Task: Question Answering
Model: BERT
Lang: IT
Model description
This is a BERT [1] model for the Italian language, fine-tuned for Extractive Question Answering on the SQuAD-IT dataset [2].
If you are looking for a more accurate (but slightly heavier) model, you can refer to: https://huggingface.co/osiria/deberta-italian-question-answering
If you are looking for an uncased model, you can refer to: https://huggingface.co/osiria/bert-italian-uncased-question-answering
Update: version 2.0
The 2.0 version further improves performance by exploiting a two-phase fine-tuning strategy: the model is first fine-tuned on the English SQuAD v2 (1 epoch, 20% warmup ratio, maximum learning rate of 3e-5), then further fine-tuned on the Italian SQuAD-IT (2 epochs, no warmup, initial learning rate of 3e-5).
To maximize the benefits of this multilingual procedure, bert-base-multilingual-cased is used as the pre-trained model. Once the double fine-tuning is completed, the embedding layer is compressed to the vocabulary of bert-base-italian-cased in order to obtain a monolingual model size.
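As a rough illustration, the two-phase schedule above can be expressed as a pair of Hugging Face TrainingArguments, as in the hedged sketch below; the batch size and any setting not listed in this card are assumptions, and this is not the original training script.

```python
# Hypothetical sketch of the two-phase schedule described above.
# Settings not listed in the card (e.g. batch size) are assumptions.
from transformers import TrainingArguments

# Phase 1: English SQuAD v2 — 1 epoch, 20% warmup ratio, peak learning rate 3e-5
phase1_args = TrainingArguments(
    output_dir="qa-phase1-squad-v2-en",
    num_train_epochs=1,
    warmup_ratio=0.2,
    learning_rate=3e-5,
    lr_scheduler_type="linear",
    per_device_train_batch_size=16,  # assumption: not stated in the card
)

# Phase 2: Italian SQuAD-IT — 2 epochs, no warmup, initial learning rate 3e-5,
# decaying linearly (see "Training and Performances" below)
phase2_args = TrainingArguments(
    output_dir="qa-phase2-squad-it",
    num_train_epochs=2,
    warmup_ratio=0.0,
    learning_rate=3e-5,
    lr_scheduler_type="linear",
    per_device_train_batch_size=16,  # assumption
)

# Each phase would then be run with a Trainer over the corresponding
# preprocessed question answering dataset, starting phase 2 from the
# phase 1 checkpoint.
```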
Training and Performances
The model is trained to perform question answering, given a context and a question (under the assumption that the context contains the answer to the question). It has been fine-tuned for Extractive Question Answering on the SQuAD-IT dataset for 2 epochs, with a linearly decaying learning rate starting from 3e-5, a maximum sequence length of 384, and a document stride of 128.
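To make the role of the maximum sequence length and document stride concrete, the hedged snippet below shows how a long context is split into overlapping windows at preprocessing time; the checkpoint used and the artificially repeated context are only for illustration, not the original preprocessing code.

```python
# Illustrative sketch of how max_length=384 and a stride of 128 split a long
# context into overlapping windows during preprocessing.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("osiria/bert-italian-cased-question-answering")

question = "Dove è nato Manzoni?"
context = "Alessandro Manzoni è nato a Milano nel 1785. " * 50  # artificially long context

encodings = tokenizer(
    question,
    context,
    max_length=384,            # maximum sequence length used for fine-tuning
    stride=128,                # document stride: overlap between consecutive windows
    truncation="only_second",  # only the context is truncated, never the question
    return_overflowing_tokens=True,
    return_offsets_mapping=True,
)

# One entry per window; consecutive windows share a 128-token overlap of the
# context, so an answer near a window boundary is not lost.
print(len(encodings["input_ids"]), "windows of at most 384 tokens")
```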
The dataset includes 54,159 training instances and 7,609 test instances.
Performance on the test set is reported in the following table:
| EM    | F1    |
|-------|-------|
| 65.72 | 77.06 |
Testing notebook: https://huggingface.co/osiria/bert-italian-cased-question-answering/blob/main/osiria_bert_italian_cased_qa_evaluation.ipynb
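For reference, EM and F1 on SQuAD-style data can be computed with the squad metric from the evaluate library, as in the minimal sketch below; the single prediction and reference are toy examples and do not reproduce the notebook above.

```python
# Minimal sketch of computing Exact Match and F1 with the SQuAD metric from
# the evaluate library; the prediction and reference below are toy examples,
# not actual test-set outputs.
import evaluate

squad_metric = evaluate.load("squad")

predictions = [
    {"id": "q1", "prediction_text": "Milano"},
]
references = [
    {"id": "q1", "answers": {"text": ["Milano"], "answer_start": [28]}},
]

results = squad_metric.compute(predictions=predictions, references=references)
print(results)  # {'exact_match': 100.0, 'f1': 100.0}
```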
Quick usage
```python
from transformers import BertTokenizerFast, BertForQuestionAnswering, pipeline

tokenizer = BertTokenizerFast.from_pretrained("osiria/bert-italian-cased-question-answering")
model = BertForQuestionAnswering.from_pretrained("osiria/bert-italian-cased-question-answering")
pipeline_qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

pipeline_qa(context="Alessandro Manzoni è nato a Milano nel 1785", question="Dove è nato Manzoni?")
# {'score': 0.9922313690185547, 'start': 28, 'end': 34, 'answer': 'Milano'}
```
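Continuing from the snippet above, the question-answering pipeline also handles longer contexts by chunking them internally; top_k, max_seq_len, and doc_stride are existing pipeline arguments, but the values below simply mirror the fine-tuning setup and are only illustrative.

```python
# Illustrative only: the pipeline splits long contexts into overlapping
# windows on its own; these values mirror the fine-tuning settings.
long_context = "Alessandro Manzoni è nato a Milano nel 1785. " * 50  # artificially long passage

answers = pipeline_qa(
    context=long_context,
    question="Dove è nato Manzoni?",
    top_k=3,           # return the 3 highest-scoring spans
    max_seq_len=384,   # window size, as in fine-tuning
    doc_stride=128,    # overlap between consecutive windows
)
for a in answers:
    print(round(a["score"], 3), a["answer"])
```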
References
[1] https://arxiv.org/abs/1810.04805
[2] https://link.springer.com/chapter/10.1007/978-3-030-03840-3_29
Limitations
This model was trained on SQuAD-IT, which is mainly a machine-translated version of the original SQuAD v1.1. This means that the quality of the training set is limited by the machine translation. Moreover, the model is meant to answer questions under the assumption that the required information is actually contained in the given context (which is the underlying assumption of SQuAD v1.1). If this assumption is violated, the model will still try to return an answer, which will likely be incorrect.
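A simple mitigation, sketched below under the assumption that the pipeline score is a usable confidence signal, is to discard answers whose score falls below a threshold; the threshold value is arbitrary and should be tuned on held-out data.

```python
# Hypothetical guard against the "answer at any cost" behaviour described
# above: reject low-confidence predictions. The 0.5 threshold is an arbitrary
# example and should be calibrated on your own validation data.
result = pipeline_qa(
    context="Alessandro Manzoni è nato a Milano nel 1785",
    question="Chi ha scritto la Divina Commedia?",  # answer is NOT in the context
)

CONFIDENCE_THRESHOLD = 0.5  # assumption: tune on held-out data
if result["score"] < CONFIDENCE_THRESHOLD:
    print("No reliable answer found in the context.")
else:
    print(result["answer"])
```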
License
The model is released under the Apache-2.0 license.