
electra-base for QA

Overview

Language model: electra-base
Language: English
Downstream-task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD 2.0
Code: See example in FARM
Infrastructure: 1x Tesla V100

Hyperparameters

seed = 42
batch_size = 32
n_epochs = 5
base_LM_model = "google/electra-base-discriminator"
max_seq_len = 384
learning_rate = 1e-4
lr_schedule = LinearWarmup
warmup_proportion = 0.1
doc_stride = 128
max_query_length = 64
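
As a rough orientation, here is a condensed sketch of how these hyperparameters slot into FARM's question-answering example script. It is a hedged adaptation rather than the exact training code: the data directory layout is a placeholder, and argument names may differ slightly between FARM versions.

from farm.data_handler.data_silo import DataSilo
from farm.data_handler.processor import SquadProcessor
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.language_model import LanguageModel
from farm.modeling.optimization import initialize_optimizer
from farm.modeling.prediction_head import QuestionAnsweringHead
from farm.modeling.tokenization import Tokenizer
from farm.train import Trainer
from farm.utils import set_all_seeds, initialize_device_settings

set_all_seeds(seed=42)
device, n_gpu = initialize_device_settings(use_cuda=True)

tokenizer = Tokenizer.load("google/electra-base-discriminator")

# "data/squad20" holding train-v2.0.json / dev-v2.0.json is a placeholder layout
processor = SquadProcessor(
    tokenizer=tokenizer,
    max_seq_len=384,
    doc_stride=128,
    max_query_length=64,
    label_list=["start_token", "end_token"],
    metric="squad",
    train_filename="train-v2.0.json",
    dev_filename="dev-v2.0.json",
    data_dir="data/squad20",
)
data_silo = DataSilo(processor=processor, batch_size=32)

model = AdaptiveModel(
    language_model=LanguageModel.load("google/electra-base-discriminator"),
    prediction_heads=[QuestionAnsweringHead()],
    embeds_dropout_prob=0.1,
    lm_output_types=["per_token"],
    device=device,
)

model, optimizer, lr_schedule = initialize_optimizer(
    model=model,
    learning_rate=1e-4,
    n_batches=len(data_silo.loaders["train"]),
    n_epochs=5,
    device=device,
    schedule_opts={"name": "LinearWarmup", "warmup_proportion": 0.1},
)

trainer = Trainer(
    model=model,
    optimizer=optimizer,
    data_silo=data_silo,
    epochs=5,
    n_gpu=n_gpu,
    lr_schedule=lr_schedule,
    device=device,
)
trainer.train()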

Performance

Evaluated on the SQuAD 2.0 dev set with the official eval script. The HasAns_* and NoAns_* entries break the scores down over questions that do and do not have an answer in the passage.

"exact": 77.30144024256717,
 "f1": 81.35438272008543,
 "total": 11873,
 "HasAns_exact": 74.34210526315789,
 "HasAns_f1": 82.45961302894314,
 "HasAns_total": 5928,
 "NoAns_exact": 80.25231286795626,
 "NoAns_f1": 80.25231286795626,
 "NoAns_total": 5945

Usage

In Transformers

from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/electra-base-squad2"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)
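# res is a dict of the form {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
print(res["answer"])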

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
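
If you load the model and tokenizer directly, span decoding can also be done by hand. A minimal sketch, assuming a recent transformers version where the forward pass returns an output object with start_logits/end_logits (older versions return a tuple):

# Encode question + context, run the forward pass, then decode the best span
inputs = tokenizer(QA_input["question"], QA_input["context"], return_tensors="pt")
outputs = model(**inputs)
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))

Note that this naive argmax decoding ignores SQuAD 2.0's unanswerable questions; the pipeline above handles the no-answer case for you.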

In FARM

from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer

model_name = "deepset/electra-base-squad2"

# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
             "text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input)
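# res is a list with one entry per input dict; each entry carries a
# "predictions" list with answer strings, span offsets and scores
# (the exact structure may vary slightly between FARM versions)
print(res)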

# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
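# Optionally persist the converted model in FARM format
# ("electra-base-squad2-farm" is a placeholder directory name)
model.save("electra-base-squad2-farm")
tokenizer.save_pretrained("electra-base-squad2-farm")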

In Haystack

For doing QA at scale (i.e. over many documents instead of a single paragraph), you can also load the model in Haystack:

# import paths for haystack < 1.0; newer releases moved these to haystack.nodes
from haystack.reader.farm import FARMReader
from haystack.reader.transformers import TransformersReader

reader = FARMReader(model_name_or_path="deepset/electra-base-squad2")
# or
reader = TransformersReader(model="deepset/electra-base-squad2", tokenizer="deepset/electra-base-squad2")
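
In a full setup, the reader usually sits behind a retriever over a document store. A hedged sketch for haystack 0.x only (class locations and names changed considerably across releases, and document_store/retriever are assumed to be set up elsewhere):

from haystack import Finder

finder = Finder(reader=reader, retriever=retriever)
prediction = finder.get_answers(
    question="Why is model conversion important?",
    top_k_retriever=10,
    top_k_reader=3,
)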

Authors

Vaishali Pal: vaishali.pal [at] deepset.ai
Branden Chan: branden.chan [at] deepset.ai
Timo Möller: timo.moeller [at] deepset.ai
Malte Pietsch: malte.pietsch [at] deepset.ai
Tanay Soni: tanay.soni [at] deepset.ai

Note: This model was borrowed from the Haystack model repo in order to add a TensorFlow model.
