---
datasets:
- squad_v2
language:
- en
library_name: transformers
pipeline_tag: question-answering
---
# MobileBERT fine-tuned on the SQuAD V2 dataset
This model is based on the MobileBERT architecture, which makes it suitable for mobile devices or other devices with limited resources.
## Usage
Using the `transformers` library, first load the model and the tokenizer:
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "aware-ai/mobilebert-squadv2"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Then use the question-answering pipeline:
```python
qa_engine = pipeline('question-answering', model=model, tokenizer=tokenizer)

QA_input = {
    'question': 'your question?',
    'context': 'your context'
}

res = qa_engine(QA_input)
```
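The pipeline returns a dict with the fields `answer`, `score`, `start`, and `end`.

If you prefer not to use the pipeline, the following is a minimal sketch of running the model directly, assuming the model and tokenizer loaded above; the question and context strings are placeholders.

```python
import torch

question = "your question?"
context = "your context"

# Encode the question/context pair for the model.
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start and end token positions and decode that span.
start_idx = int(outputs.start_logits.argmax())
end_idx = int(outputs.end_logits.argmax())
answer_tokens = inputs["input_ids"][0][start_idx : end_idx + 1]
answer = tokenizer.decode(answer_tokens, skip_special_tokens=True)

print(answer)
```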