---
language: English
task: extractive question answering
datasets: SQuAD 2.0
tags:
- bert-base
---
# Model Description
This model is for English extractive question answering. It is based on the [bert-base-cased](https://huggingface.co/bert-base-cased) model, and it is case-sensitive: it makes a difference between english and English.
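Because the model is cased, lower- and upper-case spellings are encoded differently. A minimal sketch of this (the exact sub-word splits depend on the vocabulary, so treat the printed tokens as illustrative only):
``` python
from transformers import AutoTokenizer

# Sketch: a cased tokenizer preserves capitalization, so "english" and
# "English" are tokenized into different token sequences.
tokenizer = AutoTokenizer.from_pretrained("zhufy/squad-en-bert-base")
print(tokenizer.tokenize("english"))
print(tokenizer.tokenize("English"))
```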
# Training data
[English SQuAD v2.0](https://rajpurkar.github.io/SQuAD-explorer/)
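If you want to inspect the training data, one option (an assumption, since this card does not state how the data was prepared) is to pull SQuAD v2.0 from the Hub with the [🤗 Datasets](https://github.com/huggingface/datasets) library:
``` python
from datasets import load_dataset

# Sketch: load SQuAD v2.0 ("squad_v2" on the Hugging Face Hub).
# The "train" split is typically used for fine-tuning, "validation" for evaluation.
squad = load_dataset("squad_v2")
print(squad)
print(squad["train"][0]["question"])
print(squad["train"][0]["answers"])  # empty answers mark unanswerable questions
```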
# How to use
You can use it directly from the [🤗 Transformers](https://github.com/huggingface/transformers) library with a pipeline:
``` python
>>> from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("zhufy/squad-en-bert-base")
>>> model = AutoModelForQuestionAnswering.from_pretrained("zhufy/squad-en-bert-base")
>>> nlp = pipeline("question-answering", model=model, tokenizer=tokenizer)
>>> context = ("A problem is regarded as inherently difficult if its "
...            "solution requires significant resources, whatever the "
...            "algorithm used. The theory formalizes this intuition, "
...            "by introducing mathematical models of computation to "
...            "study these problems and quantifying the amount of "
...            "resources needed to solve them, such as time and storage. "
...            "Other complexity measures are also used, such as the "
...            "amount of communication (used in communication complexity), "
...            "the number of gates in a circuit (used in circuit "
...            "complexity) and the number of processors (used in parallel "
...            "computing). One of the roles of computational complexity "
...            "theory is to determine the practical limits on what "
...            "computers can and cannot do.")
>>> question = ("What are two basic primary resources used to "
...             "guage complexity?")
>>> inputs = {"question": question, "context": context}
>>> nlp(inputs)
{'score': 0.8589141368865967,
 'start': 305,
 'end': 321,
 'answer': 'time and storage'}
```
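Since the model is fine-tuned on SQuAD v2.0, which includes unanswerable questions, you may want the pipeline to be able to return an empty answer. A sketch building on the objects above (assuming the standard `handle_impossible_answer` argument of the question-answering pipeline):
``` python
>>> # Sketch: allow an empty answer for questions the context does not answer
>>> # (SQuAD v2.0 behaviour); reuses `nlp` and `context` from the example above.
>>> nlp(question="Who proved that P equals NP?",
...     context=context,
...     handle_impossible_answer=True)
```
An empty `answer` string in the result indicates that the model considers the question unanswerable given the context.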