---
language: en
datasets:
- squad_v2
---

Model Description

This model is for English extractive question answering. It is based on the bert-base-cased model, and it is case-sensitive: it makes a difference between english and English.
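
Because the vocabulary is cased, the two spellings are encoded differently. A minimal check of this, using the tokenizer of the checkpoint from the usage example below:

>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("zhufy/squad-en-bert-base")
>>> # A cased vocabulary assigns different token ids to "english" and "English"
>>> tokenizer("english")["input_ids"] == tokenizer("English")["input_ids"]
False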

Training data

English SQuAD v2.0
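
The data is also available on the Hugging Face Hub. A minimal sketch for loading it with the datasets library (the id squad_v2 refers to the public SQuAD 2.0 dataset, not an artifact shipped with this model):

>>> from datasets import load_dataset

>>> # SQuAD 2.0 from the Hub; it has "train" and "validation" splits
>>> squad = load_dataset("squad_v2")
>>> squad["validation"][0].keys()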

How to use

You can use it directly from the 🤗 Transformers library with a pipeline:

>>> from transformers import pipeline, AutoTokenizer, AutoModelForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("zhufy/squad-en-bert-base")
>>> model = AutoModelForQuestionAnswering.from_pretrained("zhufy/squad-en-bert-base")
>>> nlp = pipeline("question-answering", model=model, tokenizer=tokenizer)

>>> context = ("A problem is regarded as inherently difficult if its "
...            "solution requires significant resources, whatever the "
...            "algorithm used. The theory formalizes this intuition, "
...            "by introducing mathematical models of computation to "
...            "study these problems and quantifying the amount of "
...            "resources needed to solve them, such as time and storage. "
...            "Other complexity measures are also used, such as the "
...            "amount of communication (used in communication complexity), "
...            "the number of gates in a circuit (used in circuit "
...            "complexity) and the number of processors (used in parallel "
...            "computing). One of the roles of computational complexity "
...            "theory is to determine the practical limits on what "
...            "computers can and cannot do.")
              
>>> question = ("What are two basic primary resources used to "
...             "guage complexity?")

>>> inputs = {"question": question,
...           "context": context}
            
>>> nlp(inputs)

{'score': 0.8589141368865967,
 'start': 305,
 'end': 321,
 'answer': 'time and storage'}
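
The pipeline handles tokenization and span extraction for you. If you want to see the underlying extractive step, the sketch below (reusing the tokenizer, model, question, and context from above, and assuming PyTorch) takes the argmax of the model's start and end logits and decodes that token span. The pipeline's own span selection is more careful, but on this example it should recover the same answer.

>>> import torch

>>> # Tokenize the question/context pair and run a forward pass
>>> encoded = tokenizer(question, context, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**encoded)

>>> # Most likely start and end token positions of the answer span
>>> start = int(torch.argmax(outputs.start_logits))
>>> end = int(torch.argmax(outputs.end_logits))
>>> tokenizer.decode(encoded["input_ids"][0][start : end + 1])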