sjrhuschlee committed
Commit 506ba7d
1 Parent(s): 405ade6

Update README.md

Files changed (1)
  1. README.md +10 -8
README.md CHANGED
@@ -18,24 +18,26 @@ tags:
  This is the [mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Extractive Question Answering.
  
  ## Overview
- **Language model:** mdeberta-v3-base
- **Language:** English
- **Downstream-task:** Extractive QA
- **Training data:** SQuAD 2.0
- **Eval data:** SQuAD 2.0
- **Infrastructure**: 1x NVIDIA T4
+ **Language model:** mdeberta-v3-base
+ **Language:** English
+ **Downstream-task:** Extractive QA
+ **Training data:** SQuAD 2.0
+ **Eval data:** SQuAD 2.0
+ **Infrastructure**: 1x NVIDIA T4
  
- ### In Transformers
+ ### Model Usage
  ```python
  from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
  model_name = "sjrhuschlee/mdeberta-v3-base-squad2"
- # a) Get predictions
+
+ # a) Using pipelines
  nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
  qa_input = {
  'question': 'Where do I live?',
  'context': 'My name is Sarah and I live in London'
  }
  res = nlp(qa_input)
+
  # b) Load model & tokenizer
  model = AutoModelForQuestionAnswering.from_pretrained(model_name)
  tokenizer = AutoTokenizer.from_pretrained(model_name)
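Part b) of the snippet loads `model` and `tokenizer` but stops before running them. As a minimal sketch (not part of this commit or the model card) of how those loaded objects can be used without the pipeline, assuming the same question/context pair as above:

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "sjrhuschlee/mdeberta-v3-base-squad2"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

question = "Where do I live?"
context = "My name is Sarah and I live in London"

# Encode question and context as one sequence; the model predicts
# start/end logits over the tokens of that sequence.
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Greedy span decoding: take the most likely start and end positions
# and decode the tokens in between.
start_idx = int(outputs.start_logits.argmax())
end_idx = int(outputs.end_logits.argmax())
answer_ids = inputs["input_ids"][0][start_idx : end_idx + 1]
answer = tokenizer.decode(answer_ids, skip_special_tokens=True)
print(answer)  # expected to be roughly "London"
```

Because the model is trained on SQuAD 2.0, the `question-answering` pipeline from part a) can also be called with `handle_impossible_answer=True` so that unanswerable questions return an empty answer instead of a forced span.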