sjrhuschlee committed
Commit e517659
1 Parent(s): a8a5daa

Update README.md

Files changed (1)
  1. README.md +40 -1

README.md CHANGED
@@ -12,4 +12,43 @@ tags:
- question-answering
- squad
- squad_v2
---

# deberta-v3-large for Extractive QA

This is the [deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) model, fine-tuned on the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It has been trained on question-answer pairs, including unanswerable questions, for the task of Extractive Question Answering.

## Overview
**Language model:** deberta-v3-large
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Infrastructure:** 1x NVIDIA 3070

## Model Usage

### Using with Peft
A minimal sketch of wrapping the model with a LoRA adapter; the `LoraConfig` settings below are illustrative assumptions (targeting the DeBERTa-v3 attention projections), not the configuration used for training.
```python
from peft import LoraConfig, PeftModelForQuestionAnswering
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "sjrhuschlee/deberta-v3-large-squad2"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Illustrative (assumed) LoRA config targeting DeBERTa-v3's attention projections
peft_config = LoraConfig(task_type="QUESTION_ANS", target_modules=["query_proj", "value_proj"])
model = PeftModelForQuestionAnswering(AutoModelForQuestionAnswering.from_pretrained(model_name), peft_config)
```
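Once wrapped, the standard PEFT utilities apply, e.g. `model.print_trainable_parameters()` to confirm that only the adapter weights are trainable before fine-tuning with the `transformers` `Trainer`.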

### Using the Merged Model
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "sjrhuschlee/deberta-v3-large-squad2"

# a) Using pipelines
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
qa_input = {
    'question': 'Where do I live?',
    'context': 'My name is Sarah and I live in London'
}
res = nlp(qa_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
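Because the model is trained on SQuAD 2.0, it can also decide that a question is unanswerable given the context. Below is a minimal sketch building on the snippet above (reusing `nlp`, `model`, `tokenizer`, and `qa_input`; `torch` is assumed to be installed) that passes the pipeline's `handle_impossible_answer` flag and runs a manual forward pass:
```python
import torch

# Allow the pipeline to return an empty answer when the context contains no answer
res = nlp(qa_input, handle_impossible_answer=True)

# Manual inference with the loaded model & tokenizer
question, context = 'Where do I live?', 'My name is Sarah and I live in London'
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1]).strip()
```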