---
license: mit
datasets:
- squad_v2
- squad
language:
- en
library_name: transformers
tags:
- deberta
- deberta-v3
- question-answering
- squad
- squad_v2
- lora
- peft
model-index:
- name: sjrhuschlee/deberta-v3-large-squad2
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad_v2
      type: squad_v2
      config: squad_v2
      split: validation
    metrics:
    - type: exact_match
      value: 87.956
      name: Exact Match
    - type: f1
      value: 90.776
      name: F1
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad
      type: squad
      config: plain_text
      split: validation
    metrics:
    - type: exact_match
      value: 89.29
      name: Exact Match
    - type: f1
      value: 94.985
      name: F1
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: adversarial_qa
      type: adversarial_qa
      config: adversarialQA
      split: validation
    metrics:
    - type: exact_match
      value: 31.167
      name: Exact Match
    - type: f1
      value: 41.787
      name: F1
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad_adversarial
      type: squad_adversarial
      config: AddOneSent
      split: validation
    metrics:
    - type: exact_match
      value: 75.993
      name: Exact Match
    - type: f1
      value: 80.495
      name: F1
---

# deberta-v3-large for Extractive QA

This is the [deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Extractive Question Answering.

This model was trained using LoRA available through the [PEFT library](https://github.com/huggingface/peft).
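
The exact training configuration is not documented in this card; the sketch below only illustrates, with hypothetical hyperparameters, how a LoRA adapter for extractive QA can be set up with PEFT (a PEFT version with question-answering support is assumed, see the note in the PEFT section below). It is not the author's actual training script.
```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForQuestionAnswering

# Illustrative LoRA setup; the hyperparameters used for this model are not listed here.
base = AutoModelForQuestionAnswering.from_pretrained("microsoft/deberta-v3-large")
lora_config = LoraConfig(
    task_type=TaskType.QUESTION_ANS,  # requires a PEFT version with QA support
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # reports how few parameters are trainable
```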

## Overview
**Language model:** deberta-v3-large  
**Language:** English  
**Downstream-task:** Extractive QA  
**Training data:** SQuAD 2.0  
**Eval data:** SQuAD 2.0  
**Infrastructure:** 1x NVIDIA 3070  

## Model Usage

### Using Transformers
This uses the merged weights (base model weights + LoRA weights) so the model can be used directly in Transformers pipelines. Performance is the same as loading the base model and LoRA weights separately with the PEFT library.
```python
import torch
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    pipeline,
)
model_name = "sjrhuschlee/deberta-v3-large-squad2"

# a) Using pipelines
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
qa_input = {
    'question': 'Where do I live?',
    'context': 'My name is Sarah and I live in London',
}
res = nlp(qa_input)
# {'score': 0.984, 'start': 30, 'end': 37, 'answer': ' London'}

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

question = 'Where do I live?'
context = 'My name is Sarah and I live in London'
encoding = tokenizer(question, context, return_tensors="pt")
start_scores, end_scores = model(
  encoding["input_ids"],
  attention_mask=encoding["attention_mask"],
  return_dict=False
)

all_tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
answer_tokens = all_tokens[torch.argmax(start_scores):torch.argmax(end_scores) + 1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
# 'London'
```

### Using with PEFT
**NOTE**: This requires the code from PR https://github.com/huggingface/peft/pull/473 in the PEFT library.
```python
#!pip install peft

from peft import LoraConfig, PeftModelForQuestionAnswering
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model_name = "sjrhuschlee/deberta-v3-large-squad2"
```
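
The snippet above only sets up the imports; a minimal sketch of loading the model through PEFT is shown below. It assumes the LoRA adapter weights are available under the repository name (the card only guarantees merged weights, so adjust paths as needed); the loading flow is illustrative, not the author's exact procedure.
```python
from peft import PeftModelForQuestionAnswering
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Hedged sketch: attach the LoRA adapter to the base model with PEFT.
# Assumes the adapter weights are published under this repository name;
# if only merged weights are available, use the Transformers example above.
model_name = "sjrhuschlee/deberta-v3-large-squad2"
base_model = AutoModelForQuestionAnswering.from_pretrained("microsoft/deberta-v3-large")
model = PeftModelForQuestionAnswering.from_pretrained(base_model, model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```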