---
language: en
tags: 
  - deberta
  - deberta-v3
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
---

## DeBERTa: Decoding-enhanced BERT with Disentangled Attention

[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB of training data.
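
The core idea of disentangled attention, paraphrased from the paper: each token is represented by a content embedding and a relative-position embedding, and the attention score between tokens $i$ and $j$ is the sum of content-to-content, content-to-position, and position-to-content terms. A sketch of the score (notation follows the paper; $H$ are hidden states, $P$ relative-position embeddings, and $\delta(i,j)$ the clipped relative distance):

```latex
% Disentangled attention score between tokens i and j (sketch, following the DeBERTa paper)
%   Q^c = H W_{q,c},  K^c = H W_{k,c}   -- content projections
%   Q^r = P W_{q,r},  K^r = P W_{k,r}   -- relative-position projections
\tilde{A}_{i,j} =
    \underbrace{Q^{c}_{i} \, {K^{c}_{j}}^{\top}}_{\text{content-to-content}}
  + \underbrace{Q^{c}_{i} \, {K^{r}_{\delta(i,j)}}^{\top}}_{\text{content-to-position}}
  + \underbrace{K^{c}_{j} \, {Q^{r}_{\delta(j,i)}}^{\top}}_{\text{position-to-content}}
% Attention output: softmax( \tilde{A} / \sqrt{3d} ) V^c
```

The enhanced mask decoder then incorporates absolute position information just before the softmax layer that predicts the masked tokens, rather than at the input layer.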

Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.

In DeBERTa V3, we replaced the MLM objective with the RTD (Replaced Token Detection) objective, which was first introduced by ELECTRA, for pre-training. The new objective significantly improves model performance. Please check appendix A11 in our [paper](https://arxiv.org/abs/2006.03654) for more details.
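
Roughly, RTD works as follows: a small generator trained with MLM proposes replacements for a subset of input tokens, and the main model is trained as a discriminator to classify every token as original or replaced. The toy PyTorch sketch below illustrates the objective only; the module names and sizes are made up, and it is not the actual DeBERTa V3 pre-training code (which also uses gradient-disentangled embedding sharing):

```python
import torch
import torch.nn as nn

vocab_size, hidden, batch, seq_len = 1000, 64, 4, 16

# Toy stand-ins for the generator (MLM head) and the discriminator (the main encoder).
generator = nn.Sequential(nn.Embedding(vocab_size, hidden), nn.Linear(hidden, vocab_size))
discriminator = nn.Sequential(nn.Embedding(vocab_size, hidden), nn.Linear(hidden, 1))

tokens = torch.randint(0, vocab_size, (batch, seq_len))

# 1) Pick ~15% of positions and replace them with samples from the generator's MLM head.
replace_mask = torch.rand(batch, seq_len) < 0.15
with torch.no_grad():
    sampled = torch.distributions.Categorical(logits=generator(tokens)).sample()
corrupted = torch.where(replace_mask, sampled, tokens)

# 2) Train the discriminator to tell, per token, whether it was replaced.
is_replaced = (corrupted != tokens).float()
rtd_logits = discriminator(corrupted).squeeze(-1)
rtd_loss = nn.functional.binary_cross_entropy_with_logits(rtd_logits, is_replaced)
print(rtd_loss.item())
```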

This is the DeBERTa V3 small model with 12 layers and a hidden size of 768. It has 183M parameters in total, of which the embedding layer accounts for about 98M due to the 128K-token vocabulary. It was trained with 160GB of data.
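
A minimal sketch of loading this checkpoint with HF `transformers` and checking the sizes quoted above (assumes `transformers` and `sentencepiece` are installed):

```python
# Minimal sketch: load the checkpoint and inspect its configuration and size.
# Assumes: pip install transformers sentencepiece
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-small")
model = AutoModel.from_pretrained("microsoft/deberta-v3-small")

print(model.config.num_hidden_layers, model.config.hidden_size)        # layers, hidden size
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")

inputs = tokenizer("DeBERTa V3 replaces MLM with replaced token detection.", return_tensors="pt")
hidden_states = model(**inputs).last_hidden_state                      # (1, seq_len, hidden_size)
```
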
#### Fine-tuning on NLU tasks

We present the dev set results on the SQuAD 1.1/2.0 and MNLI tasks.

| Model                 | SQuAD 1.1 (F1/EM) | SQuAD 2.0 (F1/EM) | MNLI-m (Acc.) |
|-----------------------|-------------------|-------------------|---------------|
| RoBERTa-base          | 91.5/84.6         | 83.7/80.5         | 87.6          |
| XLNet-base            | -/-               | -/80.2            | 86.8          |
| DeBERTa-base          | 93.1/87.2         | 86.2/83.1         | 88.8          |
| **DeBERTa-v3-base**   | 93.9/88.4         | 88.4/85.4         | 90.5          |
| DeBERTa-v3-base+SiFT  | -/-               | -/-               | **91.0**      |

#### Fine-tuning with HF transformers

```bash
#!/bin/bash

# Assumes a local clone of https://github.com/huggingface/transformers
cd transformers/examples/pytorch/text-classification/

pip install datasets
export TASK_NAME=mnli

output_dir="ds_results"

num_gpus=8

batch_size=8

# Launch distributed fine-tuning across ${num_gpus} GPUs
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
  run_glue.py \
  --model_name_or_path microsoft/deberta-v3-small \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --evaluation_strategy steps \
  --max_seq_length 256 \
  --warmup_steps 1000 \
  --per_device_train_batch_size ${batch_size} \
  --learning_rate 2.5e-5 \
  --num_train_epochs 3 \
  --output_dir $output_dir \
  --overwrite_output_dir \
  --logging_steps 1000 \
  --logging_dir $output_dir

```
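
Once fine-tuning finishes, the checkpoint saved in `ds_results` can be loaded directly for inference. A sketch (the premise/hypothesis pair is illustrative; label names come from the fine-tuned config):

```python
# Sketch: score an MNLI premise/hypothesis pair with the fine-tuned checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_dir = "ds_results"  # --output_dir from the script above
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(p, 3) for i, p in enumerate(probs.tolist())})
```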

### Citation

If you find DeBERTa useful for your work, please cite the following paper:

``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```