---
license: apache-2.0
language:
- ru
- en
library_name: transformers
pipeline_tag: feature-extraction
---

# DeBERTa-base


Pretrained bidirectional encoder for the Russian language.
The model was trained using the standard MLM objective on large text corpora, including open social data.
See the `Training Details` section below for more information.

⚠️ This model contains only the encoder part without any pretrained head.


- **Developed by:** [deepvk](https://vk.com/deepvk)
- **Model type:** DeBERTa
- **Languages:** Mostly Russian, with a small fraction of other languages
- **License:** Apache 2.0

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("deepvk/deberta-v1-base")
model = AutoModel.from_pretrained("deepvk/deberta-v1-base")

text = "Привет, мир!"

# The bare encoder returns hidden states only, since no pretrained head is included.
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
```
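
Because the checkpoint ships without a head, feature extraction typically pools the token states into a single vector. Below is a minimal sketch continuing from the snippet above; mean pooling is just one common choice, not a method prescribed by this card:

```python
import torch

# Mean-pool token states into one sentence vector, ignoring padding positions.
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state         # shape: (1, seq_len, 768)
mask = inputs["attention_mask"].unsqueeze(-1).float()  # shape: (1, seq_len, 1)
sentence_embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
```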

## Training Details

### Training Data

400 GB of filtered and deduplicated text in total, drawn from a mix of sources: Wikipedia, books, Twitter comments, Pikabu, Proza.ru, film subtitles, news websites, and a social corpus.

#### Deduplication Procedure

1. Compute shingles of size 5.
2. Compute MinHash with 100 seeds → every sample (text) gets a signature of 100 hash values.
3. Split every signature into 10 buckets → each bucket of (100 / 10) = 10 values is hashed into a single hash → 10 hashes per sample.
4. Within each bucket, find candidate duplicates: samples that share the same hash → compute pair-wise Jaccard similarity → if the similarity is > 0.7, the pair is a duplicate.
5. Gather duplicates from all the buckets and filter them out (a minimal code sketch of this procedure follows below).
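
A minimal, self-contained sketch of this pipeline; word-level shingles, a salted SHA-1 standing in for the 100 seeded hash functions, and all helper names are illustrative assumptions rather than the exact production code:

```python
import hashlib
from itertools import combinations

NUM_PERM = 100      # MinHash signature length (number of seeds)
NUM_BANDS = 10      # buckets per signature -> 10 values per band
SHINGLE_SIZE = 5
THRESHOLD = 0.7     # Jaccard similarity above which a pair counts as a duplicate


def shingles(text: str, size: int = SHINGLE_SIZE) -> set[str]:
    words = text.split()
    return {" ".join(words[i:i + size]) for i in range(max(1, len(words) - size + 1))}


def minhash_signature(shingle_set: set[str], num_perm: int = NUM_PERM) -> list[int]:
    # Simulate num_perm hash functions by salting one hash with the seed index.
    return [
        min(int.from_bytes(hashlib.sha1(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingle_set)
        for seed in range(num_perm)
    ]


def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b)


def find_duplicates(texts: list[str]) -> set[tuple[int, int]]:
    shingle_sets = [shingles(t) for t in texts]
    signatures = [minhash_signature(s) for s in shingle_sets]

    rows = NUM_PERM // NUM_BANDS  # 10 values per band
    duplicates = set()
    for band in range(NUM_BANDS):
        buckets: dict[str, list[int]] = {}
        for idx, sig in enumerate(signatures):
            band_hash = hashlib.sha1(str(sig[band * rows:(band + 1) * rows]).encode()).hexdigest()
            buckets.setdefault(band_hash, []).append(idx)
        # Candidates share a band hash; verify with exact Jaccard on the shingle sets.
        for candidates in buckets.values():
            for i, j in combinations(candidates, 2):
                if jaccard(shingle_sets[i], shingle_sets[j]) > THRESHOLD:
                    duplicates.add((i, j))
    return duplicates
```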

### Training Hyperparameters

| Argument           | Value                |
|--------------------|----------------------|
| Training regime    | fp16 mixed precision |
| Optimizer          | AdamW                |
| Adam betas         | (0.9, 0.98)          |
| Adam eps           | 1e-6                 |
| Weight decay       | 1e-2                 |
| Batch size         | 2240                 |
| Num training steps | 1,000,000            |
| Num warm-up steps  | 10,000               |
| LR scheduler       | Linear               |
| LR                 | 2e-5                 |
| Max gradient norm  | 1.0                  |

The model was trained on a machine with 8×A100 GPUs for approximately 30 days.
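
For illustration, the optimizer and schedule from the table map directly onto standard PyTorch and `transformers` utilities. This is a minimal sketch of the configuration under those assumptions, not the original training script:

```python
import torch
from transformers import AutoModel, get_linear_schedule_with_warmup

# Stand-in model; the original run pretrained a DeBERTa encoder with an MLM objective.
model = AutoModel.from_pretrained("deepvk/deberta-v1-base")

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=2e-5,
    betas=(0.9, 0.98),
    eps=1e-6,
    weight_decay=1e-2,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,
    num_training_steps=1_000_000,
)

# fp16 mixed precision with gradient clipping at 1.0 inside the training loop:
scaler = torch.cuda.amp.GradScaler()
# scaler.scale(loss).backward()
# scaler.unscale_(optimizer)
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
# scaler.step(optimizer); scaler.update(); scheduler.step(); optimizer.zero_grad()
```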

### Architecture Details

| Argument                | Value          |
|-------------------------|----------------|
| Encoder layers          | 12             |
| Encoder attention heads | 12             |
| Encoder embed dim       | 768            |
| Encoder FFN embed dim   | 3,072          |
| Activation function     | GeLU           |
| Attention dropout       | 0.1            |
| Dropout                 | 0.1            |
| Max positions           | 512            |
| Vocab size              | 50,266         |
| Tokenizer type          | Byte-level BPE |
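
These numbers can be cross-checked against the published config; the attribute names below follow the standard `DebertaConfig` fields:

```python
from transformers import AutoConfig

# Print the architecture fields that correspond to the table above.
config = AutoConfig.from_pretrained("deepvk/deberta-v1-base")
print(config.num_hidden_layers, config.num_attention_heads)
print(config.hidden_size, config.intermediate_size)
print(config.max_position_embeddings, config.vocab_size)
```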

## Evaluation

We evaluated the model on the [Russian SuperGLUE](https://russiansuperglue.com/) dev set.
The best result for each task is marked in bold.
All models are of the same size, except for the distilled version of DeBERTa.

| Model                                                                  | RCB       |  PARus | MuSeRC  | TERRa | RUSSE   | RWSD    | DaNetQA | Score     |
|------------------------------------------------------------------------|-----------|--------|---------|-------|---------|---------|---------|-----------|
| [vk-deberta-distill](https://huggingface.co/deepvk/deberta-v1-distill) | 0.433     |  0.56  | 0.625   | 0.59  | 0.943   | 0.569   | 0.726   | 0.635     |
| [vk-roberta-base](https://huggingface.co/deepvk/roberta-base)          | 0.46      |  0.56  | 0.679   | 0.769 | 0.960   | 0.569   | 0.658   | 0.665     |
| [vk-deberta-base](https://huggingface.co/deepvk/deberta-v1-base)       | 0.450     |**0.61**|**0.722**| 0.704 | 0.948   | 0.578   |**0.76** |**0.682**  |
| [vk-bert-base](https://huggingface.co/deepvk/bert-base-uncased)        | 0.467     |  0.57  | 0.587   | 0.704 | 0.953   |**0.583**| 0.737   | 0.657     |
| [sber-bert-base](https://huggingface.co/ai-forever/ruBert-base)        | **0.491** |**0.61**| 0.663   | 0.769 |**0.962**| 0.574   | 0.678   | 0.678     |