---
language:
- en
tags:
- formality
licenses:
- cc-by-nc-sa
license: cc-by-nc-sa-4.0
---


**Model Overview**

This is the model presented in the paper ["Detecting Text Formality: A Study of Text Classification Approaches"](https://aclanthology.org/2023.ranlp-1.31).

The model is based on [DeBERTa (large)](https://huggingface.co/microsoft/deberta-v3-large) and was fine-tuned on [GYAFC](https://arxiv.org/abs/1803.06535), an English corpus for formality classification.
In our experiments, the model showed the best results among Transformer-based models for this task. More details, code, and data can be found [here](https://github.com/s-nlp/formality).

**Evaluation Results**

Below, we report metrics for the best-performing model from each category in the comparison, to give a sense of the range of scores. The task is English monolingual formality classification.

| Model            | acc  | f1-formal | f1-informal |
|------------------|------|-----------|-------------|
| bag-of-words     | 79.1 |    81.8   |     75.6    |
| CharBiLSTM       | 87.0 |    89.0   |     84.0    |
| DistilBERT-cased | 80.1 |    83.0   |     75.6    |
| DeBERTa-large    | 87.8 |    89.0   |     86.1    |
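
To make the metric columns concrete, here is a small sketch (not from the original card) of how accuracy and the per-class F1 scores can be computed with scikit-learn, assuming `formal`/`informal` string labels and treating each class in turn as the positive one; the label values below are made up for illustration.

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical gold labels and predictions, for illustration only
y_true = ["formal", "informal", "formal", "informal", "formal"]
y_pred = ["formal", "informal", "informal", "informal", "formal"]

acc = accuracy_score(y_true, y_pred)
f1_formal = f1_score(y_true, y_pred, pos_label="formal")      # f1-formal column
f1_informal = f1_score(y_true, y_pred, pos_label="informal")  # f1-informal column
print(f"acc={acc:.3f}  f1-formal={f1_formal:.3f}  f1-informal={f1_informal:.3f}")
```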

**How to use**
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned classifier and its tokenizer from the Hugging Face Hub
model_name = 's-nlp/deberta-large-formality-ranker'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
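
Building on the snippet above, a minimal inference sketch (not part of the original card): it assumes the checkpoint's `id2label` config maps class ids to formality labels, so label names are read from the config rather than hard-coded, and the example sentences are made up.

```python
import torch

# Hypothetical example sentences to score for formality
texts = [
    "I would appreciate it if you could respond at your earliest convenience.",
    "yo, hit me back when u get a sec",
]

# Tokenize as a batch and run a forward pass without tracking gradients
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and report the most likely label per text
probs = torch.softmax(logits, dim=-1)
for text, p in zip(texts, probs):
    label_id = int(p.argmax())
    print(f"{model.config.id2label[label_id]} ({p[label_id]:.2f}) :: {text}")
```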

**Citation**
```
@inproceedings{dementieva-etal-2023-detecting,
    title = "Detecting Text Formality: A Study of Text Classification Approaches",
    author = "Dementieva, Daryna  and
      Babakov, Nikolay  and
      Panchenko, Alexander",
    editor = "Mitkov, Ruslan  and
      Angelova, Galia",
    booktitle = "Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing",
    month = sep,
    year = "2023",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd., Shoumen, Bulgaria",
    url = "https://aclanthology.org/2023.ranlp-1.31",
    pages = "274--284",
    abstract = "Formality is one of the important characteristics of text documents. The automatic detection of the formality level of a text is potentially beneficial for various natural language processing tasks. Before, two large-scale datasets were introduced for multiple languages featuring formality annotation{---}GYAFC and X-FORMAL. However, they were primarily used for the training of style transfer models. At the same time, the detection of text formality on its own may also be a useful application. This work proposes the first to our knowledge systematic study of formality detection methods based on statistical, neural-based, and Transformer-based machine learning methods and delivers the best-performing models for public usage. We conducted three types of experiments {--} monolingual, multilingual, and cross-lingual. The study shows the overcome of Char BiLSTM model over Transformer-based ones for the monolingual and multilingual formality classification task, while Transformer-based classifiers are more stable to cross-lingual knowledge transfer.",
}
```

## Licensing Information

This model is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].

[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]

[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png