Update README.md
README.md CHANGED
@@ -17,6 +17,16 @@ This is the model presented in the paper "Detecting Text Formality: A Study of T
The original model is [DeBERTa (large)](https://huggingface.co/microsoft/deberta-v3-large). It was then fine-tuned on the English formality classification corpus [GYAFC](https://arxiv.org/abs/1803.06535).

In our experiments, the model showed the best results among Transformer-based models for this task. More details, code and data can be found [here](https://github.com/s-nlp/paradetox).
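
The fine-tuning step described above is ordinary binary sequence classification. The sketch below, using the `transformers` Trainer, only illustrates that setup; the example sentences, label convention, and hyperparameters are assumptions rather than the exact training configuration of this model, and the GYAFC data itself has to be obtained separately.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Toy stand-in for GYAFC; in practice, load the corpus sentences with their
# formality labels. The convention 0 = formal, 1 = informal is assumed here.
train_data = Dataset.from_dict({
    "text": ["I would appreciate a prompt reply.", "gimme a sec lol"],
    "label": [0, 1],
})

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-large", num_labels=2
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

train_data = train_data.map(tokenize, batched=True)

# Placeholder hyperparameters, not the ones used for the released checkpoint.
args = TrainingArguments(output_dir="deberta-formality",
                         per_device_train_batch_size=8,
                         num_train_epochs=3, learning_rate=1e-5)
Trainer(model=model, args=args, train_dataset=train_data, tokenizer=tokenizer).train()
```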

**Evaluation Results**

Here we provide several metrics for the best model from each category that took part in the comparison, to give a sense of how the scores compare.

|                  | acc  | f1-formal | f1-informal |
|------------------|------|-----------|-------------|
| bag-of-words     | 79.1 | 81.8      | 75.6        |
| CharBiLSTM       | 87.0 | 89.0      | 84.0        |
| DistilBERT-cased | 80.1 | 83.0      | 75.6        |
| DeBERTa-large    | 87.8 | 89.0      | 86.1        |
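
The columns are plain accuracy (`acc`) and the per-class F1 scores for the formal and informal classes. A small scikit-learn sketch with made-up predictions (the 0 = formal, 1 = informal convention is an assumption) shows how such numbers are computed:

```python
from sklearn.metrics import accuracy_score, f1_score

# Made-up gold labels and predictions; 0 = formal, 1 = informal (assumed convention).
y_true = [0, 0, 1, 1, 0, 1]
y_pred = [0, 1, 1, 1, 0, 0]

acc = accuracy_score(y_true, y_pred)
f1_formal = f1_score(y_true, y_pred, pos_label=0)    # F1 of the "formal" class
f1_informal = f1_score(y_true, y_pred, pos_label=1)  # F1 of the "informal" class
print(f"acc={acc:.3f}  f1-formal={f1_formal:.3f}  f1-informal={f1_informal:.3f}")
```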

**How to use**

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
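
# A minimal usage sketch: the checkpoint id below is an assumption (substitute
# the id of this model repository), and label names are read from the model config.
import torch

model_name = "s-nlp/deberta-large-formality-ranker"  # assumed checkpoint id

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

texts = [
    "I would be most grateful if you could reply at your earliest convenience.",
    "hey, wanna grab a coffee later?",
]

# Tokenize, run a forward pass without gradients, and turn logits into probabilities.
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

# Report the most probable class for each sentence.
for text, prob in zip(texts, probs):
    print(model.config.id2label[int(prob.argmax())], "-", text)
```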