laura.vasquezrodriguez committed
Commit 813e52e · 1 Parent(s): f4dfac7

Add model files for readability-es-benchmark-bertin-es-sentences-2class

README.md CHANGED
@@ -1,3 +1,73 @@
  ---
  license: cc-by-4.0
  ---
+
+ ## Readability benchmark (ES): bertin-es-sentences-2class
+
+ This model is part of a series of models from the paper "A Benchmark for Neural Readability Assessment of Texts in Spanish".
+ You can find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark).
+
+ ## Models
+
+ Our models were fine-tuned in multiple settings: 2-class (simple/complex) and 3-class (basic/intermediate/advanced) readability assessment, on both sentence and paragraph datasets.
+ You can find more details in our [paper](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link).
+ These are the available models you can use (the current model page is shown in bold):
+
+ | Model | Granularity | # classes |
+ |---------------------------------------------------------------------------------------------------------|------------|:---------:|
+ | [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-2class) | paragraphs | 2 |
+ | [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-3class) | paragraphs | 3 |
+ | [BERTIN (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class) | paragraphs | 2 |
+ | [BERTIN (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class) | paragraphs | 3 |
+ | [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class) | paragraphs | 2 |
+ | [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class) | paragraphs | 3 |
+ | [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class) | paragraphs | 2 |
+ | [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class) | paragraphs | 3 |
+ | **[BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-2class)** | sentences | 2 |
+ | [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-3class) | sentences | 3 |
+ | [BERTIN (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-2class) | sentences | 2 |
+ | [BERTIN (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-3class) | sentences | 3 |
+ | [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-2class) | sentences | 2 |
+ | [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-3class) | sentences | 3 |
+ | [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-2class) | sentences | 2 |
+ | [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-3class) | sentences | 3 |
+
+ For the zero-shot setting, we used the original models [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) and [mBERT](https://huggingface.co/bert-base-multilingual-uncased) with no further training.
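+
+ A minimal usage sketch with the `transformers` library (the `0 = simple`, `1 = complex` label mapping is an assumption, since the checkpoint does not ship an explicit `id2label`; please verify it against the repository):
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ repo = "lmvasque/readability-es-benchmark-bertin-es-sentences-2class"
+ tokenizer = AutoTokenizer.from_pretrained(repo)
+ model = AutoModelForSequenceClassification.from_pretrained(repo)
+
+ # Score a Spanish sentence for 2-class readability.
+ inputs = tokenizer("La casa es grande y bonita.", return_tensors="pt", truncation=True)
+ with torch.no_grad():
+     logits = model(**inputs).logits
+
+ # Assumed mapping: 0 = simple, 1 = complex.
+ print(logits.argmax(dim=-1).item())
+ ```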
+
+ ## Results
+
+ These are our results for all the readability models in different settings. Please select your model based on the desired performance:
+ | Granularity | Model                | F1 Score (2-class) | Precision (2-class) | Recall (2-class) | F1 Score (3-class) | Precision (3-class) | Recall (3-class) |
+ |-------------|----------------------|:------------------:|:-------------------:|:----------------:|:------------------:|:-------------------:|:----------------:|
+ | Paragraph   | Baseline (TF-IDF+LR) | 0.829 | 0.832 | 0.827 | 0.556 | 0.563 | 0.550 |
+ | Paragraph   | BERTIN (Zero)        | 0.308 | 0.222 | 0.500 | 0.227 | 0.284 | 0.338 |
+ | Paragraph   | BERTIN (ES)          | 0.924 | 0.923 | 0.925 | 0.772 | 0.776 | 0.768 |
+ | Paragraph   | mBERT (Zero)         | 0.308 | 0.222 | 0.500 | 0.253 | 0.312 | 0.368 |
+ | Paragraph   | mBERT (EN)           | -     | -     | -     | 0.505 | 0.560 | 0.552 |
+ | Paragraph   | mBERT (ES)           | **0.933** | **0.932** | **0.936** | 0.776 | 0.777 | 0.778 |
+ | Paragraph   | mBERT (EN+ES)        | -     | -     | -     | **0.779** | **0.783** | **0.779** |
+ | Sentence    | Baseline (TF-IDF+LR) | 0.811 | 0.814 | 0.808 | 0.525 | 0.531 | 0.521 |
+ | Sentence    | BERTIN (Zero)        | 0.367 | 0.290 | 0.500 | 0.188 | 0.232 | 0.335 |
+ | Sentence    | BERTIN (ES)          | **0.900** | **0.900** | **0.900** | **0.699** | **0.701** | **0.698** |
+ | Sentence    | mBERT (Zero)         | 0.367 | 0.290 | 0.500 | 0.278 | 0.329 | 0.351 |
+ | Sentence    | mBERT (EN)           | -     | -     | -     | 0.521 | 0.565 | 0.539 |
+ | Sentence    | mBERT (ES)           | 0.893 | 0.891 | 0.896 | 0.688 | 0.686 | 0.691 |
+ | Sentence    | mBERT (EN+ES)        | -     | -     | -     | 0.679 | 0.676 | 0.682 |
+
+ ## Citation
+
+ If you use our results and scripts in your research, please cite our work "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)" (to be published):
+
+ ```
+ @inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
+     title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
+     author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
+       Cuenca-Jim{\'e}nez, Pedro-Manuel and
+       Morales-Esquivel, Sergio Esteban and
+       Alva-Manchego, Fernando",
+     booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
+     month = dec,
+     year = "2022",
+ }
+ ```
config.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "_name_or_path": "bertin-project/bertin-roberta-base-spanish",
+   "architectures": [
+     "RobertaForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.0,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.0,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "problem_type": "single_label_classification",
+   "torch_dtype": "float32",
+   "transformers_version": "4.19.2",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 50262
+ }
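
The config above declares a `RobertaForSequenceClassification` head; since it sets no explicit `num_labels` or `id2label`, the library's default two-label head applies, which matches the 2-class task. A quick sketch (standard `transformers` API) for inspecting it without downloading the weights:

```python
from transformers import AutoConfig

# Fetch only the configuration shown above.
config = AutoConfig.from_pretrained(
    "lmvasque/readability-es-benchmark-bertin-es-sentences-2class"
)
print(config.model_type)     # "roberta"
print(config.architectures)  # ["RobertaForSequenceClassification"]
print(config.num_labels)     # 2 (library default, consistent with the 2-class task)
```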
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:553b1d2205c0487b78530bf87fb7b400ce0987a762747e715a074f9ccef2c631
+ size 997275421
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c1f6772db53b516a6c91cb39bbf28967a28adca69c5ae2b083df6f9b991f757b
+ size 498651117
rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:528887aeaf571c1dd9d1789c0fad11e336830c7f10d9174d25b3f236cf9a2aa4
+ size 14503
scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dfe4967de839d5b80defe550557e070307581482fd91970422d4653d8c5f6d9e
+ size 623
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "sep_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "cls_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true}}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"errors": "replace", "bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "sep_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "cls_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "trim_offsets": true, "max_len": 512, "special_tokens_map_file": null, "name_or_path": "bertin-project/bertin-roberta-base-spanish", "tokenizer_class": "RobertaTokenizer"}
trainer_state.json ADDED
@@ -0,0 +1,196 @@
+ {
+   "best_metric": 0.266584187746048,
+   "best_model_checkpoint": "./model/sent_2class_bertin_project_bertin_roberta_base_spanish/checkpoint-1078",
+   "epoch": 10.0,
+   "global_step": 10780,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 1.0,
+       "learning_rate": 2.7e-06,
+       "loss": 0.3488,
+       "step": 1078
+     },
+     {
+       "epoch": 1.0,
+       "eval_accuracy": 0.8970315398886828,
+       "eval_f1": 0.8961910338994079,
+       "eval_loss": 0.266584187746048,
+       "eval_precision": 0.8957628528645112,
+       "eval_recall": 0.8966965120197903,
+       "eval_runtime": 21.586,
+       "eval_samples_per_second": 99.88,
+       "eval_steps_per_second": 6.254,
+       "step": 1078
+     },
+     {
+       "epoch": 2.0,
+       "learning_rate": 2.4000000000000003e-06,
+       "loss": 0.2263,
+       "step": 2156
+     },
+     {
+       "epoch": 2.0,
+       "eval_accuracy": 0.898886827458256,
+       "eval_f1": 0.8977834612105712,
+       "eval_loss": 0.27391624450683594,
+       "eval_precision": 0.8984546268833581,
+       "eval_recall": 0.8972142020797937,
+       "eval_runtime": 21.5769,
+       "eval_samples_per_second": 99.922,
+       "eval_steps_per_second": 6.257,
+       "step": 2156
+     },
+     {
+       "epoch": 3.0,
+       "learning_rate": 2.1e-06,
+       "loss": 0.1778,
+       "step": 3234
+     },
+     {
+       "epoch": 3.0,
+       "eval_accuracy": 0.9025974025974026,
+       "eval_f1": 0.9005949897907656,
+       "eval_loss": 0.2852832078933716,
+       "eval_precision": 0.9078802796803653,
+       "eval_recall": 0.8972558952389886,
+       "eval_runtime": 21.6014,
+       "eval_samples_per_second": 99.808,
+       "eval_steps_per_second": 6.25,
+       "step": 3234
+     },
+     {
+       "epoch": 4.0,
+       "learning_rate": 1.8e-06,
+       "loss": 0.1403,
+       "step": 4312
+     },
+     {
+       "epoch": 4.0,
+       "eval_accuracy": 0.8882189239332097,
+       "eval_f1": 0.8875278557938429,
+       "eval_loss": 0.4905070662498474,
+       "eval_precision": 0.8867163445635435,
+       "eval_recall": 0.8889302925122561,
+       "eval_runtime": 21.5859,
+       "eval_samples_per_second": 99.88,
+       "eval_steps_per_second": 6.254,
+       "step": 4312
+     },
+     {
+       "epoch": 5.0,
+       "learning_rate": 1.5e-06,
+       "loss": 0.1057,
+       "step": 5390
+     },
+     {
+       "epoch": 5.0,
+       "eval_accuracy": 0.8979591836734694,
+       "eval_f1": 0.896845694799659,
+       "eval_loss": 0.5339986085891724,
+       "eval_precision": 0.8975153439448489,
+       "eval_recall": 0.8962778432128748,
+       "eval_runtime": 21.6294,
+       "eval_samples_per_second": 99.679,
+       "eval_steps_per_second": 6.242,
+       "step": 5390
+     },
+     {
+       "epoch": 6.0,
+       "learning_rate": 1.2000000000000002e-06,
+       "loss": 0.0784,
+       "step": 6468
+     },
+     {
+       "epoch": 6.0,
+       "eval_accuracy": 0.8979591836734694,
+       "eval_f1": 0.8963577501817979,
+       "eval_loss": 0.5496116280555725,
+       "eval_precision": 0.8998756153165819,
+       "eval_recall": 0.894290469291251,
+       "eval_runtime": 21.5964,
+       "eval_samples_per_second": 99.831,
+       "eval_steps_per_second": 6.251,
+       "step": 6468
+     },
+     {
+       "epoch": 7.0,
+       "learning_rate": 9e-07,
+       "loss": 0.0613,
+       "step": 7546
+     },
+     {
+       "epoch": 7.0,
+       "eval_accuracy": 0.8984230055658627,
+       "eval_f1": 0.8971592486956067,
+       "eval_loss": 0.6363572478294373,
+       "eval_precision": 0.8986339225376554,
+       "eval_recall": 0.8960685088094171,
+       "eval_runtime": 21.6326,
+       "eval_samples_per_second": 99.665,
+       "eval_steps_per_second": 6.241,
+       "step": 7546
+     },
+     {
+       "epoch": 8.0,
+       "learning_rate": 6.000000000000001e-07,
+       "loss": 0.0461,
+       "step": 8624
+     },
+     {
+       "epoch": 8.0,
+       "eval_accuracy": 0.900278293135436,
+       "eval_f1": 0.8989520476739503,
+       "eval_loss": 0.6520560383796692,
+       "eval_precision": 0.9009346244640362,
+       "eval_recall": 0.8975798858302324,
+       "eval_runtime": 21.6557,
+       "eval_samples_per_second": 99.558,
+       "eval_steps_per_second": 6.234,
+       "step": 8624
+     },
+     {
+       "epoch": 9.0,
+       "learning_rate": 3.0000000000000004e-07,
+       "loss": 0.0374,
+       "step": 9702
+     },
+     {
+       "epoch": 9.0,
+       "eval_accuracy": 0.897495361781076,
+       "eval_f1": 0.8959944382052516,
+       "eval_loss": 0.6639323830604553,
+       "eval_precision": 0.8988044556565169,
+       "eval_recall": 0.8942287981599419,
+       "eval_runtime": 21.6348,
+       "eval_samples_per_second": 99.654,
+       "eval_steps_per_second": 6.24,
+       "step": 9702
+     },
+     {
+       "epoch": 10.0,
+       "learning_rate": 0.0,
+       "loss": 0.0306,
+       "step": 10780
+     },
+     {
+       "epoch": 10.0,
+       "eval_accuracy": 0.8965677179962894,
+       "eval_f1": 0.8953454329048005,
+       "eval_loss": 0.6889322996139526,
+       "eval_precision": 0.8964728057179624,
+       "eval_recall": 0.8944667966103461,
+       "eval_runtime": 21.547,
+       "eval_samples_per_second": 100.06,
+       "eval_steps_per_second": 6.265,
+       "step": 10780
+     }
+   ],
+   "max_steps": 10780,
+   "num_train_epochs": 10,
+   "total_flos": 4.53813948284928e+16,
+   "trial_name": null,
+   "trial_params": null
+ }
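
The learning rate in this log falls by 3e-7 per epoch, from 2.7e-6 at the end of epoch 1 to 0.0 at epoch 10, which is consistent with a linear scheduler starting at 3e-6; the recorded `best_model_checkpoint` matches the lowest `eval_loss` (epoch 1). A hedged reconstruction of the corresponding `TrainingArguments` — values not logged here (batch size, warmup, weight decay) are deliberately left at defaults:

```python
from transformers import TrainingArguments

# Sketch inferred from trainer_state.json, not the authors' actual script.
args = TrainingArguments(
    output_dir="./model/sent_2class_bertin_project_bertin_roberta_base_spanish",
    learning_rate=3e-6,           # implied by the decay: 2.7e-6 remains after 1/10 of training
    num_train_epochs=10,          # matches "num_train_epochs": 10
    lr_scheduler_type="linear",   # lr reaches exactly 0.0 at the final step
    evaluation_strategy="epoch",  # one eval entry per epoch in log_history
    save_strategy="epoch",
    load_best_model_at_end=True,  # "best_model_checkpoint" points at epoch 1
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
```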
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:227285920adc45962e40c3b55f40a00ae0400752e6f1232660b47e4fe96e8bc7
+ size 3311
vocab.json ADDED
The diff for this file is too large to render. See raw diff