bourdoiscatie committed 00762c8 (parent: cd37af8): Update README.md

README.md CHANGED
@@ -38,13 +38,17 @@ Our methodology is described in a blog post available in [English](https://blog.

## Results (french QA test split)
-| Model | Exact_match | F1-score | Answer_f1 | NoAnswer_f1 |
| ----------- | ----------- | ----------- | ----------- | ----------- |
-| [
-| [
-| [
-|

### Usage
@@ -84,14 +88,14 @@ A Space has been created to test the model. It is available [here](https://huggi

### QAmemBERT2 & QAmemBERTa
```
-@misc {
	author = { {BOURDOIS, Loïck} },
	organization = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
-
-
-
-
-
}
```

## Results (french QA test split)
+| Model | Parameters | Context | Exact_match | F1-score | Answer_f1 | NoAnswer_f1 |
+| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
+| [etalab/camembert-base-squadFR-fquad-piaf](https://huggingface.co/AgentPublic/camembert-base-squadFR-fquad-piaf) | 110M | 512 tokens | 39.30 | 51.55 | 79.54 | 23.58 |
+| [QAmembert](https://huggingface.co/CATIE-AQ/QAmembert) | 110M | 512 tokens | 77.14 | 86.88 | 75.66 | 98.11 |
+| [QAmembert2](https://huggingface.co/CATIE-AQ/QAmembert2) (this version) | 112M | 1024 tokens | 76.47 | 88.25 | 78.66 | 97.84 |
+| [QAmembert-large](https://huggingface.co/CATIE-AQ/QAmembert-large) | 336M | 512 tokens | 77.14 | 88.74 | 78.83 | **98.65** |
+| [QAmemberta](https://huggingface.co/CATIE-AQ/QAmemberta) | 111M | 1024 tokens | **78.18** | **89.53** | **81.40** | 97.64 |

+Looking at the “Answer_f1” column, Etalab's model appears competitive on texts where the answer to the question is actually present in the provided text (it outperforms QAmemBERT-large there, for example). Its inability to handle texts where the answer is absent, however, is a drawback.
+Whether in terms of metrics, number of parameters or context size, QAmemBERTa achieves the best results in all cases.
+We therefore invite the reader to choose this model.
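Exact_match and F1-score in the table above are the usual SQuAD-style span metrics. A minimal sketch, assuming whitespace tokenization and simplified normalization (the official SQuAD evaluation script additionally strips punctuation and articles):

```python
from collections import Counter

def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted and a reference answer span."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        # No-answer convention: both empty -> 1.0, only one empty -> 0.0
        return float(pred_tokens == ref_tokens)
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("le chat noir", "chat noir"))  # -> 0.8
```

Answer_f1 and NoAnswer_f1 then correspond to this F1 averaged over the answerable and unanswerable examples of the test split, respectively.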

### Usage

### QAmemBERT2 & QAmemBERTa
```
+@misc {qamemberta2024,
	author = { {BOURDOIS, Loïck} },
	organization = { {Centre Aquitain des Technologies de l'Information et Electroniques} },
+	title = { QAmemberta (Revision 976a70b) },
+	year = 2024,
+	url = { https://huggingface.co/CATIE-AQ/QAmemberta },
+	doi = { 10.57967/hf/3639 },
+	publisher = { Hugging Face }
}
```