mathiascreutz committed
Commit f549294 • 1 Parent(s): e110f3a
Minor modifications
README.md CHANGED
@@ -157,7 +157,14 @@ data = load_dataset("GEM/opusparcus", "fr.90")

 Remark regarding the optimal choice of training set qualities:
 Previous work suggests that a larger and noisier set is better than a
-smaller and clean set. See Sjöblom et al. (2018). [Paraphrase
+smaller and clean set. See Sjöblom et al. (2018). [Paraphrase
+Detection on Noisy Subtitles in Six
+Languages](http://noisy-text.github.io/2018/pdf/W-NUT20189.pdf). In
+*Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on
+Noisy User-generated Text*, and Vahtola et al. (2021). [Coping with
+Noisy Training Data Labels in Paraphrase
+Detection](https://aclanthology.org/2021.wnut-1.32/). In *Proceedings
+of the 7th Workshop on Noisy User-generated Text*.

 ### Data Instances

@@ -250,8 +257,8 @@ the value 0.0 in the `annot_score` field.

 For an assessment of of inter-annotator agreement, see Aulamo et
 al. (2019). [Annotation of subtitle paraphrases using a new web
-tool.](http://ceur-ws.org/Vol-2364/3_paper.pdf) In Proceedings of the
-Digital Humanities in the Nordic Countries 4th Conference
+tool.](http://ceur-ws.org/Vol-2364/3_paper.pdf) In *Proceedings of the
+Digital Humanities in the Nordic Countries 4th Conference*,
 Copenhagen, Denmark.

 ### Data Splits
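The remark in the first hunk is about the trade-off between training-set size and label quality. Below is a minimal sketch of how the two could be compared, assuming the `fr.90` configuration that appears in the hunk header; the stricter `fr.95` cut-off used here is an assumption and the available configuration names should be checked against the dataset card.

```python
# Minimal sketch (not from the dataset card): compare a noisier, larger
# training configuration against a cleaner, smaller one.
# "fr.90" is taken from the diff context above; "fr.95" is an assumed
# higher-quality cut-off and may differ from the actual configuration list.
from datasets import load_dataset

noisy = load_dataset("GEM/opusparcus", "fr.90")  # lower quality threshold, more pairs
clean = load_dataset("GEM/opusparcus", "fr.95")  # assumed stricter threshold, fewer pairs

print("fr.90 train size:", len(noisy["train"]))
print("fr.95 train size:", len(clean["train"]))
```

According to the work cited in the diff (Sjöblom et al., 2018; Vahtola et al., 2021), the larger, noisier configuration is the one expected to yield better paraphrase detection models.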