mabornea committed on
Commit ea00b3d
1 Parent(s): 6f50c91

Update README.md

Files changed (1):
  1. README.md +44 -11
README.md CHANGED
@@ -9,17 +9,13 @@ language:
 
 # Model description
 
- Reading comprehension, XLM-RoBERTa model for [TyDiQA Primary Tasks](https://arxiv.org/abs/2003.05002).
-
- - **Passage selection task (SelectP):** Given a list of the passages in the article, return either (a) the index of the passage that answers the question or (b) NULL if no such passage exists.
-
- - **Minimal answer span task (MinSpan):** Given the full text of an article, return one of (a) the start and end byte indices of the minimal span that completely answers the question; (b) YES or NO if the question requires a yes/no answer and we can draw a conclusion from the passage; (c) NULL if it is not possible to produce a minimal answer for this question.
+ An XLM-RoBERTa reading comprehension model for [TyDiQA Primary Tasks](https://arxiv.org/abs/2003.05002).
 
 The model is initialized with [xlm-roberta-large](https://huggingface.co/xlm-roberta-large/) and fine-tuned on the [TyDiQA train data](https://huggingface.co/datasets/tydiqa).
 
 ## Intended uses & limitations
 
- You can use the raw model for the reading comprehension task.
+ You can use the raw model for the reading comprehension task. Biases associated with the pre-existing language model, xlm-roberta-large, that we used may be present in our fine-tuned model, tydiqa-primary-task-xlm-roberta-large.
 
 ## Usage
 
@@ -28,10 +24,47 @@ You can use this model directly with the [PrimeQA](https://github.com/primeqa/pr
 ### BibTeX entry and citation info
 
 ```bibtex
- @article{tydiqa,
- title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
- author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki}
- year = {2020},
- journal = {Transactions of the Association for Computational Linguistics}
+ @article{clark-etal-2020-tydi,
+   title = "{T}y{D}i {QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages",
+   author = "Clark, Jonathan H. and
+     Choi, Eunsol and
+     Collins, Michael and
+     Garrette, Dan and
+     Kwiatkowski, Tom and
+     Nikolaev, Vitaly and
+     Palomaki, Jennimaria",
+   journal = "Transactions of the Association for Computational Linguistics",
+   volume = "8",
+   year = "2020",
+   address = "Cambridge, MA",
+   publisher = "MIT Press",
+   url = "https://aclanthology.org/2020.tacl-1.30",
+   doi = "10.1162/tacl_a_00317",
+   pages = "454--470",
+ }
+ ```
+
+ ```bibtex
+ @article{DBLP:journals/corr/abs-1911-02116,
+   author = {Alexis Conneau and
+             Kartikay Khandelwal and
+             Naman Goyal and
+             Vishrav Chaudhary and
+             Guillaume Wenzek and
+             Francisco Guzm{\'{a}}n and
+             Edouard Grave and
+             Myle Ott and
+             Luke Zettlemoyer and
+             Veselin Stoyanov},
+   title = {Unsupervised Cross-lingual Representation Learning at Scale},
+   journal = {CoRR},
+   volume = {abs/1911.02116},
+   year = {2019},
+   url = {http://arxiv.org/abs/1911.02116},
+   eprinttype = {arXiv},
+   eprint = {1911.02116},
+   timestamp = {Mon, 11 Nov 2019 18:38:09 +0100},
+   biburl = {https://dblp.org/rec/journals/corr/abs-1911-02116.bib},
+   bibsource = {dblp computer science bibliography, https://dblp.org}
 }
 ```
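
The updated card defers actual usage to the PrimeQA library, whose API is not shown in this commit. As a rough illustration of the "raw model" use mentioned under Intended uses & limitations, here is a minimal sketch that assumes the checkpoint is published under the repo id PrimeQA/tydiqa-primary-task-xlm-roberta-large (hypothetical here) and that it loads through transformers' generic extractive-QA pipeline; TyDiQA's passage-selection and YES/NO/NULL post-processing are not reproduced.

```python
# A minimal sketch, NOT from the model card: the card recommends PrimeQA,
# whose API this commit does not show. Both the repo id below and the
# assumption that the checkpoint works with the generic extractive-QA head
# are illustrative assumptions.
from transformers import pipeline

MODEL_ID = "PrimeQA/tydiqa-primary-task-xlm-roberta-large"  # hypothetical repo id

# "question-answering" wraps AutoTokenizer + AutoModelForQuestionAnswering.
# It returns a character-level answer span over the context, not TyDiQA's
# byte offsets or YES/NO/NULL labels, which need task-specific post-processing.
qa = pipeline("question-answering", model=MODEL_ID)

result = qa(
    question="Mikä on Suomen pääkaupunki?",  # "What is the capital of Finland?"
    context="Helsinki on Suomen pääkaupunki ja suurin kaupunki.",
)
print(result)  # e.g. {'score': ..., 'start': 0, 'end': 8, 'answer': 'Helsinki'}
```

For the TyDiQA primary tasks proper (SelectP and MinSpan scoring), the PrimeQA repository linked in the card is the supported path.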