Weizhe Yuan committed
Commit
37313af
1 Parent(s): bac48f3

Update README.md

Files changed (1):
  1. README.md +14 -14
README.md CHANGED
@@ -21,23 +21,23 @@ In such a pre-training paradigm,
 
 
 ## Model Description
-We release all models introduced in our [paper](), covering 13 different application scenarios. Each model contains 11 billion parameters.
+We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147.pdf), covering 13 different application scenarios. Each model contains 11 billion parameters.
 
 | Model | Description | Recommended Application
 | ----------- | ----------- |----------- |
-| **rst-all-11b** | **description here** | |
-| rst-fact-retrieval-11b | description here | |
-| rst-summarization-11b | description here | |
-| rst-temporal-reasoning-11b | description here | |
-| rst-information-extraction-11b | description here | |
-| rst-intent-detection-11b | description here | |
-| rst-topic-classification-11b | description here | |
-| rst-word-sense-disambiguation-11b | description here | |
-| rst-natural-language-inference-11b | description here | |
-| rst-sentiment-classification-11b | description here | |
-| rst-gaokao-rc-11b | description here | |
-| rst-gaokao-cloze-11b | description here | |
-| rst-gaokao-writing-11b | description here | |
+| **rst-all-11b** | **Trained with all the signals below except signals that are used to train Gaokao models** | **All applications below** |
+| rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing | Fact retrieval |
+| rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization |
+| rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning |
+| rst-information-extraction-11b | Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity | Named entity recognition, relation extraction |
+| rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction |
+| rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | Topic classification |
+| rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging |
+| rst-natural-language-inference-11b | Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information | Natural language inference, multiple-choice question answering |
+| rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment Classification |
+| rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model | Multiple-choice question answering, Gaokao reading comprehension |
+| rst-gaokao-cloze-11b | Trained with manually crafted cloze datasets | Cloze filling, Gaokao cloze questions |
+| rst-gaokao-writing-11b | Trained with example essays from past Gaokao-English exams and grammar error correction signals | Essay writing, grammar error correction |
 
 
 
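Since the updated table enumerates thirteen released checkpoints that differ only in the scenario slug, a minimal usage sketch may help. The `XLab` Hub organization name and the T5-style `AutoModelForSeq2SeqLM` loading classes below are assumptions for illustration, not details stated in this diff:

```python
# Hedged sketch: composing a Hugging Face repo id for one of the RST
# checkpoints in the table above. The "XLab" org name is an assumption.
def rst_repo(scenario: str) -> str:
    """Map a scenario slug from the table (e.g. "summarization") to a repo id."""
    return f"XLab/rst-{scenario}-11b"

repo = rst_repo("all")
print(repo)  # XLab/rst-all-11b

# Loading would then follow the standard transformers pattern (commented out:
# each checkpoint has 11 billion parameters, a multi-gigabyte download):
# from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# tokenizer = AutoTokenizer.from_pretrained(repo)
# model = AutoModelForSeq2SeqLM.from_pretrained(repo)
```

Swapping `"all"` for any slug in the table (e.g. `"gaokao-cloze"`) selects the corresponding specialized model.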