xyma committed
Commit 6494ee2 · 1 Parent(s): 1da98c8

Update README.md

Files changed (1)
  1. README.md +4 -1
README.md CHANGED
@@ -9,11 +9,14 @@ datasets:
 ---
 
 
-# PROP-wiki
+# PROP-marco-step400k
 
 **PROP**, **P**re-training with **R**epresentative w**O**rds **P**rediction, is a new pre-training method tailored for ad-hoc retrieval. PROP is inspired by the classical statistical language model for IR, specifically the query likelihood model, which assumes that the query is generated as the piece of text representative of the “ideal” document. Based on this idea, we construct the representative words prediction (ROP) task for pre-training. The full paper can be found [here](https://arxiv.org/pdf/2010.10137.pdf).
 
 
+This model is pre-trained with more steps than [PROP-marco](https://huggingface.co/xyma/PROP-marco) on the MS MARCO document corpus, and was used on the MS MARCO Document Ranking Leaderboard, where we reached 1st place.
+
+
 # Citation
 If you find our work useful, please consider citing our paper:
 ```bibtex
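
A minimal usage sketch for the model card described in this commit: it loads the pre-trained checkpoint with the standard Hugging Face `transformers` `AutoTokenizer`/`AutoModel` calls and encodes a query–document pair as a BERT-style encoder would before fine-tuning for ad-hoc retrieval. The repository id `xyma/PROP-marco-step400k` is assumed from the renamed heading and the linked `xyma/PROP-marco` card, not stated in the diff; verify it on the Hub.

```python
# Sketch only: load the PROP checkpoint as a BERT-style encoder.
# The repo id below is an assumption inferred from the card's heading.
from transformers import AutoTokenizer, AutoModel

model_id = "xyma/PROP-marco-step400k"  # assumed Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Encode a (query, document) pair as one sequence, the usual input
# format for fine-tuning a BERT re-ranker on MS MARCO documents.
inputs = tokenizer(
    "what is representative words prediction",
    "PROP is pre-trained with the ROP task on the MS MARCO document corpus.",
    return_tensors="pt",
    truncation=True,
)
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```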