Tags: Fill-Mask, Transformers, PyTorch, xlm-roberta, Inference Endpoints
luciusssss committed
Commit 549b458 (1 parent: ca66086)

Update README.md

Files changed (1): README.md (+3, -1)
README.md CHANGED
````diff
@@ -8,7 +8,7 @@ language:
 - mn
 - kk
 ---
-# [MC^2XLMR-large]
+# MC^2XLMR-large
 [Github Repo](https://github.com/luciusssss/mc2_corpus)
 
 
@@ -17,6 +17,8 @@ We continually pretrain XLM-RoBERTa-large with [MC^2](https://huggingface.co/dat
 
 See details in the [paper](https://arxiv.org/abs/2311.08348).
 
+*We have also released another model trained on MC^2: [MC^2Llama-13B](https://huggingface.co/pkupie/mc2-llama-13b).*
+
 ## Citation
 ```
 @misc{zhang2023mc2,
````
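For context, the README edited here describes a fill-mask checkpoint (XLM-RoBERTa-large continually pretrained on MC^2). Below is a minimal usage sketch with the Transformers pipeline API; the repo ID `pkupie/mc2-xlmr-large` is an assumption inferred from the sibling MC^2Llama-13B repo linked in the diff, not confirmed by this page.

```python
from transformers import pipeline

# Assumed repo ID, inferred from the sibling pkupie/mc2-llama-13b repo;
# verify the actual ID on the Hub before use.
MODEL_ID = "pkupie/mc2-xlmr-large"

# XLM-RoBERTa-based checkpoints use <mask> as the mask token.
fill_mask = pipeline("fill-mask", model=MODEL_ID)

# Example in Kazakh (kk), one of the languages listed in the card metadata:
# "Almaty is the largest <mask> in Kazakhstan."
for pred in fill_mask("Алматы - Қазақстандағы ең үлкен <mask>."):
    print(pred["token_str"], round(pred["score"], 4))
```

The same checkpoint could also be loaded with `AutoTokenizer` and `AutoModelForMaskedLM` for a custom inference loop instead of the pipeline wrapper.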