Text Generation
Transformers
Safetensors
English
qwen2
conversational
text-generation-inference
Inference Endpoints
t1101675 committed on
Commit e82f9bb
1 Parent(s): 1852226

Update README.md

Files changed (1)
  1. README.md +9 -2
README.md CHANGED

````diff
@@ -13,7 +13,7 @@ pipeline_tag: text-generation
 
 # MiniPLM-QWen-500M
 
-[paper]() | [code](https://github.com/thu-coai/MiniPLM)
+[paper](https://arxiv.org/abs/2410.17215) | [code](https://github.com/thu-coai/MiniPLM)
 
 **MiniPLM-QWen-500M** is a 500M model with the QWen architecture, pre-trained from scratch on [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted) using the MiniPLM knowledge distillation framework, with the [official QWen1.5-1.8B](https://huggingface.co/Qwen/Qwen1.5-1.8B) as the teacher model.
 
@@ -37,4 +37,11 @@ MiniPLM models achieve better performance given the same computation and scale
 
 ## Citation
 
-TODO
+```bibtex
+@article{miniplm,
+  title={MiniPLM: Knowledge Distillation for Pre-Training Language Models},
+  author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
+  journal={arXiv preprint arXiv:2410.17215},
+  year={2024}
+}
+```
````
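For context beyond the diff: the card's tags (transformers, safetensors, qwen2, text-generation) indicate the checkpoint loads as a standard causal LM. A minimal usage sketch with the transformers library follows; the repo id `MiniLLM/MiniPLM-Qwen-500M` is an assumption, not stated anywhere on this page, so substitute the actual repo id if it differs.

```python
# Minimal sketch: load the distilled student model as an ordinary causal LM
# and generate a short continuation. The repo id below is an assumption
# based on this model card, not confirmed by the diff above.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "MiniLLM/MiniPLM-Qwen-500M"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Greedy decoding of a short continuation from a plain-text prompt.
inputs = tokenizer("The Pile is a large-scale", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```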