mllmTeam committed on
Commit
bd2d8d3
1 Parent(s): d6f57bf

Update README.md

Files changed (1)
  1. README.md +16 -2
README.md CHANGED
@@ -70,5 +70,19 @@ The training dataset PhoneLM used is comprised of a filtered mixture of open-sou
70    | Qwen 1.5-500M | 49.2 | 55.7 | 69.5 | 82.5 | 49.5 | 52.3 | 29.4 | 55.44 |
71    | Cerebras-GPT-590M | 32.3 | 49.8 | 62.8 | 68.2 | 59.2 | 41.2 | 23.5 | 48.14 |
72
73  - ## LICENSE
74  - * This repository is released under the [Apache-2.0](https://huggingface.co/mllmTeam/PhoneLM-0.5B/blob/main/README.md) License.
73  + ## License
74  + * This repository is released under the [Apache-2.0](https://huggingface.co/mllmTeam/PhoneLM-0.5B/blob/main/README.md) License.
75  +
76  +
77  + ## Citation
78  + ```
79  + @misc{yi2024phonelmanefficientcapablesmall,
80  +       title={PhoneLM: an Efficient and Capable Small Language Model Family through Principled Pre-training},
81  +       author={Rongjie Yi and Xiang Li and Weikai Xie and Zhenyan Lu and Chenghua Wang and Ao Zhou and Shangguang Wang and Xiwen Zhang and Mengwei Xu},
82  +       year={2024},
83  +       eprint={2411.05046},
84  +       archivePrefix={arXiv},
85  +       primaryClass={cs.CL},
86  +       url={https://arxiv.org/abs/2411.05046},
87  + }
88  + ```
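The last column of the benchmark rows in the diff context above appears to be the arithmetic mean of the seven preceding scores. A minimal Python sketch to verify that, with the score lists copied from the two rows shown (the row names and column interpretation are taken from the table; everything else here is illustrative):

```python
# Check that the final "average" column equals the mean of the
# seven individual benchmark scores, rounded to two decimals.
rows = {
    "Qwen 1.5-500M": ([49.2, 55.7, 69.5, 82.5, 49.5, 52.3, 29.4], 55.44),
    "Cerebras-GPT-590M": ([32.3, 49.8, 62.8, 68.2, 59.2, 41.2, 23.5], 48.14),
}

for name, (scores, reported_avg) in rows.items():
    computed = round(sum(scores) / len(scores), 2)
    print(f"{name}: computed {computed}, reported {reported_avg}")
    assert computed == reported_avg
```

Both rows check out, so the Average column is a plain unweighted mean over the seven benchmarks.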