SivilTaram committed on
Commit
fcb60ec
1 Parent(s): 777b5b9

Update README.md

Files changed (1)
  1. README.md +6 -8
README.md CHANGED
@@ -71,6 +71,7 @@ valid:
 valid_the_pile_pile_cc: 1.0
 model_name: tinyllama_1_1b
 ```
+> Domain weights will be normalized to make sure their sum is 1.0 for train sets in our code.
 
 ## Model Variants
 
@@ -98,14 +99,11 @@ We evaluated each model using [lm-evaluation-harness](https://github.com/Eleuthe
 If you use these models in your research, please cite the RegMix paper:
 
 ```
-@misc{liu2024regmix,
-  title={RegMix: Data Mixture as Regression for Language Model Pre-training},
-  author={Qian Liu and Xiaosen Zheng and Niklas Muennighoff and Guangtao Zeng and Longxu Dou and Tianyu Pang and Jing Jiang and Min Lin},
-  year={2024},
-  eprint={2407.01492},
-  archivePrefix={arXiv},
-  primaryClass={cs.CL},
-  url={https://arxiv.org/abs/2407.01492},
+@article{liu2024regmix,
+  title={RegMix: Data Mixture as Regression for Language Model Pre-training},
+  author={Liu, Qian and Zheng, Xiaosen and Muennighoff, Niklas and Zeng, Guangtao and Dou, Longxu and Pang, Tianyu and Jiang, Jing and Lin, Min},
+  journal={arXiv preprint arXiv:2407.01492},
+  year={2024}
 }
 ```