Update README.md
README.md CHANGED
@@ -39,4 +39,15 @@ The Llama-3.1_OpenScholar-8B is trained on the [os-data](https://huggingface.co/
## License

-Llama-3.1_OpenScholar-8B is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It is licensed under Apache 2.0.
+Llama-3.1_OpenScholar-8B is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It is licensed under Apache 2.0.
+
+## Citation
+If you find OpenScholar useful in your work, please cite it with:
+```
+@article{openscholar,
+  title={{OpenScholar}: Synthesizing Scientific Literature with Retrieval-Augmented Language Models},
+  author={Asai, Akari and He*, Jacqueline and Shao*, Rulin and Shi, Weijia and Singh, Amanpreet and Chang, Joseph Chee and Lo, Kyle and Soldaini, Luca and Feldman, Sergey and D'arcy, Mike and Wadden, David and Latzke, Matt and Tian, Minyang and Ji, Pan and Liu, Shengyan and Tong, Hao and Wu, Bohao and Xiong, Yanyu and Zettlemoyer, Luke and Weld, Dan and Neubig, Graham and Downey, Doug and Yih, Wen-tau and Koh, Pang Wei and Hajishirzi, Hannaneh},
+  journal={Arxiv},
+  year={2024},
+}
+```