---
license: apache-2.0
language:
- en
base_model:
- meta-llama/Llama-3.1-8B
---

# Model Card for Llama-3.1_OpenScholar-8B

Llama-3.1_OpenScholar-8B is a fine-tuned 8B model for scientific literature synthesis. It is trained on the [os-data](https://huggingface.co/datasets/OpenScholar/os-data) dataset.

### Model Description

- **Developed by:** University of Washington, Allen Institute for AI (AI2)
- **Model type:** a Transformer-style autoregressive language model.
- **Language(s) (NLP):** English
- **License:** The code and model are released under Apache 2.0.
- **Date cutoff:** Training data is based on peS2o v2, which includes papers up to January 2023. We also mix in training data from Tulu3 and [SciRIFF](https://huggingface.co/datasets/allenai/SciRIFF-train-mix).

### Model Sources

- **Project Page:** https://open-scholar.allen.ai/
- **Repositories:**
  - Core repo (training, inference, fine-tuning, etc.): https://github.com/AkariAsai/OpenScholar
  - Evaluation code: https://github.com/AkariAsai/ScholarQABench
- **Paper:** [Link](https://openscholar.allen.ai/paper)
- **Technical blog post:** https://allenai.org/blog/openscholar

## License

Llama-3.1_OpenScholar-8B is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It is licensed under Apache 2.0.

## Citation

If you find this model useful in your work, please cite it with:

```
@article{openscholar,
  title={{OpenScholar}: Synthesizing Scientific Literature with Retrieval-Augmented Language Models},
  author={Asai, Akari and He*, Jacqueline and Shao*, Rulin and Shi, Weijia and Singh, Amanpreet and Chang, Joseph Chee and Lo, Kyle and Soldaini, Luca and Feldman, Sergey and D'Arcy, Mike and Wadden, David and Latzke, Matt and Tian, Minyang and Ji, Pan and Liu, Shengyan and Tong, Hao and Wu, Bohao and Xiong, Yanyu and Zettlemoyer, Luke and Weld, Dan and Neubig, Graham and Downey, Doug and Yih, Wen-tau and Koh, Pang Wei and Hajishirzi, Hannaneh},
  journal={arXiv preprint},
  year={2024},
}
```
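
## Example usage

The card itself does not include an inference snippet. As a minimal sketch (not the official inference recipe), the model can be loaded with the Hugging Face `transformers` library. The repository id `OpenScholar/Llama-3.1_OpenScholar-8B`, the prompt, and the generation settings below are assumptions; in the full OpenScholar pipeline the model is conditioned on retrieved passages, for which the core repo linked above is the reference.

```python
# Minimal sketch, assuming the model is hosted at "OpenScholar/Llama-3.1_OpenScholar-8B";
# check the model page for the exact repo id and recommended prompting format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenScholar/Llama-3.1_OpenScholar-8B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 8B weights; bf16 keeps memory use manageable
    device_map="auto",
)

# Standalone generation only; the OpenScholar pipeline would prepend retrieved passages here.
prompt = "Summarize recent work on retrieval-augmented generation for scientific literature."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```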