---
license: apache-2.0
language:
- san
datasets:
- allenai/MADLAD-400
- allenai/nllb
- oscar-corpus/OSCAR-2109
- cis-lmu/Glot500
library_name: transformers
pipeline_tag: text-generation
tags:
- goldfish
- arxiv:2408.10441
---
# san_deva_10mb
Goldfish is a suite of monolingual language models trained for 350 languages.
This model is the <b>Sanskrit</b> (Devanagari script) model trained on 10MB of data, after accounting for an estimated byte premium of 2.54; content-matched text in Sanskrit takes on average 2.54x as many UTF-8 bytes to encode as English.
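As a rough illustration of that scaling (a sketch using only the numbers stated on this card):
```python
# Byte-premium scaling: the "10MB" refers to English-equivalent bytes.
byte_premium = 2.54   # Sanskrit (Devanagari) vs. English, UTF-8 bytes for content-matched text
scaled_mb = 10.0      # dataset size after dividing by the byte premium
raw_mb = scaled_mb * byte_premium
print(f"~{raw_mb:.1f} MB of raw Sanskrit text ≈ {scaled_mb:.0f} MB of English-equivalent text")
```
This matches the raw vs. scaled training-data sizes listed under model details below (25.43MB raw, 10.005MB scaled).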
The Goldfish models are trained primarily for comparability across languages and for low-resource languages; for high-resource languages, Goldfish is not designed to be competitive with modern large language models (LLMs).
Note: This language is available in Goldfish with other scripts (writing systems). See: san_latn.
Note: san_deva is a [macrolanguage](https://iso639-3.sil.org/code_tables/639/data) code. None of the individual languages it contains are included in Goldfish for the Devanagari script.
All training and hyperparameter details are in our paper, [Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024)](https://www.arxiv.org/abs/2408.10441).
Training code and sample usage: https://github.com/tylerachang/goldfish
Sample usage also in this Google Colab: [link](https://colab.research.google.com/drive/1rHFpnQsyXJ32ONwCosWZ7frjOYjbGCXG?usp=sharing)
## Model details:
To access all Goldfish model details programmatically, see https://github.com/tylerachang/goldfish/blob/main/model_details.json.
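For example, a minimal lookup sketch (this assumes the JSON maps model names like `san_deva_10mb` to their details; check the file's actual structure if the key is missing):
```python
import json, urllib.request

# Raw view of the model_details.json file linked above.
url = "https://raw.githubusercontent.com/tylerachang/goldfish/main/model_details.json"
with urllib.request.urlopen(url) as f:
    details = json.load(f)

# Assumed layout: a mapping from model name to its details; verify against the file itself.
print(details.get("san_deva_10mb", "key not found -- inspect details.keys()"))
```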
All models are trained with a [CLS] (same as [BOS]) token prepended, and a [SEP] (same as [EOS]) token separating sequences.
For best results, make sure that [CLS] is prepended to your input sequence (see sample usage linked above)!
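A minimal generation sketch along those lines (the model ID, prompt, and `cls_token` handling here are assumptions; the repository and Colab linked above are the authoritative usage examples):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "goldfish-models/san_deva_10mb"  # assumed Hugging Face model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prepend [CLS] to the input, as described above
# (assumes the tokenizer exposes it as cls_token).
prompt = tokenizer.cls_token + "अथ योगानुशासनम्"
inputs = tokenizer(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```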
Details for this model specifically:
* Architecture: gpt2
* Parameters: 39087104
* Maximum sequence length: 512 tokens
* Training text data (raw): 25.43MB
* Training text data (byte premium scaled): 10.005MB
* Training tokens: 2752000 (x10 epochs)
* Vocabulary size: 50000
* Compute cost: 2081674572595200.0 FLOPs or ~0.2 NVIDIA A6000 GPU hours
Training datasets (percentages prior to deduplication):
* 39.60135%: [MADLAD-400 (CommonCrawl)](https://huggingface.co/datasets/allenai/MADLAD-400)
* 37.07458%: [NLLB (CommonCrawl and ParaCrawl)](https://huggingface.co/datasets/allenai/nllb)
* 9.73656%: [Wikipedia 2023/08](https://dumps.wikimedia.org/)
* 7.30509%: [OSCAR 2021/09](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)
* 5.49767%: [Glot500](https://huggingface.co/datasets/cis-lmu/Glot500), including [CCNet](https://github.com/facebookresearch/cc_net), [Hindialect](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-4839), [Wortschatz Leipzig Data](https://wortschatz.uni-leipzig.de/en/download), [OSCAR](https://oscar-project.org/)
* 0.78475%: [eBible](https://ebible.org/find/)
## Citation
If you use this model, please cite:
```
@article{chang-etal-2024-goldfish,
title={Goldfish: Monolingual Language Models for 350 Languages},
author={Chang, Tyler A. and Arnett, Catherine and Tu, Zhuowen and Bergen, Benjamin K.},
journal={Preprint},
year={2024},
url={https://www.arxiv.org/abs/2408.10441},
}
```