# Model Card for bloom-1b7_it
This model is obtained by adapting bloom-1b7 to the Italian language. Italian is not among the languages supported by the original BLOOM model, which makes using BLOOM in Italian contexts challenging. We adapt the original BLOOM model using the MAD-X language adaptation strategy, and the adapted model is then fine-tuned on the Italian translation of the Dolly dataset.
## Model Details
### Model Description
We adapt bloom-1b7 to the Italian language using the MAD-X language adaptation strategy, following the procedure proposed in https://arxiv.org/abs/2212.09535
We use the default script parameters and select a sample of 100,000 Italian examples, drawn from the filtered OSCAR dataset for the Italian language released by Sarti.
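The sampling step can be sketched as follows. This is an illustrative, stdlib-only sketch of drawing a fixed-size sample from a corpus, not the actual adaptation script; the function name and seed are our own:

```python
import random

def sample_examples(corpus, k=100_000, seed=0):
    """Draw a reproducible random sample of at most k examples."""
    rng = random.Random(seed)  # fixed seed so the sample can be reproduced
    return rng.sample(corpus, min(k, len(corpus)))
```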
The adapted model is then fine-tuned on the Italian translation of the Dolly dataset. The Dolly dataset was automatically translated with an open-source machine translation tool, Argos Translate: https://pypi.org/project/argostranslate/
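Translating a single record with Argos Translate could look like the sketch below. This is an assumption about the translation setup, not the authors' script; it presumes the English-to-Italian language package has already been installed via `argostranslate.package`, and the helper name is ours:

```python
def translate_to_italian(text, from_code="en", to_code="it"):
    # Imported lazily so the sketch can be defined without the package present.
    import argostranslate.translate
    # translate(text, from_code, to_code) uses the installed language package
    # to return the translated string.
    return argostranslate.translate.translate(text, from_code, to_code)
```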
To fine-tune the adapted model, we use the script available here: https://github.com/hyintell/BLOOM-fine-tuning/tree/main
It is important to underline that when using the adapted LLM, or one of its fine-tuned variants, it is necessary to use the tokenizer of the adapted model. The BLOOM model adapted to the Italian language is available here: https://huggingface.co/basilepp19/bloom-1b7_it.
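Loading the adapted checkpoint with the Hugging Face transformers library could look like the sketch below. The key point from the note above is that the tokenizer and model both come from the adapted checkpoint, never from the original bloom-1b7; the helper names are ours:

```python
MODEL_ID = "basilepp19/bloom-1b7_it"

def load_adapted(model_id=MODEL_ID):
    # Imported lazily so the sketch stays importable without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    # Tokenizer and model must both come from the adapted checkpoint,
    # since language adaptation changes the vocabulary and embeddings.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return tokenizer, model

def generate_italian(prompt, max_new_tokens=50):
    tokenizer, model = load_adapted()
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```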
- Developed by: Pierpaolo Basile, Pierluigi Cassotti, Marco Polignano, Lucia Siciliani, Giovanni Semeraro. Department of Computer Science, University of Bari Aldo Moro, Italy
- Model type: BLOOM
- Language(s) (NLP): Italian
- License: BigScience BLOOM RAIL 1.0
## Citation
Pierpaolo Basile, Pierluigi Cassotti, Marco Polignano, Lucia Siciliani, Giovanni Semeraro. On the impact of Language Adaptation for Large Language Models: A case study for the Italian language using only open resources. Proceedings of the Ninth Italian Conference on Computational Linguistics (CLiC-it 2023).