---
license: llama2
datasets:
- HiTZ/euscrawl
language:
- eu
- en
metrics:
- accuracy
- f1
- perplexity
pipeline_tag: text-generation
---

# **Model Card for Latxa 70b**
IMPORTANT: This model is outdated and made available publicly for reproducibility purposes only. Please use the most recent version from [our HuggingFace collection](https://huggingface.co/collections/HiTZ/latxa-65a697e6838b3acc53677304).

Latxa is a collection of foundation models specifically tuned for Basque. Based on Meta's LLaMA 2 model family, these models were further trained on EusCrawl, a highly curated Basque corpus ([Artetxe et al., 2022](https://aclanthology.org/2022.emnlp-main.499/)). Ranging from 7 billion to 70 billion parameters, these models are currently the biggest and best-performing LLMs built for Basque. This is the 70B repository; links to the other models can be found in the [Latxa Collection](https://huggingface.co/collections/HiTZ/latxa-65a697e6838b3acc53677304).

Read more about Latxa on our [website](https://www.hitz.eus/en/node/340) or on [LinkedIn](https://www.linkedin.com/pulse/presenting-latxa-largest-language-model-built-basque-hitz-zentroa-63qdf)!

# **Model Details**

## **Model Description**

Latxa is a family of Large Language Models (LLM) based on Meta's [LLaMA models](https://huggingface.co/meta-llama). Current LLMs exhibit incredible performance for high-resource languages such as English, but, in the case of Basque and other low-resource languages, their performance is close to random guessing. These limitations widen the gap between high- and low-resource languages when it comes to digital development. We present Latxa to overcome these limitations and to promote the development of LLM-based technology and research for the Basque language. Latxa models follow the same architecture as their original counterparts and were further trained on EusCrawl v1 ([Artetxe et al., 2022](https://aclanthology.org/2022.emnlp-main.499/)), a high-quality Basque corpus. The models are released in three sizes: 7B, 13B and 70B.
* **Developed by:** HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)
* **Model type:** Language model
* **Language(s) (NLP):** en, eu
* **License:** llama2
* **Parent Model:** meta-llama/Llama-2-70b
* **Contact:** hitz@ehu.eus

## **Getting started**

Use the code below to get started with the model.

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="HiTZ/latxa-70b-v1")

text = "Euskara adimen artifizialera iritsi da!"

pipe(text, max_new_tokens=50, num_beams=5)
>> [
    {
        'generated_text': 'Euskara adimen artifizialera iritsi da!\nEuskararen eta adimen '
        'artifizialaren arteko harremana aspaldikoa da, baina azken urteotan aurrerapauso '
        'handiak eman dira arlo horretan'
    }
]
```

# **Uses**

Latxa models are intended to be used with Basque data; for any other language the performance is not guaranteed. As with the original models, Latxa inherits the [LLaMA-2 License](https://ai.meta.com/llama/license/), which allows for commercial and research use.

## **Direct Use**

Latxa family models are pre-trained LLMs without any task-specific or instruction fine-tuning. That is, the model can either be prompted to perform a specific task or further fine-tuned for specific use cases.

## **Out-of-Scope Use**

The model was not fine-tuned to follow instructions or to work as a chat assistant; therefore, this kind of usage is neither tested nor recommended.

# **Bias, Risks, and Limitations**

In an effort to mitigate potentially disturbing or harmful content, Latxa has been trained on carefully selected and processed data, which comes mainly from local media, national/regional newspapers, encyclopedias and blogs (see EusCrawl below). Still, the model is based on the LLaMA models and can potentially carry the same biases, risks and limitations. Please see LLaMA's _Ethical Considerations and Limitations_ for further information.
# **Training Details**

## **Training Data**

The models were trained on EusCrawl v1, a high-quality corpus for Basque comprising 1.72M documents and 288M words, totalling 2.1GiB of uncompressed text. EusCrawl was built using ad-hoc scrapers to extract text from 33 Basque websites with high-quality content, resulting in cleaner text compared to general-purpose approaches. See more details in the [EusCrawl](https://huggingface.co/datasets/HiTZ/euscrawl) dataset card.

Additionally, 100K documents of English data, randomly selected from the [Pile](https://huggingface.co/datasets/EleutherAI/pile) dataset, were also included to avoid catastrophic forgetting.

## **Training Procedure**

The models were trained using the GPT-NeoX library on the HPC CINECA computing cluster. All models were trained with an effective batch size of approximately 2M tokens for 1000 to 2000 steps.
| Model | Steps | Sequence length | Effective batch size | Total tokens | GPU hours |
|---|---|---|---|---|---|
| Latxa 7B | 2000 | 4096 | 2M tokens/step | 4B | 359.2h |
| Latxa 13B | 1000 | 4096 | 2M tokens/step | 2B | 468.8h |
| Latxa 70B | 1680 | 4096 | 2M tokens/step | 3.4B | 6475.52h* |
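The "Total tokens" column follows directly from the step count and the roughly 2M-token effective batch size; a quick sanity check of the table above (values taken from the table, with 2M approximated as an even 2,000,000 tokens per step):

```python
# Sanity-check the "Total tokens" column: total = steps x tokens per step.
TOKENS_PER_STEP = 2_000_000  # effective batch size of ~2M tokens

steps = {"Latxa 7B": 2000, "Latxa 13B": 1000, "Latxa 70B": 1680}

for model, n in steps.items():
    total = n * TOKENS_PER_STEP
    print(f"{model}: {total / 1e9:.1f}B tokens")
# Latxa 7B: 4.0B tokens
# Latxa 13B: 2.0B tokens
# Latxa 70B: 3.4B tokens (3.36B, rounded as in the table)
```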
# **Evaluation**

| Model | Belebele | X-StoryCloze | BEC | Vaxx | BHTC | coref | QNLI | WiC | Average |
|---|---|---|---|---|---|---|---|---|---|
| Random | 25.00 | 50.00 | 33.33 | 33.33 | 8.33 | 50.00 | 50.00 | 50.00 | 37.50 |
| LLaMA 2 7B | 26.22 | 50.43 | 41.63 | 18.60 | 20.06 | 50.94 | 48.32 | 49.64 | 38.23 |
| LLaMA 2 13B | 32.00 | 50.63 | 41.09 | 18.25 | 27.35 | 49.23 | 48.74 | 49.21 | 39.56 |
| LLaMA 2 70B | 33.56 | 51.62 | 47.47 | 21.01 | 31.01 | 52.98 | 51.26 | 51.57 | 42.56 |
| BLOOM 7B | 27.00 | 57.18 | 37.94 | 20.72 | 39.10 | 48.21 | 47.48 | 47.57 | 40.65 |
| XGLM 7B | 23.88 | 57.71 | 39.94 | 21.58 | 36.73 | 50.94 | 50.42 | 49.21 | 41.30 |
| Latxa 7B | 35.67 | 63.13 | 55.61 | 45.93 | 44.44 | 50.43 | 55.04 | 50.14 | 50.05 |
| Latxa 13B | 53.56 | 65.85 | 53.23 | 48.66 | 53.61 | 62.52 | 57.14 | 54.21 | 56.10 |
| Latxa 70B | 71.78 | 67.57 | 63.52 | 48.95 | 49.51 | 79.90 | 58.82 | 55.50 | 61.94 |
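The "Average" column is the unweighted mean of the eight per-task scores. For example, recomputing it for the Latxa 70B row of the table above:

```python
# Recompute the "Average" column as the unweighted mean of the task scores.
latxa_70b = {
    "Belebele": 71.78, "X-StoryCloze": 67.57, "BEC": 63.52, "Vaxx": 48.95,
    "BHTC": 49.51, "coref": 79.90, "QNLI": 58.82, "WiC": 55.50,
}

average = sum(latxa_70b.values()) / len(latxa_70b)
print(f"{average:.2f}")  # 61.94
```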