|
--- |
|
license: llama2 |
|
datasets: |
|
- HiTZ/euscrawl |
|
language: |
|
- eu |
|
- en |
|
metrics: |
|
- accuracy |
|
- f1 |
|
- perplexity |
|
pipeline_tag: text-generation |
|
--- |
|
|
|
# **Model Card for Latxa 70b** |
|
|
|
<p align="center"> |
|
<img src="https://github.com/hitz-zentroa/latxa/blob/b9aa705f60ee2cc03c9ed62fda82a685abb31b07/assets/latxa_round.png?raw=true" style="height: 350px;"> |
|
</p> |
|
|
|
<span style="color: red; font-weight: bold">IMPORTANT:</span> This model is outdated and made publicly available for reproducibility purposes only. Please use the latest version from [our HuggingFace collection](https://huggingface.co/collections/HiTZ/latxa-65a697e6838b3acc53677304).
|
|
|
Latxa is a collection of foundation models specifically tuned for Basque. Based on Meta’s LLaMA 2 model family, these models were further trained on EusCrawl, a highly curated Basque corpus ([Artetxe et al., 2022](https://aclanthology.org/2022.emnlp-main.499/)). Ranging from 7 to 70 billion parameters, these models are currently the biggest and best-performing LLMs built for Basque. This is the 70B repository; links to the other models can be found in the [Latxa Collection](https://huggingface.co/collections/HiTZ/latxa-65a697e6838b3acc53677304).
|
|
|
Read more about Latxa on our [website](https://www.hitz.eus/en/node/340) or on [LinkedIn](https://www.linkedin.com/pulse/presenting-latxa-largest-language-model-built-basque-hitz-zentroa-63qdf)!
|
|
|
|
|
# **Model Details** |
|
|
|
|
|
## **Model Description** |
|
|
|
Latxa is a family of Large Language Models (LLM) based on Meta’s [LLaMA models](https://huggingface.co/meta-llama). Current LLMs exhibit strong performance for high-resource languages such as English, but for Basque and other low-resource languages their performance is close to random. These limitations widen the gap between high- and low-resource languages when it comes to digital development. We present Latxa to overcome these limitations and to promote the development of LLM-based technology and research for the Basque language. Latxa models follow the same architecture as their original counterparts and were further trained on EusCrawl v1 ([Artetxe et al., 2022](https://aclanthology.org/2022.emnlp-main.499/)), a high-quality Basque corpus.
|
|
|
The models are released in three sizes: 7B, 13B and 70B. |
|
|
|
|
|
|
|
* **Developed by:** HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU) |
|
* **Model type:** Language model |
|
* **Language(s) (NLP):** en, eu |
|
* **License:** llama2 |
|
* **Parent Model:** meta-llama/Llama-2-70b |
|
* **Contact:** hitz@ehu.eus |
|
|
|
|
|
## **Getting started** |
|
|
|
Use the code below to get started with the model. |
|
|
|
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="HiTZ/latxa-70b-v1")

text = "Euskara adimen artifizialera iritsi da!"

pipe(text, max_new_tokens=50, num_beams=5)

>> [
    {
        'generated_text': 'Euskara adimen artifizialera iritsi da!\nEuskararen eta adimen artifizialaren arteko harremana aspaldikoa da,'
        ' baina azken urteotan aurrerapauso handiak eman dira arlo horretan'
    }
]
```
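
At 70B parameters, the model is unlikely to fit on a single GPU. As a minimal sketch (an assumption on our part, not an official recipe), the weights can be loaded in bfloat16 and sharded across available GPUs via `accelerate`'s `device_map`:

```python
# Hypothetical multi-GPU loading sketch; requires the accelerate
# package for device_map support.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HiTZ/latxa-70b-v1")
model = AutoModelForCausalLM.from_pretrained(
    "HiTZ/latxa-70b-v1",
    torch_dtype=torch.bfloat16,  # halves memory relative to float32
    device_map="auto",           # shard layers across available GPUs
)

inputs = tokenizer("Euskara adimen artifizialera iritsi da!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```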
|
|
|
|
|
# **Uses** |
|
|
|
Latxa models are intended to be used with Basque data; for any other language, performance is not guaranteed. As with the original models, Latxa inherits the [LLaMA-2 License](https://ai.meta.com/llama/license/), which allows for both commercial and research use.
|
|
|
|
|
## **Direct Use** |
|
|
|
Latxa family models are pre-trained LLMs without any task-specific or instruction fine-tuning; they can either be prompted to perform a specific task or further fine-tuned for a specific use case.
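
For instance, a classification task can be cast as a few-shot prompt. The sketch below is purely illustrative: the Basque examples and labels are invented for demonstration and are not drawn from any of the evaluation datasets.

```python
# Illustrative few-shot prompting sketch; examples and labels are
# invented placeholders, not taken from the evaluation datasets.
from transformers import pipeline

pipe = pipeline("text-generation", model="HiTZ/latxa-70b-v1")

few_shot_prompt = (
    "Esaldia: Oso ona da.\nSentimendua: positiboa\n\n"
    "Esaldia: Oso txarra da.\nSentimendua: negatiboa\n\n"
    "Esaldia: Film hau asko gustatu zait.\nSentimendua:"
)

# The model is expected to continue the pattern with a label.
print(pipe(few_shot_prompt, max_new_tokens=3)[0]["generated_text"])
```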
|
|
|
|
|
## **Out-of-Scope Use** |
|
|
|
The model was not fine-tuned to follow instructions or to work as a chat assistant; this kind of usage is therefore neither tested nor recommended.
|
|
|
|
|
# **Bias, Risks, and Limitations** |
|
|
|
To mitigate potentially disturbing or harmful content, Latxa has been trained on carefully selected and processed data, coming mainly from local media, national and regional newspapers, encyclopedias and blogs (see EusCrawl below). Still, the model is based on the LLaMA models and may carry the same biases, risks and limitations.
|
|
|
Please see LLaMA’s _Ethical Considerations and Limitations_ for further information.
|
|
|
|
|
# **Training Details** |
|
|
|
|
|
## **Training Data** |
|
|
|
The models were trained on EusCrawl v1, a high-quality corpus for Basque comprising 1.72M documents, 288M words, totalling 2.1GiB of uncompressed text. EusCrawl was built using ad-hoc scrapers to extract text from 33 Basque websites with high-quality content, resulting in cleaner text compared to general-purpose approaches. |
|
|
|
See more details in the [EusCrawl](https://huggingface.co/datasets/HiTZ/euscrawl) dataset card. |
|
|
|
Additionally, 100K documents of English data randomly selected from the [Pile](https://huggingface.co/datasets/EleutherAI/pile) dataset were also included to avoid catastrophic forgetting. |
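
For illustration only, a hedged sketch of assembling such a mixture with the `datasets` library (the actual preprocessing was done with the GPT-NeoX tooling, and the exact sampling procedure here is an assumption):

```python
# Hypothetical data-mixing sketch; the real pipeline used GPT-NeoX
# preprocessing, and Pile availability on the Hub may vary.
from datasets import load_dataset

# High-quality Basque corpus used for continued pre-training.
euscrawl = load_dataset("HiTZ/euscrawl", split="train")

# A random sample of 100K English documents to avoid catastrophic forgetting.
pile = load_dataset("EleutherAI/pile", split="train", streaming=True)
pile_sample = pile.shuffle(seed=42, buffer_size=10_000).take(100_000)
```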
|
|
|
|
|
## **Training Procedure** |
|
|
|
The models were trained using the GPT-NeoX library on the CINECA HPC cluster. All models were trained with an effective batch size of approximately 2M tokens for 1,000 to 2,000 steps.
|
|
|
|
|
| Model | Steps | Sequence length | Effective batch size | Total tokens | GPU hours |
|---|---:|---:|---:|---:|---:|
| Latxa 7B | 2000 | 4096 | 2M tokens/step | 4B | 359.2h |
| Latxa 13B | 1000 | 4096 | 2M tokens/step | 2B | 468.8h |
| Latxa 70B | 1680 | 4096 | 2M tokens/step | 3.4B | 6475.52h\* |
|
|
|
|
|
\* Time for the entire training run (2000 steps); the released weights are those of step 1680, the best checkpoint according to validation loss.
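
As a sanity check, the total token counts in the table follow directly from steps × tokens per step:

```python
# Arithmetic check for the table above: total tokens = steps * 2M tokens/step.
TOKENS_PER_STEP = 2_000_000

for model, steps in [("Latxa 7B", 2000), ("Latxa 13B", 1000), ("Latxa 70B", 1680)]:
    print(f"{model}: {steps * TOKENS_PER_STEP / 1e9:.2f}B tokens")

# Latxa 7B: 4.00B tokens
# Latxa 13B: 2.00B tokens
# Latxa 70B: 3.36B tokens (reported as 3.4B above)
```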
|
|
|
|
|
# **Evaluation** |
|
|
|
We evaluated the models in zero-shot and few-shot settings on generative, multiple-choice, and classification tasks, using the Basque partitions of each dataset.
|
|
|
|
|
## **Testing Data, Factors & Metrics** |
|
|
|
|
|
### **Testing Data** |
|
|
|
|
|
|
|
* **Belebele** ([Bandarkar et al.](https://arxiv.org/abs/2308.16884)): Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. We evaluated the model in a 5-shot fashion. |
|
* Data card: [https://huggingface.co/datasets/facebook/belebele](https://huggingface.co/datasets/facebook/belebele) |
|
* **X-StoryCloze** ([Lin et al.](https://arxiv.org/abs/2112.10668)): X-StoryCloze consists of professional translations of the English StoryCloze dataset into 10 non-English languages. StoryCloze is a commonsense reasoning dataset in which the task is to choose the correct ending for a four-sentence story. We evaluated the model in a 0-shot fashion.
|
* Data card: [https://huggingface.co/datasets/juletxara/xstory_cloze](https://huggingface.co/datasets/juletxara/xstory_cloze) |
|
* **BasqueGLUE** ([Urbizu et al.](https://aclanthology.org/2022.lrec-1.172.pdf)): BasqueGLUE is an NLU benchmark for Basque. We evaluated the model in a 5-shot fashion on the following tasks:
|
* Data card: [https://huggingface.co/datasets/orai-nlp/basqueGLUE](https://huggingface.co/datasets/orai-nlp/basqueGLUE)
|
* Tasks: |
|
* **BEC2016eu**: Sentiment analysis on tweets about the 2016 Basque elections campaign. |
|
* **VaxxStance**: Stance detection on tweets around the anti-vaccine movement. |
|
* **BHTCv2**: Topic classification of news extracts with 12 categories.
|
* **EpecKorrefBin**: Coreference detection task similar to WSC.
|
* **QNLIeu**: Q&A NLI built from the Basque Wikipedia. |
|
* **WiCeu**: Basque Word-in-Context task. |
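
For reference, the Basque portions of the datasets listed above can be loaded directly from the Hub. A minimal sketch for Belebele follows; the `eus_Latn` configuration name (a FLORES-200 language code) and the field names are taken from the dataset card, so treat them as assumptions to verify:

```python
# Hedged loading sketch for the Basque Belebele split.
from datasets import load_dataset

belebele = load_dataset("facebook/belebele", "eus_Latn", split="test")

example = belebele[0]
print(example["flores_passage"])  # reading-comprehension passage
print(example["question"])        # multiple-choice question
```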
|
|
|
|
|
### **Metrics** |
|
|
|
|
|
|
|
* **Accuracy**: Belebele, X-StoryCloze, EpecKorrefBin, QNLIeu, and WiCeu

* **Micro F1**: BEC2016eu and BHTCv2

* **Macro F1**: VaxxStance (favor & against)
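
A short sketch of how these metrics differ, using scikit-learn with dummy labels purely for illustration:

```python
# Dummy labels only; illustrates accuracy vs. micro vs. macro F1.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 1, 0, 2]
y_pred = [0, 1, 1, 1, 0, 2]

print(accuracy_score(y_true, y_pred))             # fraction of exact matches
print(f1_score(y_true, y_pred, average="micro"))  # pools TP/FP/FN over all classes
print(f1_score(y_true, y_pred, average="macro"))  # unweighted mean of per-class F1

# For VaxxStance, only the favor/against classes enter the macro average,
# e.g. f1_score(y_true, y_pred, labels=[FAVOR, AGAINST], average="macro").
```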
|
|
|
|
|
## **Results** |
|
|
|
The models were evaluated using EleutherAI's [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) library.
|
To reproduce our results, please follow the instructions in Latxa's [GitHub repository](https://github.com/hitz-zentroa/latxa?tab=readme-ov-file#evaluation).
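
As a rough sketch of what such a run looks like (assuming a recent `lm-eval` release; the harness version, task names, and prompts pinned in the repository above are authoritative):

```python
# Hedged evaluation sketch; defer to the pinned setup in the Latxa
# repository for exact reproduction.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=HiTZ/latxa-70b-v1,dtype=bfloat16",
    tasks=["xstorycloze_eu"],  # 0-shot X-StoryCloze, Basque
    num_fewshot=0,
)
print(results["results"])
```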
|
|
|
|
|
| Model | Belebele | X-StoryCloze | BEC | Vaxx | BHTC | coref | QNLI | WiC | Average |
|---|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| Random | 25.00 | 50.00 | 33.33 | 33.33 | 8.33 | 50.00 | 50.00 | 50.00 | 37.50 |
| LLaMA 2 7B | 26.22 | 50.43 | 41.63 | 18.60 | 20.06 | 50.94 | 48.32 | 49.64 | 38.23 |
| LLaMA 2 13B | 32.00 | 50.63 | 41.09 | 18.25 | 27.35 | 49.23 | 48.74 | 49.21 | 39.56 |
| LLaMA 2 70B | 33.56 | 51.62 | 47.47 | 21.01 | 31.01 | 52.98 | 51.26 | 51.57 | 42.56 |
| BLOOM 7B | 27.00 | 57.18 | 37.94 | 20.72 | 39.10 | 48.21 | 47.48 | 47.57 | 40.65 |
| XGLM 7B | 23.88 | 57.71 | 39.94 | 21.58 | 36.73 | 50.94 | 50.42 | 49.21 | 41.30 |
| **Latxa 7B** | 35.67 | 63.13 | 55.61 | 45.93 | 44.44 | 50.43 | 55.04 | 50.14 | 50.05 |
| **Latxa 13B** | 53.56 | 65.85 | 53.23 | 48.66 | **53.61** | 62.52 | 57.14 | 54.21 | 56.10 |
| **Latxa 70B** | **71.78** | **67.57** | **63.52** | **48.95** | 49.51 | **79.90** | **58.82** | **55.50** | **61.94** |
|
|
|
|
|
|
|
# **Environmental Impact** |
|
|
|
Carbon emissions are estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
|
|
|
|
|
|
|
* **Hardware Type:** HPC cluster, 4x A100 64GB nodes
|
* **Hours used:** 359.2h + 468.8h + 6475.52h = 7303.52h |
|
* **Compute cluster:** CINECA HPC |
|
* **Compute Region:** Italy |
|
* **Carbon Emitted:** 673.75kg CO<sub>2</sub> eq |
|
|
|
|
|
# **Acknowledgements** |
|
|
|
This work has been partially supported by the Basque Government (IKER-GAITU project). The models were trained on the Leonardo supercomputer at CINECA under the EuroHPC Joint Undertaking, project EHPC-EXT-2023E01-013. |