---
license: apache-2.0
language:
- el
- en
library_name: transformers
pipeline_tag: text-generation
---


# 🚨 NEWER VERSION AVAILABLE 
## **This model has been superseded by a newer version (v1.5) [here](https://huggingface.co/ilsp/Meltemi-7B-v1.5)**


# Meltemi: A large foundation Language Model for the Greek language

We introduce Meltemi, the first Greek Large Language Model (LLM) trained by the [Institute for Language and Speech Processing](https://www.athenarc.gr/en/ilsp) at [Athena Research & Innovation Center](https://www.athenarc.gr/en).
Meltemi is built on top of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), extending its capabilities for Greek through continual pretraining on a large corpus of high-quality and locally relevant Greek texts. We present Meltemi-7B-v1, as well as an instruction fine-tuned version [Meltemi-7B-Instruct-v1](https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1).


![image/png](https://miro.medium.com/v2/resize:fit:720/format:webp/1*IaE7RJk6JffW8og-MOnYCA.png)


# Model Information

- Vocabulary extension of the Mistral-7B tokenizer with Greek tokens (a generic sketch of this step follows the corpus table below)
- 8192 context length
- We extend the pretraining of Mistral-7B to add proficiency for the Greek language, utilizing a large corpus of approximately **40 billion tokens**.
  * This corpus includes 28.5 billion monolingual Greek tokens constructed from publicly available resources. Additionally, to mitigate catastrophic forgetting and to ensure that the model retains bilingual capabilities, we use additional sub-corpora with monolingual English texts (10.5 billion tokens) and Greek-English parallel data (600 million tokens).
  * The corpus has been processed, filtered, and deduplicated to ensure data quality (a detailed description of our data processing pipeline will be published in our upcoming paper) and is outlined below:


| Sub-corpus   | # Tokens         | Percentage |
|----------|------------------|------------|
| Greek    | 28,555,902,360   | 72.0%      |
| English  | 10,478,414,033   | 26.4%      |
| Parallel | 633,816,023      | 1.6%       |
| **Total**    | **39,668,132,416**   |  **100%**       |
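
The vocabulary extension listed above can be illustrated with a minimal, generic sketch using the 🤗 `transformers` API; this is **not** the actual Meltemi training pipeline, and the added tokens are placeholders:

```python
# Generic sketch of tokenizer vocabulary extension (not the actual Meltemi pipeline).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

# Placeholder Greek subword tokens; in practice these would come from a
# tokenizer trained on the Greek corpus.
new_tokens = ["γλώσσα", "είναι", "κείμενο"]
num_added = tokenizer.add_tokens(new_tokens)

# Resize the embedding matrix so the new tokens receive trainable embeddings,
# which are then learned during continual pretraining.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; new vocabulary size: {len(tokenizer)}")
```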


# Usage

Please make sure that the BOS token is always included in the tokenized prompts. This might not be the default setting in all evaluation or fine-tuning frameworks.
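
A minimal usage sketch with 🤗 `transformers` is shown below, including a check that the BOS token has been prepended; the prompt is a placeholder, and the dtype/device settings are assumptions to adapt to your hardware:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ilsp/Meltemi-7B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Placeholder Greek prompt ("The Aegean Sea is").
prompt = "Το Αιγαίο πέλαγος είναι"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Verify that the BOS token is the first token of the tokenized prompt.
assert inputs["input_ids"][0, 0].item() == tokenizer.bos_token_id, "BOS token missing"

outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```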


# Evaluation

The evaluation suite we created includes 6 test sets. The suite is integrated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness).

Our evaluation suite includes: 
* Four machine-translated versions ([ARC Greek](https://huggingface.co/datasets/ilsp/arc_greek), [Truthful QA Greek](https://huggingface.co/datasets/ilsp/truthful_qa_greek), [HellaSwag Greek](https://huggingface.co/datasets/ilsp/hellaswag_greek), [MMLU Greek](https://huggingface.co/datasets/ilsp/mmlu_greek)) of established English benchmarks for language understanding and reasoning ([ARC Challenge](https://arxiv.org/abs/1803.05457), [Truthful QA](https://arxiv.org/abs/2109.07958), [Hellaswag](https://arxiv.org/abs/1905.07830), [MMLU](https://arxiv.org/abs/2009.03300)). 
* An existing benchmark for question answering in Greek ([Belebele](https://arxiv.org/abs/2308.16884))
* A novel benchmark created by the ILSP team for medical question answering based on the medical exams of [DOATAP](https://www.doatap.gr) ([Medical MCQA](https://huggingface.co/datasets/ilsp/medical_mcqa_greek)).

Our evaluation of Meltemi-7B is performed in a few-shot setting, consistent with the settings of the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). Our training improves performance across all Greek test sets, with an average gain of **+14.9%**. The results for the Greek test sets are shown in the following table:

|                | Medical MCQA EL (15-shot) | Belebele EL (5-shot) | HellaSwag EL (10-shot) | ARC-Challenge EL (25-shot) | TruthfulQA MC2 EL (0-shot) | MMLU EL (5-shot) | Average |
|----------------|----------------|-------------|--------------|------------------|-------------------|---------|---------|
| Mistral 7B     | 29.8%          | 45.0%       | 36.5%        | 27.1%            | 45.8%             | 35%     | 36.5%   |
| Meltemi 7B     | 41.0%          | 63.6%       | 61.6%        | 43.2%            | 52.1%             | 47%     | 51.4%   |
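
As a hedged sketch, a single Greek task could be run with a recent version of lm-eval-harness roughly as follows; the task name `hellaswag_greek` is an assumption about how the ILSP tasks are registered, so substitute the actual task names from the suite:

```python
# Hedged sketch: evaluating on one Greek benchmark with lm-evaluation-harness (v0.4+).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ilsp/Meltemi-7B-v1,dtype=bfloat16",
    tasks=["hellaswag_greek"],   # assumed task name; check the released task configs
    num_fewshot=10,              # matches the 10-shot HellaSwag setting above
    batch_size=8,
)
print(results["results"])
```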


# Ethical Considerations

This model has not been aligned with human preferences and may therefore generate misleading, harmful, or toxic content.


# Acknowledgements

The ILSP team utilized Amazon's cloud computing services, which were made available via GRNET under the [OCRE Cloud framework](https://www.ocre-project.eu/), providing Amazon Web Services for the Greek Academic and Research Community.


# Citation
```
@misc{voukoutis2024meltemiopenlargelanguage,
      title={Meltemi: The first open Large Language Model for Greek}, 
      author={Leon Voukoutis and Dimitris Roussis and Georgios Paraskevopoulos and Sokratis Sofianopoulos and Prokopis Prokopidis and Vassilis Papavasileiou and Athanasios Katsamanis and Stelios Piperidis and Vassilis Katsouros},
      year={2024},
      eprint={2407.20743},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.20743}, 
}
```