---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- text-generation-inference
datasets:
- nicholasKluge/Pt-Corpus-Instruct
metrics:
- perplexity
pipeline_tag: text-generation
widget:
- text: A PUCRS é uma universidade
  example_title: Exemplo
- text: A muitos anos atrás, em uma galáxia muito distante, vivia uma raça de
  example_title: Exemplo
- text: Em meio a um escândalo, a frente parlamentar pediu ao Senador Silva para
  example_title: Exemplo
inference:
  parameters:
    repetition_penalty: 1.2
    temperature: 0.2
    top_k: 20
    top_p: 0.2
    max_new_tokens: 150
co2_eq_emissions:
  emissions: 110
  source: CodeCarbon
  training_type: pre-training
  geographical_location: Germany
  hardware_used: NVIDIA A40
---
# Mula-8x160-v0.1

<img src="./logo-no-bg.png" alt="Mula" height="200">

## Model Summary

Mula is a series of Sparse Mixture of Experts (SMoE) language models, all trained natively in Brazilian Portuguese, designed to help democratize LLMs for low-resource languages.

Mula-8x160-v0.1 is one of our first experiments on pre-training a SMoE, using the [Pt-Corpus-Instruct](https://huggingface.co/datasets/nicholasKluge/Pt-Corpus-Instruct) dataset. It has 8 experts per layer and activates 4 for each token.

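To make the "8 experts per layer, 4 active per token" design concrete, the sketch below shows how top-k expert routing works in a sparse MoE layer. It is an illustrative simplification rather than the actual Mixtral/Mula implementation, and the hidden and feed-forward sizes are placeholder values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Illustrative top-k expert routing (simplified; not the actual Mixtral/Mula code)."""

    def __init__(self, hidden_size=768, ffn_size=3072, num_experts=8, top_k=4):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(hidden_size, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden_size, ffn_size), nn.SiLU(), nn.Linear(ffn_size, hidden_size))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (num_tokens, hidden_size)
        logits = self.router(x)                               # score all 8 experts for every token
        weights, chosen = torch.topk(logits, self.top_k, -1)  # keep only the 4 best experts per token
        weights = F.softmax(weights, dim=-1)                  # normalize weights over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e                   # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Toy usage: route 10 token embeddings through the layer
print(SparseMoELayer()(torch.randn(10, 768)).shape)  # torch.Size([10, 768])
```
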
Future versions of Mula will be trained on a substantially larger Brazilian Portuguese dataset.

## Details

- **Architecture:** a Sparse Mixture of Experts (Mixtral implementation) pre-trained via causal language modeling (see the configuration sketch after this list)
- **Size:** 747,596,544 parameters (only 407,857,152 parameters are activated at runtime)
- **Context length:** 2048 tokens
- **Dataset:** [Pt-Corpus Instruct](https://huggingface.co/datasets/nicholasKluge/Pt-Corpus-Instruct) (6.2B tokens x 4)
- **Language:** Portuguese
- **Training time:** ~136 hours
- **Emissions:** 110 KgCO2eq (Germany)
- **Total energy consumption:** 300 kWh

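Because the model follows the Mixtral implementation, most of the numbers above can be checked directly from the published checkpoint. The snippet below is a minimal sketch that assumes the standard `MixtralConfig` field names (`num_local_experts`, `num_experts_per_tok`, `max_position_embeddings`).

```python
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("MulaBR/Mula-8x160-v0.1")

# Expert layout and context window reported in the list above
print(config.num_local_experts)        # experts per layer
print(config.num_experts_per_tok)      # experts activated for each token
print(config.max_position_embeddings)  # context length

# Total parameter count (all experts, including those not activated at runtime)
model = AutoModelForCausalLM.from_pretrained("MulaBR/Mula-8x160-v0.1")
print(sum(p.numel() for p in model.parameters()))
```
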
## Intended Uses

The primary intended use of Mula-8x160-v0.1 is to research the challenges related to developing language models for low-resource languages. Checkpoints saved during training are intended to provide a controlled setting for performing scientific experiments. You may also further fine-tune and adapt Mula-8x160-v0.1 for deployment, as long as your use complies with the Apache 2.0 license. If you decide to use the pre-trained Mula-8x160-v0.1 as a basis for your fine-tuned model, please conduct your own risk and bias assessment.

## Out-of-scope Use

Mula-8x160-v0.1 is not intended for deployment. It is not a product and should not be used for human-facing interactions.

Mula-8x160-v0.1 is a Brazilian Portuguese-only model and is not suitable for translation or for generating text in other languages.

Mula-8x160-v0.1 has not been fine-tuned for the downstream contexts in which language models are commonly deployed.

## Basic usage

Using the `pipeline`:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="MulaBR/Mula-8x160-v0.1")

# Sampling (do_sample=True) is needed to return more than one sequence when the
# checkpoint defaults to greedy decoding
completions = generator("Astronomia é a ciência", do_sample=True, num_return_sequences=2, max_new_tokens=100)

for comp in completions:
    print(f"🤖 {comp['generated_text']}")
```

Using the `AutoTokenizer` and `AutoModelForCausalLM`:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the model and the tokenizer
tokenizer = AutoTokenizer.from_pretrained("MulaBR/Mula-8x160-v0.1", revision='main')
model = AutoModelForCausalLM.from_pretrained("MulaBR/Mula-8x160-v0.1", revision='main')

# Move the model to your device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model.eval()
model.to(device)

# Tokenize the inputs and move them to the device
inputs = tokenizer("Astronomia é a ciência", return_tensors="pt").to(device)

# Generate some text (sampling is needed to return more than one sequence)
completions = model.generate(**inputs, do_sample=True, num_return_sequences=2, max_new_tokens=100)

# Print the generated text
for completion in completions:
    print(f'🤖 {tokenizer.decode(completion)}')
```

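The sampling parameters declared in the `inference` section of the model card header (repetition penalty 1.2, temperature 0.2, top-k 20, top-p 0.2, up to 150 new tokens) can be passed directly to `generate`. The snippet below is a minimal sketch that reuses the `model`, `tokenizer`, and `device` objects from the example above together with one of the widget prompts.

```python
# Reuses model, tokenizer, and device from the previous example
inputs = tokenizer("A PUCRS é uma universidade", return_tensors="pt").to(device)

outputs = model.generate(
    **inputs,
    do_sample=True,           # sampling must be enabled for temperature/top-k/top-p to take effect
    repetition_penalty=1.2,
    temperature=0.2,
    top_k=20,
    top_p=0.2,
    max_new_tokens=150,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
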
## Limitations

Like almost all other language models trained on large text datasets scraped from the web, Mula-8x160-v0.1 exhibits behavior that does not make it an out-of-the-box solution to many real-world applications, especially those requiring factual, reliable, nontoxic text generation. Our models are all subject to the following:

- **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.

- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., content that is harmful, offensive, or detrimental to individuals, groups, or communities.

- **Unreliable Code:** The model may produce incorrect code snippets and statements. These code generations should not be treated as suggestions or accurate solutions.

- **Language Limitations:** The model is primarily designed to understand standard Brazilian Portuguese. Other languages might challenge its comprehension, leading to potential misinterpretations or errors in response.

- **Repetition and Verbosity:** The model may get stuck in repetition loops (especially if the repetition penalty during generation is set to a low value) or produce verbose responses unrelated to the prompt it was given.

Hence, even though our models are released with a permissive license, we urge users to perform their own risk analysis before using them in real-world applications. In applications where the model will interact with an audience, its outputs should be moderated by humans, and users should always be made aware that they are interacting with a language model.

## Benchmarks

Evaluations on benchmarks were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). We used the translated versions of these tasks provided by [Laiviet's fork of the harness](https://github.com/laiviet/lm-evaluation-harness).

| | **ARC** | **HellaSwag** | **MMLU** | **TruthfulQA** |
|----------------------|-----------|---------------|-----------|----------------|
| **Mula-4x160-v0.1** | 27.09 | 31.41 | 28.15 | 39.81 |
| **Mula-8x160-v0.1** | 26.15 | 33.06 | 28.14 | 41.69 |

Evaluations on Brazilian Portuguese benchmarks were performed using a [Portuguese implementation of the EleutherAI LM Evaluation Harness](https://github.com/eduagarcia/lm-evaluation-harness-pt) (created by [Eduardo Garcia](https://github.com/eduagarcia/lm-evaluation-harness-pt)).

| | **ASSIN2 RTE** | **ASSIN2 STS** | **BLUEX** | **ENEM** | **FAQUAD NLI** | **HateBR** | **PT Hate Speech** | **OAB Exams** | **TweetSentBR** |
|-----------------------|----------------|----------------|-----------|----------|----------------|------------|--------------------|---------------|-----------------|
| **Mula-4x160-v0.1** | 33.57 | 11.35 | 25.17 | 21.34 | 43.97 | 41.50 | 22.99 | 25.06 | 11.24 |
| **Mula-8x160-v0.1** | 33.51 | 0 | 20.17 | 19.94 | 43.97 | 33.33 | 42.69 | 24.37 | 24.60 |

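
The scores in the two tables above were produced with the harness forks linked in this section. As a rough sketch of what such a run looks like, the upstream harness also exposes a Python entry point, `lm_eval.simple_evaluate`; the task name below is only an upstream placeholder, since the translated Portuguese tasks are defined in the respective forks.

```python
# Hedged sketch: the tables above were produced with the linked forks, whose task
# names differ from the upstream placeholder used here.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=MulaBR/Mula-8x160-v0.1",
    tasks=["hellaswag"],  # placeholder task; see the linked forks for the translated tasks
    batch_size=8,
)

print(results["results"])
```
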
## Cite as 🤗

```latex
@misc{mula2024BR,
  title = {Mula: a Sparse Mixture of Experts Language Model trained in Brazilian Portuguese},
  author = {Corr{\^e}a, Nicholas Kluge and Sen, Aniket and Falk, Sophia and Fatimah, Shiza},
  howpublished = {\url{https://huggingface.co/MulaBR}},
  year = {2024}
}
```

## License

Mula-8x160-v0.1 is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.