---
library_name: transformers
license: mit
---

# Helsinki-NLP-opus-mt-ug

This model translates from multiple languages spoken in Uganda (Acoli, Luganda, Lumasaaba, Runyankore, and Kiswahili) into English. It is fine-tuned from the Helsinki-NLP/opus-mt-mul-en model and was trained and evaluated on a multilingual parallel corpus covering these languages.

## Model Details

### Model Description

This model translates text from multiple Ugandan languages to English. It has been fine-tuned on a dataset containing translations in Acoli, Luganda, Lumasaaba, Runyankore, and Kiswahili.

- **Developed by:** Mubarak B.
- **Model type:** Sequence-to-sequence (Seq2Seq) translation model
- **Language(s) (NLP):** Acoli (ach), Luganda (lug), Lumasaaba (lsa), Runyankore (nyn), Kiswahili (swa), English (en)
- **License:** Apache 2.0
- **Finetuned from model:** Helsinki-NLP/opus-mt-mul-en

### Model Sources

- **Repository:** [Helsinki-NLP-opus-mt-ug](https://huggingface.co/MubarakB/Helsinki-NLP-opus-mt-ug)

## Uses

### Direct Use

The model can be used directly for translating text from the mentioned Ugandan languages to English without further fine-tuning.
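
For quick experiments, the checkpoint should also work with the high-level `transformers` pipeline API. A minimal sketch, assuming the repository name given above:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the translation pipeline.
translator = pipeline("translation", model="MubarakB/Helsinki-NLP-opus-mt-ug")

# The underlying mul->en Marian model always produces English output.
result = translator("Abantu bangi abafudde olw'endwadde z'ekikaba.", max_length=128)
print(result[0]["translation_text"])
```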

### Downstream Use

The model can be integrated into applications that require multilingual translation support from Ugandan languages to English.

### Out-of-Scope Use

The model is not suitable for languages or domains outside those it was trained on, and it may not perform well on highly domain-specific language.

## Bias, Risks, and Limitations

Users should be aware that the model may inherit biases present in the training data, and it may not perform equally well across all dialects or contexts. It is recommended to validate the model's outputs in the intended use case to ensure suitability.

### Recommendations

Users should consider additional fine-tuning or domain adaptation when using the model in a highly specialized context. Monitoring and human-in-the-loop verification are recommended for critical applications.

## How to Get Started with the Model

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "MubarakB/Helsinki-NLP-opus-mt-ug"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def translate(text):
    # The underlying multilingual-to-English Marian model always translates
    # into English, so no source- or target-language flags are needed.
    inputs = tokenizer(text, return_tensors="pt", padding=True)
    translated_tokens = model.generate(**inputs, max_length=128)
    return tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]

# Example translation from Luganda
luganda_sentence = "Abantu bangi abafudde olw'endwadde z'ekikaba."
english_translation = translate(luganda_sentence)
print("Luganda:", luganda_sentence)
print("English:", english_translation)
```

## Training Details

### Training Data

The training data consists of a multilingual parallel corpus of Acoli, Luganda, Lumasaaba, Runyankore, and Kiswahili sentences paired with their English translations.
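
The corpus itself is not published with this card. Purely as an illustration, a parallel corpus in this shape could be loaded for fine-tuning roughly as follows; the file names and column names here are hypothetical:

```python
from datasets import load_dataset

# Hypothetical files: CSVs with one source sentence, its language code,
# and the English reference per row.
dataset = load_dataset(
    "csv",
    data_files={
        "train": "ug_parallel_train.csv",
        "validation": "ug_parallel_valid.csv",
    },
)
print(dataset["train"][0])
# e.g. {'source': 'Abantu bangi ...', 'lang': 'lug', 'target': 'Many people ...'}
```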

### Training Procedure

**Training regime:** FP16 mixed precision

**Training hyperparameters** (a sketch of the corresponding `Seq2SeqTrainingArguments` follows the list):

- Batch size: 20
- Gradient accumulation steps: 150
- Learning rate: 2e-5
- Epochs: 30
- Label smoothing factor: 0.1
- Evaluation steps interval: 10
- Weight decay: 0.01
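
The exact training script is not included in this card. A minimal sketch of how the reported hyperparameters might map onto the `transformers` `Seq2SeqTrainingArguments` API; the output directory and save/logging settings are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-ug-finetuned",   # hypothetical
    per_device_train_batch_size=20,
    gradient_accumulation_steps=150,
    learning_rate=2e-5,
    num_train_epochs=30,
    label_smoothing_factor=0.1,
    evaluation_strategy="steps",
    eval_steps=10,
    weight_decay=0.01,
    fp16=True,                           # FP16 mixed precision
    predict_with_generate=True,          # needed for BLEU during evaluation
)
```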

## Evaluation

### Testing Data

The testing data includes samples from all the languages mentioned in the training data section, with a focus on evaluating BLEU scores for each language.

### Factors

The evaluation disaggregates performance by language (Acoli, Luganda, Lumasaaba, Runyankore, Kiswahili).

### Metrics

The primary evaluation metric is BLEU, which measures the quality of the translated text against reference translations.
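
The evaluation code is not part of this card. Corpus BLEU of the kind reported below can be computed per language subset with, for instance, `sacrebleu`; the inputs here are illustrative placeholders:

```python
import sacrebleu

# Illustrative inputs: model outputs and references for one language subset.
hypotheses = ["Many people were displaced by the floods."]
references = [["Many people were displaced by the floods."]]

# corpus_bleu takes a list of hypotheses and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.2f}")
```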

### Results

#### Summary

**Validation loss:** 2.124478

**BLEU scores by language:**

| Language | BLEU |
|----------|------|
| Acoli (ach) | 21.37250 |
| Luganda (lug) | 58.25520 |
| Lumasaaba (lsa) | 25.23430 |
| Runyankore (nyn) | 49.76010 |
| Kiswahili (swa) | 60.66220 |
| **Mean** | **43.05690** |

## Environmental Impact

- **Hardware Type:** V100 GPUs
- **Hours used:** 30 hours
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications

**Model Architecture and Objective**

The model uses a Transformer-based Seq2Seq architecture trained to translate text from multiple source languages into English.

**Compute Infrastructure**

- **Hardware:** NVIDIA V100 GPUs
- **Software:** PyTorch, Transformers library