Update README.md
README.md CHANGED
@@ -14,8 +14,7 @@ To train the model, we sample as uniformly as possible across languages while li
We combine [WURA data](https://huggingface.co/datasets/castorini/wura) with high-quality English documents from [FineWeb-Edu](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1) and [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math), which results in the improved Lugha-Llama-Edu and Lugha-Llama-Maths models, respectively.
Our models consistently achieve the best performance amongst similarly-sized baselines.
-In a separate ablation experiment, we translate English education documents to Swahili to study whether the performance gains from FineWeb-Edu data are due to its content or its English source language.
-* Translated Swahili data (200M tokens): [FineWeb_Edu-swahili-translated](https://huggingface.co/datasets/princeton-nlp/fineweb_edu-swahili-translated)
+In a separate ablation experiment, we translate English education documents to Swahili to study whether the performance gains from FineWeb-Edu data are due to its content or its English source language: [FineWeb_Edu-swahili-translated](https://huggingface.co/datasets/princeton-nlp/fineweb_edu-swahili-translated).

We present these findings in our paper [Adapting Large Language Models for African Languages: The Lugha-Llama Model]()
@@ -23,7 +22,7 @@
Authors: [Happy Buzaaba](https://buzaabah.github.io/)\*, [Alexander Wettig](https://www.cs.princeton.edu/~awettig/)\*, [David Ifeoluwa Adelani](https://dadelani.github.io/), [Christiane Fellbaum](https://www.cs.princeton.edu/people/profile/fellbaum) (* equal contribution)
-Contact
+Contact *{happy.buzaaba@, awettig@cs}princeton.edu*
## Lugha-Llama models
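The diff above references the training datasets on the Hugging Face Hub. As a minimal sketch of pulling the translated Swahili corpus from the ablation with the `datasets` library: the dataset ID comes from the link above, but the `train` split name and the `text` column are assumptions not confirmed by this excerpt.

```python
# Minimal sketch: stream the translated Swahili FineWeb-Edu corpus.
# Assumes `pip install datasets`; the split name "train" and the "text"
# column are assumptions, not confirmed by the README excerpt above.
from datasets import load_dataset

swahili_edu = load_dataset(
    "princeton-nlp/fineweb_edu-swahili-translated",  # dataset ID from the link above
    split="train",
    streaming=True,  # avoid downloading the full ~200M-token corpus up front
)

# Inspect the first document.
first_doc = next(iter(swahili_edu))
print(first_doc["text"][:500])
```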
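And a hedged sketch of running one of the Lugha-Llama models named above with `transformers`; the Hub repository ID below is a placeholder, since the exact model IDs are not listed in this excerpt.

```python
# Minimal sketch: load a Lugha-Llama checkpoint and generate from a Swahili prompt.
# The repository ID is a placeholder; substitute the actual Hub ID of the
# Lugha-Llama, Lugha-Llama-Edu, or Lugha-Llama-Maths model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "princeton-nlp/Lugha-Llama-8B-wura"  # placeholder Hub ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Habari ya leo ni", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```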