Mollel committed
Commit 875147f
1 Parent(s): 4166963

Update README.md

Files changed (1): README.md +39 -0
README.md CHANGED
@@ -18,3 +18,42 @@ base_model: llama-2
- **License:** apache-2.0
- **Continued pre-trained and fine-tuned from model:** Llama-2
 
**Notes:**
* Swahili_LLaMA is intended for research purposes. Model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases, and users should be cautious when employing this model in their applications. A minimal usage sketch is given after these notes.
* Direct adoption for production tasks is out of the scope of this research project. As a result, Swahili_LLaMA has not been tested to ensure that it performs adequately in any production-level application. Please refer to the limitations section of this document for more details.
* Any use of this model is at your own risk.
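Since the notes above frame Swahili_LLaMA as a research artifact, the following is only a minimal, hedged sketch of how one might load the checkpoint and sample Swahili text with the Hugging Face `transformers` library. The repository id, prompt, and sampling settings are illustrative assumptions, not values taken from this model card.

```python
# Minimal usage sketch (assumptions only): the repository id below is a placeholder,
# not the actual checkpoint path for this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/swahili_llama"  # hypothetical id; replace with the real checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# This is a base (non-instruction-tuned) model, so give it text to continue
# rather than a chat-style instruction.
prompt = "Habari za leo nchini Tanzania ni"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Capping max_new_tokens helps limit the extra/verbose text described under
# "Limitations" below.
outputs = model.generate(
    **inputs, max_new_tokens=64, do_sample=True, temperature=0.7, top_p=0.9
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Per the notes above, treat any generated text as a starting point for inspection rather than a final answer.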

## Limitations of Swahili LLaMA

* Inaccurate facts: Like its base model, it can generate inaccurate facts.

* Limited scope for code: The model performs poorly on code.

* Unreliable responses to instructions: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.

* Language limitations: The model is primarily designed to understand standard Swahili, and the current checkpoint can also produce inaccurate responses. Informal Swahili, slang, or other languages may challenge its comprehension, leading to potential misinterpretations or errors in its responses.

* Potential societal biases: Because the model was trained on a limited amount of text, it may exhibit societal biases.

* Toxicity: The model can still produce toxic content; however, most of its Swahili training data comes from newspapers, which makes toxic output less likely.

* Verbosity: As a base model, Swahili LLaMA often produces irrelevant or extra text after its first answer to a user prompt within a single turn. This is because its training data consists primarily of news articles and blog posts, which can result in rambling responses.

## Training

### Model

* Architecture: LLaMA-2 (a Transformer-based model trained with a next-word prediction objective; a minimal continued pre-training sketch is given after this list)

* Context length: 2048 tokens (LLaMA-2)

* Dataset size: 600M tokens from C100 Swahili plus additional crawls of Swahili newspapers and blogs

* Training tokens: 1.4T tokens

* GPUs: 2 × A6000 (48 GB)

* Training time: expected 13 days
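
To make the specification above concrete, here is a hedged sketch of continued pre-training with the standard next-word (causal LM) objective over 2048-token blocks, using the Hugging Face `transformers` Trainer and `datasets`. The base checkpoint id, corpus paths, and hyperparameters are assumptions for illustration; the model card does not describe the authors' actual training code, and on 2 × 48 GB GPUs additional memory-saving techniques (e.g. gradient checkpointing, ZeRO, or LoRA) would likely be needed in practice.

```python
# Continued pre-training sketch (assumptions only): base checkpoint id, data paths,
# and hyperparameters are illustrative, not taken from this model card.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
block_size = 2048                     # context length listed above

tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base_id)

# Swahili corpus (e.g. C100 Swahili plus newspaper/blog crawls) as plain-text files.
raw = load_dataset("text", data_files={"train": "swahili_corpus/*.txt"})

def tokenize(batch):
    return tokenizer(batch["text"])

def group_into_blocks(batch):
    # Concatenate documents and cut them into fixed 2048-token blocks; the collator
    # below turns each block into a next-word prediction example.
    ids = sum(batch["input_ids"], [])
    total = (len(ids) // block_size) * block_size
    chunks = [ids[i : i + block_size] for i in range(0, total, block_size)]
    return {"input_ids": chunks, "attention_mask": [[1] * block_size for _ in chunks]}

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])
lm_dataset = tokenized.map(
    group_into_blocks, batched=True, remove_columns=tokenized.column_names
)

args = TrainingArguments(
    output_dir="swahili_llama_ckpt",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=32,
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,
    logging_steps=50,
    save_steps=1000,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=lm_dataset,
    # mlm=False => causal LM: labels are the input ids, shifted inside the model.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```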