metrics:
- bleu
library_name: peft
pipeline_tag: question-answering
---

# Model Card for UlizaLlama

<!-- Provide a quick summary of what the model is/does. -->

## Model Details
UlizaLlama7b-1 is a language model that builds upon the foundation of Jacaranda/kiswallama-pretrained7B. Jacaranda/kiswallama-pretrained is a large language model continually pretrained on 321,530,045 Swahili tokens, with a customized tokenizer whose Swahili vocabulary comprises 20,000 tokens, extending the capabilities of Meta/Llama2. It offers significant improvements in both encoding and decoding of Swahili text, surpassing the Swahili performance of Meta/Llama2. Moreover, Jacaranda/kiswallama-pretrained excels at accurate next-word completion in Swahili, a capability that Meta/Llama2 lacks.
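As a rough illustration of the improved Swahili encoding, the snippet below tokenizes a short Swahili sentence with the customized tokenizer; the repository id is an assumed path, and the comment reflects the claim in the paragraph above rather than a measured result:

```python
# Minimal sketch: inspect Swahili tokenization with the customized tokenizer.
# The repo id below is an assumed path, not a confirmed release.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Jacaranda/kiswallama-pretrained")

text = "Habari ya asubuhi, unaendelea vipi na masomo yako?"
tokens = tokenizer.tokenize(text)
# The extended 20,000-token Swahili vocabulary should yield fewer, more
# natural subwords than the stock Llama2 tokenizer.
print(len(tokens), tokens)
```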
### Model Description
- **Origin:** Adaptation of the Jacaranda/kiswallama-pretrained model.
- **Data:** An instructional dataset in Swahili and English consisting of prompt-response pairs.
- **Training:** Alignment to standard methodologies, incorporation of task-centric heads, neural network weight optimization via backpropagation, and task-specific adjustments.
- **Fine-tuning:** Followed the LoRA approach, training two low-rank matrices whose product approximates an update to the main weight matrix of Jacaranda/kiswallama-pretrained. This Low-Rank Adapter (LoRA) was vital for instruction-focused fine-tuning. Post-training, the adapter was extracted, and Hugging Face's `merge_and_unload()` function merged the adapter weights with the base model, enabling standalone inference with the merged model (see the sketch after this list).
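A minimal sketch of that merge step, using the PEFT library; the base-model and adapter repository ids below are illustrative assumptions, not confirmed release paths:

```python
# Sketch of merging a LoRA adapter into its base model with PEFT.
# Repo ids are illustrative assumptions, not confirmed release paths.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Jacaranda/kiswallama-pretrained", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "Jacaranda/UlizaLlama")

# Fold the low-rank adapter weights into the base weights so the merged
# model runs standalone, without the PEFT wrapper.
merged = model.merge_and_unload()
merged.save_pretrained("ulizallama-merged")
```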

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [Jacaranda Health]
- **Funded by:** [Google]
- **Model type:** [LlamaForCausalLM]
- **Language(s) (NLP):** [English and Swahili]
- **License:** [More Information Needed]
- **Model Developers:** [Stanslaus Mwongela]
- **Finetuned from model:** [Jacaranda/kiswallama-pretrained, which builds upon Meta/Llama2]
## Uses
UlizaLlama7b-1 is optimized for downstream tasks, notably those demanding instructional datasets in Swahili, English, or both. Organizations can further fine-tune it for their specific domains. Potential areas include (a minimal inference sketch follows this list):
1. Question-answering within specific domains.
2. Assistant-driven chat capabilities: healthcare, agriculture, legal, education, tourism and hospitality, public services, financial sectors, communication, customer assistance, commerce, etc.
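As referenced above, a minimal inference sketch for domain question-answering with the merged model; the repository id and the Swahili prompt framing are assumptions, since the exact instruction template is not documented in this card:

```python
# Minimal question-answering sketch. The repo id and prompt template are
# assumptions; adjust them to the actual release and training format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jacaranda/UlizaLlama"  # assumed merged-model repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# "Swali:"/"Jibu:" ("Question:"/"Answer:") framing is an assumed template.
prompt = "Swali: Ni dalili zipi za malaria?\nJibu:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```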
Meanwhile, Jacaranda/kiswallama-pretrained offers versatility in:

- Text Summarization
- Autoregressive Text Completion
- Content Generation
- Text Rewording
- Grammar Refinement and Editing

**Further Research:** The current UlizaLlama is available as a 7-billion-parameter model; further research could explore making larger variants of UlizaLlama available.

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Out-of-Scope Use
To ensure the ethical and responsible use of UlizaLlama, we have outlined a set of guidelines. These guidelines categorize activities and practices into three main areas: prohibited actions, high-risk activities, and deceptive practices. By understanding and adhering to these directives, users can contribute to a safer and more trustworthy environment.

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations
UlizaLlama7b-1 is a cutting-edge technology brimming with possibilities, yet it is not without inherent risks. The extensive testing conducted thus far has been predominantly in Swahili and English, leaving an expansive terrain of scenarios unexplored. Consequently, like its LLM counterparts, UlizaLlama7b-1's outputs remain hard to predict, and it may occasionally generate responses that are inaccurate, biased, or otherwise objectionable when prompted by users.
With this in mind, before deploying UlizaLlama7b-1 in any application, developers should carry out safety testing and fine-tuning tailored to the specific demands of their use case.

<!-- This section is meant to convey both technical and sociotechnical limitations. -->