---
language:
  - en
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama-2
  - trl
  - lora
base_model: llama-2
---

Uploaded model

  • Developed by: Mollel
  • License: apache-2.0
  • Continued pre-training and fine-tuning from model: Llama-2
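
A minimal inference sketch using the transformers library implied by the tags above. The repository id `Mollel/Swahili_LLaMA` is a placeholder for illustration; substitute the actual Hub id of this checkpoint, and adjust dtype/device settings to your hardware.

```python
# Minimal inference sketch (transformers); the repo id below is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mollel/Swahili_LLaMA"  # placeholder: replace with the actual Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # use float32 on CPU
    device_map="auto",
)

# This is a base (non-instruction-tuned) model: prompt it as a text-completion
# model rather than a chat model.
prompt = "Tanzania ni nchi"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```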

Notes:

  • Swahili_LLaMA is intended for research purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
  • Direct adoption for production tasks is out of scope for this research project. As a result, Swahili_LLaMA has not been tested to ensure adequate performance in any production-level application. Please refer to the limitations section below for more details.
  • Any use of this model is at your own risk.

Limitations of Swahili LLaMA

  • Inaccurate Facts: Like its base model, it may generate factually incorrect information.

  • Limited Scope for Code: It performs poorly on code generation tasks.

  • Unreliable Responses to Instructions: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced user instructions.

  • Language Limitations: The model is primarily designed to understand standard Swahili, and this checkpoint may also produce more inaccurate responses. Informal Swahili, slang, or other languages may challenge its comprehension, leading to misinterpretations or errors in its responses.

  • Potential Societal Biases: Because it was trained on a limited amount of text, the model may reproduce societal biases present in that data.

  • Toxicity: The model may generate toxic content; however, most of its Swahili training data comes from newspapers, which reduces this risk.

  • Verbosity: As a base model, Swahili LLaMA often produces irrelevant or extra text after its first answer to a user prompt within a single turn. This stems from its training data, which consists primarily of news articles and blog posts, and can lead to rambling responses.

Training

Model

  • Architecture: LLaMA-2 (Transformer-based model with a next-word prediction objective)

  • Context length: 2048 tokens

  • Dataset size: 600M tokens, drawn from the CC-100 Swahili corpus and additional crawls of Swahili newspapers and blogs.

  • Training tokens: 1.4T

  • GPUs: 2xA6000-48G

  • Training time: ~13 days (expected)
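
The tags above suggest the continued pre-training / fine-tuning was run with Unsloth, TRL, and LoRA. The sketch below is a hypothetical reconstruction under that assumption, not the author's actual training script: the base checkpoint name, dataset file, and hyperparameters are placeholders, and some `SFTTrainer` argument names vary across TRL versions.

```python
# Hypothetical continued-pretraining sketch with Unsloth + LoRA + TRL's SFTTrainer.
# Base checkpoint, dataset path, and hyperparameters are assumptions, not the author's settings.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 2048  # matches the context length reported above

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-2-7b",  # assumed Llama-2 base checkpoint
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder corpus: a plain-text Swahili dataset exposing a "text" column.
dataset = load_dataset("text", data_files={"train": "swahili_corpus.txt"})["train"]

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=50,
        output_dir="swahili_llama_outputs",
    ),
)
trainer.train()
```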