---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: gemma-7b-bnb-4bit
---

# Uploaded model

- **Developed by:** Mollel
- **License:** apache-2.0
- **Finetuned from model:** gemma-7b-bnb-4bit

```python3
import torch
from llama_index.llms.huggingface import HuggingFaceLLM

llm = HuggingFaceLLM(
    context_window=4096,
    max_new_tokens=256,
    # do_sample=False gives greedy decoding, so the temperature value is effectively ignored
    generate_kwargs={"temperature": 0.7, "do_sample": False},
    tokenizer_name="Mollel/Swahili_Gemma",
    model_name="Mollel/Swahili_Gemma",
    device_map="auto",
    stopping_ids=[50278, 50279, 50277, 1, 0],
    tokenizer_kwargs={"max_length": 4096},
    model_kwargs={"torch_dtype": torch.float16},
)
```

## Examples

1. Load the LoRA and use it for evaluation: [Kaggle](https://www.kaggle.com/code/mikemollel/evaluator-swahili-llms) | [GitRepo](https://github.com/msamwelmollel/swahili_model_evals)
2. Supervised fine-tuning dataset creation using Swahili Gemma: [Kaggle](https://www.kaggle.com/code/mikemollel/swahili-gemma-dataset-creation) | [GitRepo](https://github.com/msamwelmollel/swahili_Gemma)
3. RAG using Swahili Gemma: [Kaggle](https://www.kaggle.com/code/mikemollel/rag-gemma-swahili) | [GitRepo](https://github.com/msamwelmollel/swahili_Gemma)
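
A minimal usage sketch for the loader shown earlier. It assumes the `llm` object from the `HuggingFaceLLM` snippet above has been created successfully; the Swahili prompt is purely illustrative.

```python3
# Assumes `llm` is the HuggingFaceLLM instance created in the snippet above.
# The prompt text is an illustrative example, not from the model card.
response = llm.complete("Eleza kwa ufupi historia ya Tanzania.")
print(response.text)
```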