|
--- |
|
language: |
|
- en |
|
license: apache-2.0 |
|
tags: |
|
- text-generation-inference |
|
- transformers |
|
- unsloth |
|
- mistral |
|
- trl |
|
base_model: SeaLLMs/SeaLLM-7B-v2 |
|
datasets: |
|
- 922-Narra/synthetic_tagalog_test_02102024 |
|
--- |
|
# AKA Tagamistral-7b-v1
|
* Yet another archived test/toy model, fine-tuned on a synthetic Tagalog dataset partially produced by [Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) |
|
* Base: [SeaLLM](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) |
|
* [GGUF](https://huggingface.co/922-Narra/tagalog-seallm-7b-v1) |
|
|
|
### USAGE |
|
This is meant mainly as a chat model.
|
|
|
Best results are obtained with "Human" and "Assistant" turn labels and a prompt written in Tagalog. Example:
|
|
|
"Ito ay isang chat log sa pagitan ng AI Assistant na nagta-Tagalog at isang Pilipino. Magsimula ng chat:\nHuman: Hello po?\nAssistant:" |
|
|
|
### HYPERPARAMS |
|
* Trained for ~1 epoch |
|
* rank: 32 |
|
* lora alpha: 32 |
|
* lora dropout: 0 |
|
* lr: 2e-4 |
|
* batch size: 2 |
|
* warmup ratio: 0.075 |
|
* grad steps: 4 |
|
|
|
This Mistral-based model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library; a sketch of the setup follows below.
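For reference, here is a rough reconstruction of that training setup from the hyperparameters listed above, assuming Unsloth's `FastLanguageModel` API and TRL's `SFTTrainer`; `max_seq_length`, `target_modules`, and the dataset text field are illustrative assumptions, not values taken from the actual run.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit for QLoRA-style fine-tuning (assumption).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="SeaLLMs/SeaLLM-7B-v2",
    max_seq_length=2048,  # assumed sequence length
    load_in_4bit=True,
)

# LoRA adapter config matching the hyperparameters listed above.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,             # rank
    lora_alpha=32,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # illustrative choice
)

dataset = load_dataset("922-Narra/synthetic_tagalog_test_02102024", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed column name
    max_seq_length=2048,
    args=TrainingArguments(
        num_train_epochs=1,
        learning_rate=2e-4,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        warmup_ratio=0.075,
        output_dir="outputs",
    ),
)
trainer.train()
```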
|
|
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
|
|
|
### WARNINGS AND DISCLAIMERS |
|
There is still a chance that the model may switch to English or Taglish. |
|
|
|
It is possible that the Tagalog capability still comes mostly from the fine-tuned base model rather than from the dataset.
|
|
|
Finally, this model is not guaranteed to produce aligned or safe outputs, nor is it meant for production use - use at your own risk!