---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: SeaLLMs/SeaLLM-7B-v2
datasets:
- 922-Narra/synthetic_tagalog_test_02102024
---
# AKA Tagamistral-7b-v1:
* Yet another archived test/toy model, fine-tuned on a synthetic Tagalog dataset partially generated by [Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) from [this dataset](https://huggingface.co/datasets/jfernandez/cebuano-filipino-sentences)
* Base: [SeaLLM](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2)
* [GGUF](https://huggingface.co/922-Narra/tagalog-seallm-7b-v1-gguf)
### USAGE
This is meant mainly as a chat model.
Best results come from using "Human" and "Assistant" turn labels and prompting in Tagalog. Example:
"Ito ay isang chat log sa pagitan ng AI Assistant na nagta-Tagalog at isang Pilipino. Magsimula ng chat:\nHuman: Hello po?\nAssistant:"
### HYPERPARAMS
* Trained for ~1 epoch
* rank: 32
* lora alpha: 32
* lora dropout: 0
* lr: 2e-4
* batch size: 2
* grad steps: 4
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
### WARNINGS AND DISCLAIMERS
There is still a chance that the model may switch to English or Taglish.
It is possible that the model's Tagalog capability comes mostly from the base model rather than from the fine-tuning dataset.
Finally, this model is not guaranteed to produce aligned or safe outputs, nor is it meant for production use - use at your own risk!