AKA Tagamistral-7b-v1:

  • Yet another archived test/toy model, fine-tuned on a synthetic Tagalog dataset partially produced by Mistral, based on this dataset
  • Base: SeaLLM
  • GGUF (see the loading sketch after this list)
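Since a GGUF build is mentioned, the quantized file can be run locally with llama-cpp-python. A minimal sketch, assuming a file name and sampling settings that are not specified on this card:

```python
# Minimal llama-cpp-python sketch for the GGUF quantization.
# The model file name is an assumption; substitute the actual GGUF file.
from llama_cpp import Llama

llm = Llama(model_path="tagalog-seallm-7b-v1.Q4_K_M.gguf", n_ctx=2048)

# Prompt format is described under USAGE below.
prompt = (
    "Ito ay isang chat log sa pagitan ng AI Assistant na nagta-Tagalog "
    "at isang Pilipino. Magsimula ng chat:\nHuman: Hello po?\nAssistant:"
)

# Stop at the next "Human:" turn so the model answers only once.
out = llm(prompt, max_tokens=128, stop=["Human:"])
print(out["choices"][0]["text"].strip())
```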

USAGE

This is meant mainly as a chat model.

Best results come from using "Human" and "Assistant" as the turn labels and prompting in Tagalog. Example:

"Ito ay isang chat log sa pagitan ng AI Assistant na nagta-Tagalog at isang Pilipino. Magsimula ng chat:\nHuman: Hello po?\nAssistant:"

HYPERPARAMS

  • Trained for ~1 epoch
  • LoRA rank: 32
  • LoRA alpha: 32
  • LoRA dropout: 0
  • learning rate: 2e-4
  • batch size: 2
  • gradient accumulation steps: 4
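These settings map roughly onto the usual Unsloth + TRL recipe, sketched below. The base checkpoint id, dataset file, sequence length, and text column are assumptions, not taken from this card:

```python
# Rough sketch of the training setup implied by the hyperparameters above.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="SeaLLMs/SeaLLM-7B-v2",  # assumed base; the card only says "SeaLLM"
    max_seq_length=2048,                # assumed sequence length
)

model = FastLanguageModel.get_peft_model(
    model,
    r=32,            # LoRA rank
    lora_alpha=32,   # LoRA alpha
    lora_dropout=0,  # LoRA dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical file standing in for the synthetic Tagalog chat dataset.
dataset = load_dataset("json", data_files="tagalog_chats.json", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed column name
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # batch size: 2
        gradient_accumulation_steps=4,  # gradient accumulation steps: 4
        learning_rate=2e-4,             # learning rate: 2e-4
        num_train_epochs=1,             # ~1 epoch
        output_dir="outputs",
    ),
)
trainer.train()
```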

This Mistral-based model was trained 2x faster with Unsloth and Hugging Face's TRL library.

WARNINGS AND DISCLAIMERS

The model may still occasionally switch to English or Taglish.

It is possible that the model's Tagalog capability still comes mostly from the fine-tuned base rather than from the dataset.

Finally, this model is not guaranteed to produce aligned or safe outputs, nor is it meant for production use - use at your own risk!
