Thaweewat committed
Commit
6a77b68
1 Parent(s): 06ed7b7

Update README.md

Files changed (1)
  1. README.md +7 -1
README.md CHANGED
@@ -11,4 +11,10 @@ datasets:
  - tatsu-lab/alpaca
  - wongnai_reviews
  - wisesight_sentiment
- ---
+ ---
+
+ # 🐃🇹🇭 Buffala-LoRA-TH
+
+ Buffala-LoRA is a 7B-parameter LLaMA model finetuned to follow instructions. It is trained on the Stanford Alpaca (TH), WikiTH, Pantip, and IAppQ&A datasets and uses the Hugging Face LLaMA implementation. For more information, please visit [the project's website](https://github.com/tloen/alpaca-lora).
+
+
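Since the model described in this README is a LoRA adapter on top of a 7B LLaMA base, a minimal loading sketch with `transformers` and `peft` might look like the following. The base-checkpoint and adapter repository ids below are placeholder assumptions (they are not stated in this commit), and the Alpaca-style prompt template is assumed from the referenced alpaca-lora project.

```python
# Sketch: load a LLaMA base model and attach LoRA adapter weights with peft.
# Repo ids are placeholders, not confirmed by this commit.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model = "decapoda-research/llama-7b-hf"   # assumed 7B LLaMA base checkpoint
adapter_repo = "Thaweewat/buffala-lora-th"     # placeholder adapter repo id

tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto"
)
# Apply the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(model, adapter_repo)

# Alpaca-style instruction prompt (format assumed from tloen/alpaca-lora).
prompt = "### Instruction:\nแปลประโยคนี้เป็นภาษาอังกฤษ: สวัสดีครับ\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Keeping the adapter separate from the base model is the usual LoRA workflow: only the small adapter weights are downloaded from the model repo, and they are merged or applied at load time.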