---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# Synatra-7B-v0.3-RP🐧
## Support Me

Synatra is a personal project, developed with the resources of a single person. If you like the model, how about contributing a little toward the research costs?

Wanna be a sponsor? Contact me on Telegram: AlzarTakkarsen
## License

This model is strictly for non-commercial (cc-by-nc-4.0) use only. The "Model" (i.e. the base model and any derivatives, merges, or mixes) is completely free to use for non-commercial purposes, as long as the included cc-by-nc-4.0 license in any parent repository and the non-commercial-use clause remain in place, regardless of other models' licenses. The license may change after a new model is released. If you want to use this model for commercial purposes, contact me.
## Model Details

### Base Model

mistralai/Mistral-7B-Instruct-v0.1

### Trained On

8× A6000 48GB
### Instruction Format

It follows the ChatML format.
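For illustration, a minimal sketch of how ChatML lays out a conversation; in practice `tokenizer.apply_chat_template` produces this rendering for you, so the helper below is purely explanatory:

```python
def to_chatml(messages):
    """Render a list of {"role", "content"} dicts as a ChatML prompt string."""
    rendered = ""
    for m in messages:
        # Each turn is wrapped in <|im_start|>{role} ... <|im_end|> tokens.
        rendered += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Leave an open assistant turn for the model to complete.
    rendered += "<|im_start|>assistant\n"
    return rendered

print(to_chatml([{"role": "user", "content": "Hello!"}]))
```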
## TODO

- [x] Build an RP-based fine-tuned model
- [x] Refine the dataset
- [ ] Improve language comprehension
- [x] Supplement common-sense knowledge
- [ ] Change the tokenizer
## Model Benchmark

### Ko-LLM-Leaderboard

Benchmarking in progress...
## Implementation Code

Since the `chat_template` already contains the instruction format shown above, you can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-7B-v0.3-RP")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-7B-v0.3-RP")

messages = [
    {"role": "user", "content": "바나나는 원래 하얀색이야?"},  # "Are bananas originally white?"
]

# apply_chat_template renders the messages in ChatML and returns the input IDs.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
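Note that `generate` returns the prompt tokens followed by the newly generated ones, so `decoded[0]` also contains the prompt. If you only want the model's reply, slice off the first `model_inputs.shape[1]` tokens before decoding. A self-contained sketch of that slicing with hypothetical token IDs (no model needed):

```python
# Hypothetical token IDs standing in for model_inputs[0] and generated_ids[0].
prompt_ids = [1, 523, 28766]             # the encoded prompt
generated_ids = [1, 523, 28766, 904, 2]  # generate() echoes the prompt first

prompt_len = len(prompt_ids)
reply_ids = generated_ids[prompt_len:]   # keep only the newly generated part
print(reply_ids)  # -> [904, 2]
```

With real tensors the same idea is `generated_ids[:, model_inputs.shape[1]:]` before `batch_decode`.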
### Why is the benchmark score lower than the preview version's?

Apparently, the preview model used an Alpaca-style prompt, which has no prefix, while ChatML does.
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|--------|-------|
| Avg. | 57.38 |
| ARC (25-shot) | 62.2 |
| HellaSwag (10-shot) | 82.29 |
| MMLU (5-shot) | 60.8 |
| TruthfulQA (0-shot) | 52.64 |
| Winogrande (5-shot) | 76.48 |
| GSM8K (5-shot) | 21.15 |
| DROP (3-shot) | 46.06 |