Quantization made by Richard Erkhov.
ft-solar-10.7b-v2.1-dpo - GGUF
- Model creator: https://huggingface.co/ifuseok/
- Original model: https://huggingface.co/ifuseok/ft-solar-10.7b-v2.1-dpo/
Original model description:
language:
- ko
pipeline_tag: text-generation
datasets:
- nlpai-lab/databricks-dolly-15k-ko
- kyujinpy/KOR-OpenOrca-Platypus-v3
- KETI-AIR/kor_boolq
- heegyu/open-korean-instructions
license: cc-by-nc-sa-4.0
Input Models input text only.
Output Models generate text only.
Base Model yanolja/KoSOLAR-10.7B-v0.1
Training Dataset
- nlpai-lab/databricks-dolly-15k-ko
- kyujinpy/KOR-OpenOrca-Platypus-v3
- heegyu/open-korean-instructions
- KETI-AIR/kor_boolq
- A portion of AI Hub translation data
Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "ifuseok/sft-solar-10.7b-v2.1-dpo"

# Load the model in half precision, sharded across available devices
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
Prompt Example
```
### System:
This is the system message.

### User:
This is the user message.

### Assistant
This is the assistant message.
```
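As a minimal sketch of applying the prompt template above, the helper below assembles a single-turn prompt. The function name `build_prompt` is an assumption for illustration and is not part of the original card; the `### System:` / `### User:` / `### Assistant` layout follows the example (note the card's `### Assistant` header has no trailing colon).

```python
# Hypothetical helper, not from the original model card:
# assembles a single-turn prompt in the card's "### ..." template.
def build_prompt(system: str, user: str) -> str:
    """Build a prompt string ending at the assistant header,
    ready to be tokenized and passed to the model's generate()."""
    return (
        f"### System:\n{system}\n\n"
        f"### User:\n{user}\n\n"
        f"### Assistant\n"
    )

prompt = build_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```

The resulting string would then be tokenized with `OpenOrca_tokenizer` and passed to `OpenOrca.generate`; decoding stops naturally after the assistant's reply.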