Quantization made by Richard Erkhov.
mistral-ko-7b-tech - GGUF
- Model creator: https://huggingface.co/shleeeee/
- Original model: https://huggingface.co/shleeeee/mistral-ko-7b-tech/
Name | Quant method | Size |
---|---|---|
mistral-ko-7b-tech.Q2_K.gguf | Q2_K | 2.53GB |
mistral-ko-7b-tech.IQ3_XS.gguf | IQ3_XS | 2.81GB |
mistral-ko-7b-tech.IQ3_S.gguf | IQ3_S | 2.96GB |
mistral-ko-7b-tech.Q3_K_S.gguf | Q3_K_S | 2.95GB |
mistral-ko-7b-tech.IQ3_M.gguf | IQ3_M | 3.06GB |
mistral-ko-7b-tech.Q3_K.gguf | Q3_K | 3.28GB |
mistral-ko-7b-tech.Q3_K_M.gguf | Q3_K_M | 3.28GB |
mistral-ko-7b-tech.Q3_K_L.gguf | Q3_K_L | 3.56GB |
mistral-ko-7b-tech.IQ4_XS.gguf | IQ4_XS | 3.67GB |
mistral-ko-7b-tech.Q4_0.gguf | Q4_0 | 3.83GB |
mistral-ko-7b-tech.IQ4_NL.gguf | IQ4_NL | 3.87GB |
mistral-ko-7b-tech.Q4_K_S.gguf | Q4_K_S | 3.86GB |
mistral-ko-7b-tech.Q4_K.gguf | Q4_K | 4.07GB |
mistral-ko-7b-tech.Q4_K_M.gguf | Q4_K_M | 4.07GB |
mistral-ko-7b-tech.Q4_1.gguf | Q4_1 | 4.24GB |
mistral-ko-7b-tech.Q5_0.gguf | Q5_0 | 4.65GB |
mistral-ko-7b-tech.Q5_K_S.gguf | Q5_K_S | 4.65GB |
mistral-ko-7b-tech.Q5_K.gguf | Q5_K | 4.78GB |
mistral-ko-7b-tech.Q5_K_M.gguf | Q5_K_M | 4.78GB |
mistral-ko-7b-tech.Q5_1.gguf | Q5_1 | 5.07GB |
mistral-ko-7b-tech.Q6_K.gguf | Q6_K | 5.53GB |
mistral-ko-7b-tech.Q8_0.gguf | Q8_0 | 7.17GB |
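To try one of these quants locally, a minimal sketch using huggingface_hub and llama-cpp-python could look like the one below. The repo_id is an assumption (substitute this repository's actual id), and Q4_K_M is just a reasonable mid-size pick.

```python
# Minimal sketch: fetch a single quant and run it with llama-cpp-python.
# repo_id is hypothetical -- replace it with this GGUF repository's actual id.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/mistral-ko-7b-tech-gguf",  # hypothetical id
    filename="mistral-ko-7b-tech.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
# llama.cpp prepends the BOS token (<s>) automatically, so only the [INST] tags are needed.
out = llm("[INST]한국의 수도는 어디인가요?[/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```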
Original model description:
---
language:
- ko
pipeline_tag: text-generation
tags:
- finetune
license: other
---
Model Card for mistral-ko-7b-tech
mistral-ko-7b-tech is a fine-tuned version of the Mistral-7B model, trained on a Korean dataset.
Model Details
- Model Developers : shleeeee (Seunghyeon Lee), oopsung (Sungwoo Park)
- Repository : To be added
- Model Architecture : mistral-ko-7b-tech is a fine-tuned version of Mistral-7B-v0.1.
- LoRA target modules : q_proj, k_proj, v_proj, o_proj, gate_proj (a configuration sketch follows this list)
- train_batch : 4
- Max_step : 500
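As a point of reference, below is a hedged sketch of the corresponding peft configuration. Only target_modules comes from the card; r, lora_alpha, and lora_dropout are placeholder values, not the authors' settings.

```python
# Sketch of a comparable LoRA setup with peft.
# target_modules is taken from the card above; r, lora_alpha, and
# lora_dropout are assumed placeholders, not the authors' actual values.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                # assumed rank
    lora_alpha=16,      # assumed scaling factor
    lora_dropout=0.05,  # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj"],
    task_type="CAUSAL_LM",
)
```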
Dataset
Korean Custom Dataset (2000)
Prompt template: Mistral
<s>[INST]{instruction}[/INST]{output}</s>
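For illustration, a tiny helper that fills this template; the function name and placeholders are ours, not part of the card:

```python
# Illustrative helper for the Mistral prompt template above.
def build_prompt(instruction: str, output: str = "") -> str:
    prompt = f"<s>[INST]{instruction}[/INST]"
    if output:  # append the response only when building training examples
        prompt += f"{output}</s>"
    return prompt

print(build_prompt("한국어로 자기소개를 해줘."))
# -> <s>[INST]한국어로 자기소개를 해줘.[/INST]
```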
Usage
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("shleeeee/mistral-ko-7b-tech")
model = AutoModelForCausalLM.from_pretrained("shleeeee/mistral-ko-7b-tech")
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="shleeeee/mistral-ko-7b-tech")
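Building on the pipeline above, a short generation sketch. The sampling parameters are illustrative defaults, not recommendations from the model authors.

```python
# Generate with the pipeline defined above; parameters are illustrative.
prompt = "[INST]한국의 수도는 어디인가요?[/INST]"
result = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```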
Evaluation