RichardErkhov committed 19e5d5f (1 parent: 2e0359f)

uploaded readme

Files changed (1): README.md added (+74 −0)
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

K2S3-Mistral-7b-v1.3 - GGUF
- Model creator: https://huggingface.co/Changgil/
- Original model: https://huggingface.co/Changgil/K2S3-Mistral-7b-v1.3/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [K2S3-Mistral-7b-v1.3.Q2_K.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.Q2_K.gguf) | Q2_K | 2.6GB |
| [K2S3-Mistral-7b-v1.3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.IQ3_XS.gguf) | IQ3_XS | 2.89GB |
| [K2S3-Mistral-7b-v1.3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.IQ3_S.gguf) | IQ3_S | 3.04GB |
| [K2S3-Mistral-7b-v1.3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.Q3_K_S.gguf) | Q3_K_S | 3.02GB |
| [K2S3-Mistral-7b-v1.3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.IQ3_M.gguf) | IQ3_M | 3.14GB |
| [K2S3-Mistral-7b-v1.3.Q3_K.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.Q3_K.gguf) | Q3_K | 3.35GB |
| [K2S3-Mistral-7b-v1.3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.Q3_K_M.gguf) | Q3_K_M | 3.35GB |
| [K2S3-Mistral-7b-v1.3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.Q3_K_L.gguf) | Q3_K_L | 3.64GB |
| [K2S3-Mistral-7b-v1.3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.IQ4_XS.gguf) | IQ4_XS | 3.76GB |
| [K2S3-Mistral-7b-v1.3.Q4_0.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.Q4_0.gguf) | Q4_0 | 3.91GB |
| [K2S3-Mistral-7b-v1.3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.IQ4_NL.gguf) | IQ4_NL | 3.95GB |
| [K2S3-Mistral-7b-v1.3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.Q4_K_S.gguf) | Q4_K_S | 3.94GB |
| [K2S3-Mistral-7b-v1.3.Q4_K.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.Q4_K.gguf) | Q4_K | 4.15GB |
| [K2S3-Mistral-7b-v1.3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.Q4_K_M.gguf) | Q4_K_M | 2.62GB |
| [K2S3-Mistral-7b-v1.3.Q4_1.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.Q4_1.gguf) | Q4_1 | 3.13GB |
| [K2S3-Mistral-7b-v1.3.Q5_0.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.Q5_0.gguf) | Q5_0 | 4.75GB |
| [K2S3-Mistral-7b-v1.3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.Q5_K_S.gguf) | Q5_K_S | 4.75GB |
| [K2S3-Mistral-7b-v1.3.Q5_K.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.Q5_K.gguf) | Q5_K | 4.87GB |
| [K2S3-Mistral-7b-v1.3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.Q5_K_M.gguf) | Q5_K_M | 4.87GB |
| [K2S3-Mistral-7b-v1.3.Q5_1.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.Q5_1.gguf) | Q5_1 | 5.16GB |
| [K2S3-Mistral-7b-v1.3.Q6_K.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.Q6_K.gguf) | Q6_K | 5.63GB |
| [K2S3-Mistral-7b-v1.3.Q8_0.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf/blob/main/K2S3-Mistral-7b-v1.3.Q8_0.gguf) | Q8_0 | 5.73GB |
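The table links point at the `/blob/` browser view of each file; Hugging Face serves the raw file at the same path under `/resolve/`. A minimal sketch of building a direct-download URL for a chosen quant (the `gguf_url` helper is illustrative, not part of this repo):

```python
# Build a direct-download URL for one of the quant files listed above.
# Hugging Face serves raw files under /resolve/<revision>/, while the
# table links use the /blob/ browser view of the same paths.
REPO = "RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.3-gguf"

def gguf_url(quant: str, revision: str = "main") -> str:
    """Return the raw-file URL for a quant name such as 'Q4_K_M'."""
    return (
        f"https://huggingface.co/{REPO}/resolve/{revision}/"
        f"K2S3-Mistral-7b-v1.3.{quant}.gguf"
    )

print(gguf_url("Q4_K_M"))
```

The resulting URL can be fetched with any downloader (e.g. `curl -L -O`), and the local `.gguf` file can then be loaded by GGUF-aware runtimes such as llama.cpp.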
Original model description:

---
license: cc-by-nc-4.0
language:
- en
- ko
---

---
## Developed by:
* K2S3

## Model Number:
* K2S3-Mistral-7b-v1.3

## Base Model:
* mistralai/Mistral-7B-v0.1
* Changgil/K2S3-Mistral-7b-v1.2
### Training Data
* The training data for this model includes alpaca-gpt4-data and samples from the OpenOrca dataset.

### Training Method
* This model was fine-tuned from the "Changgil/K2S3-Mistral-7b-v1.2" base model using full-parameter supervised fine-tuning (SFT).

### Hardware
* Hardware: trained on two A100 80GB GPUs.
* Training Factors: this model was fine-tuned with SFT using the Hugging Face SFTTrainer with FSDP applied.