s3nh committed on
Commit
9aeceb2
1 Parent(s): d7442ff

Upload ./ with huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,10 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ AlexWortega-Vikhr-7b-0.1.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ AlexWortega-Vikhr-7b-0.1.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ AlexWortega-Vikhr-7b-0.1.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ AlexWortega-Vikhr-7b-0.1.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ AlexWortega-Vikhr-7b-0.1.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ AlexWortega-Vikhr-7b-0.1.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ AlexWortega-Vikhr-7b-0.1.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
AlexWortega-Vikhr-7b-0.1.Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f78cb740e9d60ff835610999cbd3bb8c39a439aedb57b41df5de14fe52d51935
+ size 3120964256
AlexWortega-Vikhr-7b-0.1.Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:394af1203055009d651eb37927cbd0242621c9526bbd922959aca45de961062b
+ size 3205761696
AlexWortega-Vikhr-7b-0.1.Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bef10dcde7d1581b7155e2ce769e4ef22f84f8635697d723e866cb24dbef48ae
+ size 4413985440
AlexWortega-Vikhr-7b-0.1.Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9f63c96adf4d30891c2751c5c6f6213227bfc421e56cc55427f0ccba3e82f30d
+ size 4185920160
AlexWortega-Vikhr-7b-0.1.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c9d6ceb4c56675969ee640a75c06f8bc8c44ba37cddf609b7ea5370b01fec5b0
+ size 5047358112
AlexWortega-Vikhr-7b-0.1.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20b2a6acdbdb0a51926c0017dd8f30254eff11a3a96a270ec572c2c2ecfe10e2
+ size 5996059296
AlexWortega-Vikhr-7b-0.1.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d38e322eea40a9ba976900207fc4e2cf2660c3034f6b3bf39653fea4eec374ea
+ size 7765723808
README.md ADDED
@@ -0,0 +1,47 @@
+
+ ---
+ license: openrail
+ pipeline_tag: text-generation
+ library_name: transformers
+ language:
+ - zh
+ - en
+ ---
+
+ ## Original model card
+
+ Buy me a coffee if you like this project ;)
+ <a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt="Buy Me A Coffee"></a>
+
+ #### Description
+
+ GGUF format model files for [Vikhr-7b-0.1](https://huggingface.co/AlexWortega/Vikhr-7b-0.1).
+
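+ The quantized files can be fetched with `huggingface_hub`. A minimal sketch; the `repo_id` below is an assumed placeholder for this repository, and the filename is one of the quantizations listed in this commit:
+
+ ```python
+ from huggingface_hub import hf_hub_download
+
+ # repo_id is a placeholder; use the id of the repository hosting these GGUF files
+ model_path = hf_hub_download(
+     repo_id="s3nh/AlexWortega-Vikhr-7b-0.1-GGUF",
+     filename="AlexWortega-Vikhr-7b-0.1.Q4_K_M.gguf",
+ )
+ print(model_path)  # local cache path of the downloaded file
+ ```
+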
+ ### GGUF Specs
+
+ GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
+
+ * Single-file deployment: models can be easily distributed and loaded, and do not require any external files for additional information.
+ * Extensible: new features can be added to GGML-based executors / new information can be added to GGUF models without breaking compatibility with existing models.
+ * mmap compatibility: models can be loaded using mmap for fast loading and saving.
+ * Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
+ * Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
+
+ The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows new metadata to be added without breaking compatibility with existing models, and lets a model be annotated with additional information that may be useful for inference or for identifying the model.
+
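+ A minimal sketch of inspecting that metadata, assuming the `gguf` Python package that ships with llama.cpp (`pip install gguf`) and its `GGUFReader` API; the file name is just an example from this repository:
+
+ ```python
+ from gguf import GGUFReader  # pip install gguf
+
+ # Open one of the quantized files and walk its header
+ reader = GGUFReader("AlexWortega-Vikhr-7b-0.1.Q4_K_M.gguf")
+
+ # Key-value metadata (architecture, context length, tokenizer, ...)
+ for key in reader.fields:
+     print(key)
+
+ # Tensor names and shapes stored in the same file
+ for tensor in reader.tensors:
+     print(tensor.name, tensor.shape)
+ ```
+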
+ ### Perplexity params
+
+ | Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
+ |-------|---------|------|--------|--------|--------|------|------|--------|--------|------|------|--------|--------|------|------|-----|
+ | 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
+ | 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
+
+ ### Inference
+
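+ A minimal llama-cpp-python sketch, assuming the Q4_K_M file from this repository; the prompt format and sampling parameters are placeholders and should be adapted to the model:
+
+ ```python
+ from llama_cpp import Llama  # pip install llama-cpp-python
+
+ llm = Llama(
+     model_path="AlexWortega-Vikhr-7b-0.1.Q4_K_M.gguf",  # any quantization from this repo
+     n_ctx=2048,        # context window
+     n_gpu_layers=0,    # raise if built with GPU offload support
+ )
+
+ output = llm(
+     "Question: What is the GGUF format?\nAnswer:",  # placeholder prompt
+     max_tokens=128,
+     temperature=0.7,
+     stop=["\n\n"],
+ )
+ print(output["choices"][0]["text"])
+ ```
+
+ The same file also works with the llama.cpp CLI and other GGML-based executors that support GGUF.
+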
+ # Original model card