Text Generation
Transformers
GGUF
PyTorch
Safetensors
mistral
quantized
2-bit
3-bit
4-bit precision
5-bit
6-bit
8-bit precision
GGUF
llama
en
dataset:HuggingFaceH4/ultrafeedback_binarized
dataset:allenai/tulu-v2-sft-mixture
arxiv:2305.18290
arxiv:2311.10702
Inference Endpoints
has_space
text-generation-inference
MaziyarPanahi committed
Commit c38af0a • 1 Parent(s): 0ae1934
1953497a0ed1ef17bdcbb047869cea24729350f1da753b02eda86e7a25170549
Browse files
- .gitattributes +1 -0
- tulu-2-dpo-13b.Q4_K_S.gguf +3 -0
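This commit adds a Q4_K_S GGUF build of the model together with its Git LFS tracking rule. Below is a minimal sketch of downloading and running that file with huggingface_hub and llama-cpp-python; the repo id `MaziyarPanahi/tulu-2-dpo-13b-GGUF` is an assumption (only the filename appears in this commit), and the prompt format and generation parameters are illustrative.

```python
# Sketch: fetch the quantized GGUF added in this commit and run one prompt.
# Assumes `pip install huggingface_hub llama-cpp-python`; the repo_id below is
# an assumption — only the filename is taken from this commit.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="MaziyarPanahi/tulu-2-dpo-13b-GGUF",  # assumed repo id
    filename="tulu-2-dpo-13b.Q4_K_S.gguf",        # file added in this commit
)

llm = Llama(model_path=model_path, n_ctx=4096)    # 4-bit (Q4_K_S) weights, CPU by default
out = llm("<|user|>\nWhat is DPO?\n<|assistant|>\n", max_tokens=128)
print(out["choices"][0]["text"])
</code>
```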
.gitattributes CHANGED
@@ -38,3 +38,4 @@ tulu-2-dpo-13b.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
 tulu-2-dpo-13b.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
 tulu-2-dpo-13b.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
 tulu-2-dpo-13b.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+tulu-2-dpo-13b.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
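The new .gitattributes line routes the added file through the Git LFS clean/smudge filters so the multi-gigabyte blob is stored in LFS rather than in the Git history itself. A minimal sketch of how such a rule is normally created, assuming git and git-lfs are installed and the working directory is the repository root:

```python
# Sketch: reproduce the .gitattributes rule added in this commit.
# `git lfs track <pattern>` appends a line of the form
#   <pattern> filter=lfs diff=lfs merge=lfs -text
import subprocess

subprocess.run(["git", "lfs", "install"], check=True)  # one-time LFS hook setup
subprocess.run(["git", "lfs", "track", "tulu-2-dpo-13b.Q4_K_S.gguf"], check=True)
subprocess.run(["git", "add", ".gitattributes"], check=True)  # stage the new rule
```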
tulu-2-dpo-13b.Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:88247de4d2832d16463285c14851a6c688406b6892b8533a8d78e7a9b13a448c
+size 7423178720
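These three lines are the entire contents of the file as stored in Git: a Git LFS pointer giving the spec version, the SHA-256 of the real blob, and its size in bytes. A minimal sketch of parsing such a pointer and checking a downloaded blob against it; the local file path is a placeholder:

```python
# Sketch: parse a Git LFS pointer and verify a downloaded blob against it.
import hashlib
from pathlib import Path

def parse_lfs_pointer(text: str) -> dict:
    """Split the pointer's key/value lines (version, oid, size) into a dict."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "sha256": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

def verify_blob(pointer: dict, blob_path: Path) -> bool:
    """Stream the blob in 1 MiB chunks and compare size and SHA-256 to the pointer."""
    digest = hashlib.sha256()
    size = 0
    with blob_path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
            size += len(chunk)
    return size == pointer["size"] and digest.hexdigest() == pointer["sha256"]

pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:88247de4d2832d16463285c14851a6c688406b6892b8533a8d78e7a9b13a448c\n"
    "size 7423178720\n"
)
# Placeholder path: point this at the locally downloaded GGUF file.
# print(verify_blob(pointer, Path("tulu-2-dpo-13b.Q4_K_S.gguf")))
```

As a rough sanity check, 7,423,178,720 bytes spread over roughly 13 billion parameters works out to about 4.6 bits per weight, which is consistent with the Q4_K_S quantization named in the filename.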