Text Generation
Transformers
GGUF
PyTorch
Safetensors
mistral
quantized
2-bit
3-bit
4-bit precision
5-bit
6-bit
8-bit precision
llama
en
dataset:HuggingFaceH4/ultrafeedback_binarized
dataset:allenai/tulu-v2-sft-mixture
arxiv:2305.18290
arxiv:2311.10702
Inference Endpoints
has_space
text-generation-inference
MaziyarPanahi
committed on
Commit • 44032d2
1 Parent(s): c38af0a
9c82bc16ed666029d7d094597f5a702d7af56283ca75934dd89c5e6b23d87901
Files changed:
- .gitattributes (+1 -0)
- tulu-2-dpo-13b.Q5_K_M.gguf (+3 -0)
.gitattributes
CHANGED
@@ -39,3 +39,4 @@ tulu-2-dpo-13b.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
 tulu-2-dpo-13b.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
 tulu-2-dpo-13b.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
 tulu-2-dpo-13b.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+tulu-2-dpo-13b.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
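The single added line registers tulu-2-dpo-13b.Q5_K_M.gguf with Git LFS, so the repository stores a small pointer while the multi-gigabyte weights live in LFS storage. As a minimal sketch of how such a commit is typically produced (not necessarily the exact tooling behind this one, and assuming a repo id of MaziyarPanahi/tulu-2-dpo-13b-GGUF, which is not stated in the diff), an upload through the huggingface_hub client handles the LFS transfer and the matching .gitattributes rule in one commit:

# Minimal sketch, not the exact tooling behind this commit: huggingface_hub
# uploads large files via Git LFS and records the matching .gitattributes
# rule, consistent with the diff above. repo_id is an assumption.
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="tulu-2-dpo-13b.Q5_K_M.gguf",   # local quantized weights
    path_in_repo="tulu-2-dpo-13b.Q5_K_M.gguf",      # destination path in the repo
    repo_id="MaziyarPanahi/tulu-2-dpo-13b-GGUF",    # assumed repo id (hypothetical)
    commit_message="Upload tulu-2-dpo-13b.Q5_K_M.gguf",
)

Uploading the same file over plain git would instead require a `git lfs track` rule equivalent to the line added above before committing.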
tulu-2-dpo-13b.Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b8003c551efebdc2ed458b90a7f2e7b0e171d7c25d88cd20d1148843d163f5c2
+size 9229924320
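The three added lines are the Git LFS pointer itself: only the spec version, the SHA-256 of the blob, and its size (9,229,924,320 bytes, roughly 9.2 GB) are committed, and the actual Q5_K_M weights are resolved from LFS storage at download time. A minimal usage sketch, assuming the repo id MaziyarPanahi/tulu-2-dpo-13b-GGUF (not stated in this diff) and the llama-cpp-python bindings:

# Minimal sketch: fetch the Q5_K_M file added in this commit and run a prompt
# with llama-cpp-python. repo_id is an assumption; the filename comes from the
# diff above.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="MaziyarPanahi/tulu-2-dpo-13b-GGUF",   # assumed repo id (hypothetical)
    filename="tulu-2-dpo-13b.Q5_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)     # load the quantized model
out = llm("Explain what a Git LFS pointer file is.", max_tokens=128)
print(out["choices"][0]["text"])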