---
base_model: sentence-transformers/all-MiniLM-L6-v2
license: apache-2.0
library_name: sentence-transformers
model_creator: Sentence Transformers
quantized_by: Second State Inc.
language: en
tags:
  - sentence-transformers
  - feature-extraction
  - sentence-similarity
  - transformers
---

# All-MiniLM-L6-v2-GGUF

## Original Model

[sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)

## Run with LlamaEdge

- LlamaEdge version: coming soon
- Context size: 384

## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | -------- |
| all-MiniLM-L6-v2-Q2_K.gguf | Q2_K | 2 | 19.2 MB | smallest, significant quality loss - not recommended for most purposes |
| all-MiniLM-L6-v2-Q3_K_L.gguf | Q3_K_L | 3 | 20.5 MB | small, substantial quality loss |
| all-MiniLM-L6-v2-Q3_K_M.gguf | Q3_K_M | 3 | 19.9 MB | very small, high quality loss |
| all-MiniLM-L6-v2-Q3_K_S.gguf | Q3_K_S | 3 | 19.2 MB | very small, high quality loss |
| all-MiniLM-L6-v2-Q4_0.gguf | Q4_0 | 4 | 19.7 MB | legacy; small, very high quality loss - prefer using Q3_K_M |
| all-MiniLM-L6-v2-Q4_K_M.gguf | Q4_K_M | 4 | 21 MB | medium, balanced quality - recommended |
| all-MiniLM-L6-v2-Q4_K_S.gguf | Q4_K_S | 4 | 20.7 MB | small, greater quality loss |
| all-MiniLM-L6-v2-Q5_0.gguf | Q5_0 | 5 | 21 MB | legacy; medium, balanced quality - prefer using Q4_K_M |
| all-MiniLM-L6-v2-Q5_K_M.gguf | Q5_K_M | 5 | 21.7 MB | large, very low quality loss - recommended |
| all-MiniLM-L6-v2-Q5_K_S.gguf | Q5_K_S | 5 | 21.5 MB | large, low quality loss - recommended |
| all-MiniLM-L6-v2-Q6_K.gguf | Q6_K | 6 | 24.2 MB | very large, extremely low quality loss |
| all-MiniLM-L6-v2-Q8_0.gguf | Q8_0 | 8 | 25 MB | very large, extremely low quality loss - not recommended |
| all-MiniLM-L6-v2-ggml-model-f16.gguf | f16 | 16 | 45.9 MB | full precision (unquantized), largest - not recommended for most purposes |
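This model produces 384-dimensional sentence embeddings (matching the context size above), which are typically compared with cosine similarity. Below is a minimal numpy sketch of that comparison, independent of any particular runtime; the random vectors are stand-ins for real embedding output, not actual model values:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 384-dim vectors standing in for embeddings of two similar sentences.
rng = np.random.default_rng(0)
e1 = rng.normal(size=384)
e2 = e1 + 0.1 * rng.normal(size=384)  # small perturbation of e1

print(cosine_similarity(e1, e2))  # close to 1.0 for near-identical vectors
```

With real embeddings, a higher score means the two sentences are closer in meaning; scores range from -1 to 1.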

*Quantized with llama.cpp b2334*