LLMs quantized with GPTQ
Irina Proskurina (iproskurina)
AI & ML interests: quantization and pre-training of LLMs
Recent Activity
• New activity (18 days ago): TheBloke/Mistral-7B-Instruct-v0.2-GPTQ: "weights not used when initializing MistralForCausalLM"
• Updated a model (23 days ago): iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g3
• Updated a model (23 days ago): iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g2
Collections: 4 • Models: 43
iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g3 • Text Generation • Updated • 52
iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g2 • Text Generation • Updated • 32
iproskurina/Mistral-7B-v0.3-GPTQ2-4bit-g1 • Text Generation • Updated • 10
iproskurina/opt-125m-gptq2 • Text Generation • Updated • 14
iproskurina/distilbert-base-alternate-layers • Updated • 2
iproskurina/en_grammar_checker • Updated • 11 • 4
iproskurina/Mistral-7B-v0.3-gptq-3bit • Text Generation • Updated • 14
iproskurina/Mistral-7B-v0.3-GPTQ-8bit-g128 • Text Generation • Updated • 12
iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g128 • Text Generation • Updated • 19
iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g64 • Text Generation • Updated • 10
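The model names above encode the quantization settings: the bit width (3-, 4-, or 8-bit) and the group size (e.g. g128 means weights are quantized in independent groups of 128). As a rough illustration of what those two parameters mean for storage, here is a minimal per-group round-to-nearest sketch in pure Python. Note this is an assumption-laden simplification, not GPTQ itself: real GPTQ quantizes columns sequentially and compensates the accumulated error using second-order (Hessian) information, whereas this sketch only shows grouping and bit width. The function names are illustrative, not from any library.

```python
def quantize_group(weights, bits=4):
    """Asymmetric round-to-nearest quantization of one group of weights.

    Simplified illustration only: real GPTQ additionally propagates
    quantization error across columns using Hessian information.
    Returns the integer codes and the dequantized approximation.
    """
    levels = 2 ** bits - 1                     # 15 representable steps for 4-bit
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / levels or 1.0          # guard against flat groups
    codes = [round((w - lo) / scale) for w in weights]   # ints in [0, levels]
    dequant = [lo + c * scale for c in codes]
    return codes, dequant

def quantize_row(row, bits=4, group_size=128):
    """Quantize a weight row in independent groups, as in '4bit-g128' names.

    Each group stores its own (lo, scale) pair, so smaller groups track the
    local weight range more closely at the cost of more metadata.
    """
    approx = []
    for start in range(0, len(row), group_size):
        _, dq = quantize_group(row[start:start + group_size], bits)
        approx.extend(dq)
    return approx

# Example: with round-to-nearest, each value lands within ~scale/2 of the
# original, where scale is computed per group.
row = [0.1, -0.2, 0.05, 0.3, 1.0, -1.0, 0.5, 0.0]
approx = quantize_row(row, bits=4, group_size=4)
```

Smaller group sizes (g64 vs g128) reduce quantization error because each group's scale adapts to a narrower range of weights, which is one reason the listed models exist in several group-size variants.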