#21: Request: DOI (opened 2 months ago by TheDandyMan)
#20: Has anyone tried running this model on Ollama? (6 replies, opened 2 months ago by Yuxin362)
#19: vLLM on A100s (4 replies, opened 2 months ago by fsaudm)
#18: Fine-tuning roadmap (4 replies, opened 2 months ago by RonanMcGovern)
#17: CUDA out of memory error during fp8 to bf16 model conversion + fix (1 reply, opened 2 months ago by sszymczyk)
#14: when llm leaderboard? (3 replies, opened 2 months ago by blazespinnaker)
#13: Update README.md (opened 2 months ago by BANblongz)
#12: Please make V3-lite (4 replies, opened 3 months ago by rombodawg)
#9: minimum vram? (11 replies, opened 3 months ago by CHNtentes)
#7: Update README.md (opened 3 months ago by Spestly)
#5: Converted bf16 Model on Hugging Face (2 replies, opened 3 months ago by OpenSourceRonin)
#3: Update README.md (opened 3 months ago by reach-vb)
#2: Smaller version for Home User GPU's (10 replies, opened 3 months ago by apcameron)
#1: How can we thank you enough, whale bros? (10 replies, opened 3 months ago by KrishnaKaasyap)