FP8 LLMs for vLLM • Collection • Accurate FP8 quantized models by Neural Magic, ready for use with vLLM! • 44 items • Updated Oct 17, 2024 • 67
notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1 Text Generation • Updated Apr 11, 2024 • 25
notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-30 Text Generation • Updated Mar 14, 2024 • 558
LLM Safety Leaderboard 🥇 View and submit machine learning model evaluations • Running on CPU • 90