DeusImperator's Collections
EXL2 70B for 24GB VRAM
Updated 23 days ago

70B LLMs that fit in 24GB VRAM with over 16k context (with EXL2 Q4 cache)
DeusImperator/Midnight-Miqu-70B-v1.5_exl2_2.4bpw_rpcal · Text Generation · Updated May 19 · 3
DeusImperator/Midnight-Miqu-70B-v1.5_exl2_2.4bpw_rpcal_mk2 · Text Generation · Updated May 19 · 9
DeusImperator/Midnight-Miqu-70B-v1.5_exl2_2.4bpw · Text Generation · Updated May 19 · 2
DeusImperator/Dark-Miqu-70B_exl2_2.4bpw · Text Generation · Updated May 24 · 1
DeusImperator/Dark-Miqu-70B_exl2_2.4bpw_rpcal_mk2 · Text Generation · Updated May 25 · 3
DeusImperator/Dark-Miqu-70B_exl2_2.4bpw_rpcal_long · Updated 27 days ago
DeusImperator/Midnight-Miqu-70B-v1.5_exl2_2.4bpw_rpcal_long · Text Generation · Updated 23 days ago
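As a rough sanity check on the "fits in 24GB VRAM with over 16k context" claim, the sketch below estimates memory use for a 2.4 bpw EXL2 quant of a 70B model with a Q4-quantized KV cache at 16k context. All architecture figures are assumptions (a Llama-2-70B-style model, which Miqu is reported to be: 80 layers, 8 KV heads via GQA, head dim 128), and the ~4.5 effective bits per cache element is a guess that allows for quantization scales; treat the numbers as approximate.

```python
# Back-of-envelope VRAM estimate for a 2.4 bpw EXL2 quant of a 70B model
# with a Q4 KV cache at 16k context. Architecture numbers are assumptions
# (Llama-2-70B-style: 80 layers, 8 KV heads, head dim 128).

GIB = 1024 ** 3  # bytes per GiB


def exl2_vram_estimate_gib(params=70e9, bpw=2.4,
                           n_layers=80, n_kv_heads=8, head_dim=128,
                           ctx_len=16384, cache_bits=4.5):
    """Return (weights_gib, kv_cache_gib, total_gib), all approximate."""
    weights_bytes = params * bpw / 8          # quantized weight storage
    # KV cache: K and V tensors, one per layer, per KV head, per position
    kv_elems = 2 * n_layers * n_kv_heads * head_dim * ctx_len
    kv_bytes = kv_elems * cache_bits / 8      # ~4.5 bits/elem incl. scales
    return (weights_bytes / GIB,
            kv_bytes / GIB,
            (weights_bytes + kv_bytes) / GIB)


w, kv, total = exl2_vram_estimate_gib()
print(f"weights ~{w:.1f} GiB, Q4 KV cache ~{kv:.2f} GiB, total ~{total:.1f} GiB")
```

Under these assumptions the weights come to roughly 19-20 GiB and the Q4 cache at 16k context to well under 2 GiB, leaving a couple of GiB of headroom for activations and buffers on a 24 GiB card, which is consistent with the collection's description. An unquantized FP16 cache at the same context would be about four times larger, which is why the Q4 cache matters here.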