1B AWQ Collection
1 billion parameter models, quantized with AWQ. 4 items.
Rho-1 base models are pretrained with Selective Language Modeling (SLM), which selectively trains on clean, useful tokens that align with the desired distribution.