Rombos-LLM-V2.6-Nemotron-70b by Rombodawg


ExLlamaV2 Quantization

Quantized with ExLlamaV2 v0.2.3

2.2 Bits Per Weight

4.65 Bits Per Weight
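
A minimal sketch of loading one of these EXL2 quants with the exllamav2 Python library. The directory path, sequence length, and prompt are placeholders, and the calls follow the patterns in the exllamav2 project's own examples; exact signatures may differ slightly between versions.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Placeholder: local directory containing the downloaded EXL2 quant
model_dir = "/path/to/Rombos-LLM-V2.6-Nemotron-70b-exl2"

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)

# Lazy cache lets load_autosplit spread the weights across available GPUs
cache = ExLlamaV2Cache(model, max_seq_len=8192, lazy=True)
model.load_autosplit(cache, progress=True)

tokenizer = ExLlamaV2Tokenizer(config)

# The dynamic generator uses paged attention when flash-attn is installed;
# pass paged=False to fall back if it is not available
generator = ExLlamaV2DynamicGenerator(
    model=model,
    cache=cache,
    tokenizer=tokenizer,
)

output = generator.generate(
    prompt="Explain the difference between 2.2 and 4.65 bits per weight quantization.",
    max_new_tokens=200,
)
print(output)
```

Note that at 2.2 bits per weight the model fits in far less VRAM than the 4.65 bpw quant, at the cost of output quality.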



I applied the last step of my continuous finetuning method to the Nemotron-70b model from Nvidia. More details below:

Quants: (Coming Soon)

Open-LLM-Leaderboard scores: (Coming soon)
