---
inference: false
language:
- en
pipeline_tag: text-generation
tags:
- llama
- llama-2
---
# Aetheria-L2-70B-exl2
Exllama v2 quant of [royallab/Aetheria-L2-70B](https://huggingface.co/royallab/Aetheria-L2-70B)
Branches:
- main: measurement.json calculated with 2048-token calibration rows on PIPPA
- 5.0bpw-h6: 5.0 decoder bits per weight, 6 head bits
  - ideal for 2x 24 GB GPUs at 8192 context, or 1x 48 GB GPU at 8192 context with CFG cache (see the download/loading sketch below)
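
As a rough sketch of how one of these branches can be used: the snippet below downloads a single quant branch with `huggingface_hub` and loads it with the exllamav2 Python API. The repo id, paths, and sampler settings here are placeholders/assumptions, and the API calls follow exllamav2's example scripts, so adjust them for your setup.

```python
# Sketch: fetch one quant branch and run a quick generation with exllamav2.
# The repo id below is a placeholder for this repository; the API usage follows
# exllamav2's example scripts and may differ between library versions.
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Each branch is a separate quant, so download only the one you want.
model_dir = snapshot_download(
    repo_id="royallab/Aetheria-L2-70B-exl2",  # placeholder: replace with this repo's id
    revision="5.0bpw-h6",
)

# Configure the model and split layers across available GPUs.
config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()
config.max_seq_len = 8192  # matches the context length suggested above

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

print(generator.generate_simple("Once upon a time,", settings, 128))
```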