Apel-sin committed 5fba40a (parent: 3ab35e0): Update README.md

Files changed (1): README.md (+19 −0)
---
library_name: transformers
license: llama3
---
# ExLlama v2 quantization of mlabonne/NeuralDaredevil-8B-abliterated

Quantized with <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.21">turboderp's ExLlamaV2 v0.0.21</a>.
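As a rough sketch of how such a quantization is produced: ExLlamaV2's `convert.py` takes the fp16 source model, a target bits per weight, and optionally an existing measurement.json to skip the measurement pass. The invocation below is illustrative only; all paths are placeholders, and the flags are ExLlamaV2's standard conversion arguments.

```shell
# Hypothetical ExLlamaV2 conversion run (paths are placeholders):
#   -i   source fp16 model directory
#   -o   scratch/working directory
#   -cf  output directory for the final quantized model
#   -b   target bits per weight
#   -hb  lm_head bits
#   -m   reuse an existing measurement.json instead of re-measuring
python convert.py -i ./NeuralDaredevil-8B-abliterated -o ./work \
  -cf ./NeuralDaredevil-8B-abliterated-6.5bpw -b 6.5 -hb 8 \
  -m ./measurement.json
```

Reusing a shared measurement.json is what makes it cheap to emit several bits-per-weight variants of the same model.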
<b>The "main" branch contains only the measurement.json; download one of the other branches to get the model itself.</b>

Each branch holds the model quantized at a different bits per weight, while main carries only the measurement.json needed for further conversions.
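Because the weights live on per-bpw branches, a download has to name the branch explicitly. A minimal sketch (the local directory names are placeholders; on the Hub, a branch is selected via `--revision` or git's `--branch`):

```shell
# Download only the 6_5 branch with the Hugging Face CLI
# (revision = branch name on the Hub)
huggingface-cli download Apel-sin/llama-3-NeuralDaredevil-8B-abliterated-exl2 \
  --revision 6_5 --local-dir ./NeuralDaredevil-8B-exl2-6.5bpw

# Equivalent with plain git, fetching just that one branch
git clone --single-branch --branch 6_5 \
  https://huggingface.co/Apel-sin/llama-3-NeuralDaredevil-8B-abliterated-exl2
```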

Original model: <a href="https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated">mlabonne/NeuralDaredevil-8B-abliterated</a><br>
Calibration dataset: <a href="https://huggingface.co/datasets/cosmicvalor/toxic-qna">toxic-qna</a>

## Available sizes

| Branch | Bits | lm_head bits | Description |
| ------ | ---- | ------------ | ----------- |
| [8_0](https://huggingface.co/Apel-sin/llama-3-NeuralDaredevil-8B-abliterated-exl2/tree/8_0) | 8.0 | 8.0 | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Apel-sin/llama-3-NeuralDaredevil-8B-abliterated-exl2/tree/6_5) | 6.5 | 8.0 | Very similar to 8.0, good tradeoff of size vs. performance, **recommended**. |
| [5_5](https://huggingface.co/Apel-sin/llama-3-NeuralDaredevil-8B-abliterated-exl2/tree/5_5) | 5.5 | 8.0 | Slightly lower quality than 6.5, but usable on 8 GB cards. |
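A back-of-the-envelope way to read the table: the quantized weight payload scales roughly as parameter count × bits per weight ÷ 8 bytes. A small sketch, assuming ~8.03B parameters for a Llama-3-8B-class model; it ignores the 8-bit lm_head and format overhead, so real downloads are somewhat larger:

```python
def approx_weight_gb(n_params: float, bpw: float) -> float:
    """Rough size of the quantized weight payload in gigabytes."""
    return n_params * bpw / 8 / 1e9  # bits -> bytes -> GB

N_PARAMS = 8.03e9  # approximate Llama-3-8B parameter count (assumption)

for bpw in (8.0, 6.5, 5.5):
    print(f"{bpw} bpw ≈ {approx_weight_gb(N_PARAMS, bpw):.1f} GB")
# 8.0 bpw ≈ 8.0 GB
# 6.5 bpw ≈ 6.5 GB
# 5.5 bpw ≈ 5.5 GB
```

This is why the 5.5 bpw branch, plus room for the KV cache, is the one that fits on 8 GB cards.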

# Llama-3-8B-Instruct-abliterated-v3 Model Card

[My Jupyter "cookbook" to replicate the methodology can be found here, refined library coming soon](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb)