---
library_name: transformers
license: llama3
---

# Model Card for Llama-3-8B-Instruct-abliterated-v2
## Overview

This model card describes Llama-3-8B-Instruct-abliterated-v2, an orthogonalized version of meta-llama/Llama-3-8B-Instruct and an improvement upon the previous generation, Llama-3-8B-Instruct-abliterated. Certain weights in this variant have been manipulated to inhibit the model's ability to express refusal.

[Join the Cognitive Computations Discord!](https://discord.gg/cognitivecomputations)
## Details

* The model was trained with more data to better pinpoint the "refusal direction".
* This model is MUCH better at directly and succinctly answering requests, without producing even so much as a disclaimer.
## Methodology

The methodology used to generate this model is described in the preview paper/blog post '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)'.
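The core idea from that post is to estimate a "refusal direction" in the model's activation space and project it out of the weights. A minimal sketch of that idea, using a difference-of-means direction and a toy weight matrix (function names and data here are illustrative, not the actual notebook code):

```python
import torch

def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    """Difference-of-means estimate of the 'refusal direction' (unit vector)."""
    d = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return d / d.norm()

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component along `direction` from a matrix that writes
    to the residual stream. weight: (d_model, d_in); direction: (d_model,)."""
    proj = torch.outer(direction, direction @ weight)
    return weight - proj

# Toy demonstration: activations on "harmful" prompts shifted along one axis.
torch.manual_seed(0)
harmful = torch.randn(32, 8) + torch.tensor([1.0] + [0.0] * 7)
harmless = torch.randn(32, 8)
d = refusal_direction(harmful, harmless)

W = torch.randn(8, 8)
W_abl = orthogonalize(W, d)
# After ablation, the weight matrix can no longer write along the direction:
print(torch.allclose(d @ W_abl, torch.zeros(8), atol=1e-5))  # prints True
```

In the real procedure the activations come from contrasting prompt sets run through the model, and the projection is applied to the attention-output and MLP-output matrices of every layer; see the notebook linked below for the actual implementation.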
## Quirks and Side Effects

This model may come with interesting quirks, as the methodology is still new and untested. The code used to generate the model is available in the Python notebook [ortho_cookbook.ipynb](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb).

Please note that the model may still refuse to answer certain requests, even after the weights have been manipulated to inhibit refusal.
## Availability

GGUF quants are available [here](https://huggingface.co/failspy/Llama-3-8B-Instruct-abliterated-v2-GGUF).

## How to Use

This model is available for use in the Transformers library.
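A minimal loading sketch with the standard Transformers chat API (the prompt and generation settings below are illustrative; note the script downloads the full 8B-parameter weights):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "failspy/Llama-3-8B-Instruct-abliterated-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Explain orthogonalization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```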