SicariusSicariiStuff committed
Commit 198c154 · verified · 1 Parent(s): e828abb

Update README.md
Files changed (1): README.md (+10 -3)
As of **June 11, 2024**, I've finally **started training** the model! Training is progressing smoothly, although it will take some time. I used a combination of model merges and an abliterated model as the base, followed by a comprehensive deep-unalignment protocol to **unalign the model to its core**. A common issue with uncensoring and unaligning models is that it often **significantly** impacts their base intelligence. To mitigate these drawbacks, I've included a substantial corpus of common sense, theory of mind, and various other material to counteract the effects of the deep uncensoring process. Given the extensive corpus involved, training will require at least a week of continuous compute. Expected early results: in about 3-4 days.

# Additional info:
<details>
<summary><b>As of June 13, 2024, I've observed that even after two days of continuous training, the model is still resistant to learning certain aspects.</b></summary>

For example, some of the validation data still shows a loss above **2.3**, whereas other parts have a loss of **0.3** or lower. This is after the model was initially abliterated.

These observations underscore the critical importance of fine-tuning for alignment. Given the current pace, training will likely extend beyond a week. However, the end result should be **interesting**. If the additional datasets focused on logic and common sense are effective, we should achieve a model that is **nearly completely unaligned** while still retaining its core 'intelligence.'
<img src="https://i.imgur.com/b6unKyS.png" alt="LLAMA-3_Unaligned_Training" style="width: 60%; min-width: 600px; display: block; margin: auto;">

</details>
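A single averaged validation loss can hide exactly this kind of resistance, which is why tracking loss per data subset matters. Below is a minimal, hypothetical sketch of the idea; the subset names and per-token probabilities are invented for illustration and are not the model's actual numbers:

```python
import math
from collections import defaultdict

def cross_entropy(prob_of_target: float) -> float:
    """Per-token cross-entropy: -log(p) assigned to the correct next token."""
    return -math.log(prob_of_target)

# Hypothetical probabilities the model assigns to the correct token,
# tagged by which validation subset each example came from.
validation_tokens = [
    ("unalignment", 0.10),   # resistant data: low probability -> high loss
    ("unalignment", 0.11),
    ("common_sense", 0.75),  # well-learned data: high probability -> low loss
    ("common_sense", 0.80),
]

per_subset = defaultdict(list)
for subset, p in validation_tokens:
    per_subset[subset].append(cross_entropy(p))

# Mean loss per subset reveals the gap that a global average would blur.
losses = {s: sum(v) / len(v) for s, v in per_subset.items()}
for subset, loss in sorted(losses.items()):
    print(f"{subset}: mean loss {loss:.2f}")
```

With these toy numbers the resistant subset sits around a loss of 2.25 while the well-learned one is near 0.26, mirroring the 2.3-versus-0.3 gap described above.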
<details>
<summary><b>June 18, 2024 Update: After extensive testing of the intermediate checkpoints, significant progress has been made.</b></summary>

The model is slowly — I mean, really slowly — unlearning its alignment. By significantly lowering the learning rate, I was able to visibly observe deep behavioral changes. This process is taking longer than anticipated, but it will be worth it. Estimated time to completion: 4 more days. I'm pleased to report that in several tests, the model not only maintained its intelligence but actually showed a slight improvement, especially in terms of common sense. An intermediate checkpoint of this model was used to create [invisietch/EtherealRainbow-v0.3-rc7](https://huggingface.co/invisietch/EtherealRainbow-v0.3-rc7-8B-GGUF), with promising results. It seems I'm on the right track. I hope this model will serve as a solid foundation for further merges, whether for role-playing (RP) or for uncensoring. This approach also lets us save on actual fine-tuning, thereby reducing our carbon footprint: the merge process takes just a few minutes of CPU time, instead of days of GPU work.

Cheers,
Sicarius

</details>
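To illustrate why merging is so cheap compared to training, here is a minimal, hypothetical sketch of a linear weight merge. It is a simplified stand-in for real merge tooling (which operates on full checkpoints, not toy dicts), and the parameter names and values are invented:

```python
def linear_merge(state_a, state_b, alpha=0.5):
    """Weighted average of two models' parameters: alpha*A + (1-alpha)*B.

    A single pass over the weights -- no gradients, no GPU -- which is
    why a merge takes minutes of CPU time instead of days of training.
    """
    assert state_a.keys() == state_b.keys(), "models must share an architecture"
    return {
        name: [alpha * a + (1 - alpha) * b
               for a, b in zip(state_a[name], state_b[name])]
        for name in state_a
    }

# Toy "state dicts": parameter name -> flat list of weights.
base = {"layer.weight": [1.0, 2.0], "layer.bias": [0.0, 0.0]}
unaligned = {"layer.weight": [3.0, 4.0], "layer.bias": [1.0, 1.0]}

merged = linear_merge(base, unaligned, alpha=0.5)
print(merged["layer.weight"])  # -> [2.0, 3.0]
```

Real merge tools add refinements (per-layer interpolation schedules, spherical interpolation, task-vector arithmetic), but the core operation is this same cheap element-wise pass over the weights.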
# Model instruction template: