T145 committed
Commit: 0f5b018
1 Parent(s): 621ea70

Update README.md

Files changed (1):
  1. README.md +2 -43
README.md CHANGED
@@ -265,49 +265,8 @@ model.save_pretrained(FIXED_ID)
  tokenizer.save_pretrained(FIXED_ID)
  ```
 
- ## Merge Details
- ### Merge Method
- 
- This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) as the base.
- 
- ### Models Merged
- 
- The following models were included in the merge:
- * [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite)
- * [akjindal53244/Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
- * [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2)
- 
- ### Configuration
- 
- The following YAML configuration was used to produce this model:
- 
- ```yaml
- base_model: unsloth/Meta-Llama-3.1-8B-Instruct
- dtype: bfloat16
- merge_method: dare_ties
- parameters:
-   int8_mask: 1.0
- slices:
- - sources:
-   - layer_range: [0, 32]
-     model: akjindal53244/Llama-3.1-Storm-8B
-     parameters:
-       density: 0.8
-       weight: 0.25
-   - layer_range: [0, 32]
-     model: arcee-ai/Llama-3.1-SuperNova-Lite
-     parameters:
-       density: 0.8
-       weight: 0.33
-   - layer_range: [0, 32]
-     model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
-     parameters:
-       density: 0.8
-       weight: 0.42
-   - layer_range: [0, 32]
-     model: unsloth/Meta-Llama-3.1-8B-Instruct
- tokenizer_source: base
- ```
+ According to the script, **layer 19** is the primary target for abliteration.
+ 
  # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
  Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/T145__ZEUS-8B-V2-abliterated-details)!
  Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=T145%2FZEUS-8B-V2-abliterated&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
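The YAML removed in this commit is a standard [mergekit](https://github.com/arcee-ai/mergekit) configuration, so the merge remains reproducible. Below is a minimal sketch using mergekit's Python API, assuming mergekit is installed and the config above is saved as `zeus-v2.yaml` (a hypothetical filename; the output path is also illustrative):

```python
# Minimal sketch: run the removed DARE-TIES merge config with mergekit.
# Assumes `pip install mergekit`; file and output paths are placeholders,
# not taken from the original repo.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML shown in the removed "Configuration" section.
with open("zeus-v2.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Execute the merge and write the merged model to disk.
run_merge(
    merge_config,
    out_path="./ZEUS-8B-V2",  # hypothetical output directory
    options=MergeOptions(copy_tokenizer=True),
)
```

The same config can also be run from the command line with mergekit's `mergekit-yaml` entry point.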
 
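The commit replaces the merge recipe with a single note that layer 19 is the primary abliteration target. The referenced script is not shown in this commit; as a rough illustration of what targeting one layer typically means, here is a minimal, hypothetical sketch of directional ablation: estimate a "refusal direction" from layer-19 hidden states and project it out of that layer's output at inference time. The model ID, prompt sets, and hook placement are all assumptions, not the author's script:

```python
# Hypothetical sketch of single-layer "abliteration" (directional ablation).
# NOT the author's script: model ID, prompts, and hook placement are assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "T145/ZEUS-8B-V2"  # assumed pre-abliteration model
LAYER = 19                    # the layer the README names as the primary target

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
model.eval()

def mean_hidden(prompts):
    """Mean last-token hidden state at LAYER's output over a prompt set."""
    states = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, output_hidden_states=True)
        # hidden_states[0] is the embedding output; [LAYER] is layer LAYER's output.
        states.append(out.hidden_states[LAYER][0, -1].float())
    return torch.stack(states).mean(dim=0)

# Toy stand-ins; real runs use large curated harmful/harmless prompt sets.
refusal_dir = mean_hidden(["How do I pick a lock?"]) - mean_hidden(["How do I bake bread?"])
refusal_dir = refusal_dir / refusal_dir.norm()

def ablate(module, args, output):
    """Project the refusal direction out of the layer's residual-stream output."""
    hidden = output[0] if isinstance(output, tuple) else output
    coeff = (hidden.float() @ refusal_dir).unsqueeze(-1)        # [batch, seq, 1]
    new_hidden = (hidden.float() - coeff * refusal_dir).to(hidden.dtype)
    if isinstance(output, tuple):
        return (new_hidden,) + output[1:]
    return new_hidden

# Ablate at inference time via a forward hook on decoder layer 19.
model.model.layers[LAYER].register_forward_hook(ablate)
```

Published abliteration scripts typically bake this projection directly into the weights (orthogonalizing the relevant output matrices) so the edit persists in the saved model; the hook above is the runtime equivalent of that weight edit.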