---
license: apache-2.0
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
pipeline_tag: text-generation
tags:
- chat
---

## This repo contains EXL2 quants of the model. If you need the original weights, you can find them [here](https://huggingface.co/anthracite-org/magnum-v2-72b).
## The base repo contains only the measurement file; see the revisions below for your quant of choice.

- [measurement.json](https://huggingface.co/anthracite-org/magnum-v2-72b-exl2/tree/main)
- [2.7bpw](https://huggingface.co/anthracite-org/magnum-v2-72b-exl2/tree/2.7bpw)
- [4.0bpw](https://huggingface.co/anthracite-org/magnum-v2-72b-exl2/tree/4.0bpw)
- [6.0bpw](https://huggingface.co/anthracite-org/magnum-v2-72b-exl2/tree/6.0bpw)
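
Each quant lives on its own branch of this repo, so you can pull one by pointing `huggingface_hub` at the matching revision. A minimal sketch (the revision name is taken from the list above):

```py
from huggingface_hub import snapshot_download

# each quant is a branch ("revision") of this repo; "main" holds only
# measurement.json, so pick one of the bpw branches listed above
local_path = snapshot_download(
    repo_id="anthracite-org/magnum-v2-72b-exl2",
    revision="4.0bpw",
)
print(local_path)  # local folder containing the 4.0bpw EXL2 weights
```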

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6491e00e057b0928b3e07b75/u8B-5bEeroN549uxUIisV.png)

This is the seventh (lucky!) model in a series designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. It is fine-tuned on top of [Qwen-2 72B Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct).

## Prompting
The model has been instruct-tuned with ChatML formatting. A typical input looks like this:

```py
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
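
If you use `transformers`, the same prompt can be produced from the tokenizer's chat template rather than assembled by hand. A minimal sketch, assuming the base repo's tokenizer ships the ChatML template shown above:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("anthracite-org/magnum-v2-72b")

messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant"
# turn so the model continues as the assistant
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```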

## Credits
- [anthracite-org/Stheno-Data-Filtered](https://huggingface.co/datasets/anthracite-org/Stheno-Data-Filtered)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
- [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)

This model has been a team effort, and credit goes to all members of Anthracite.

## Training
Training ran for 2 epochs. We used 8x [AMD Instinct™ MI300X Accelerators](https://www.amd.com/en/products/accelerators/instinct/mi300/mi300x.html) for the full-parameter fine-tuning of the model.

We trained with a weight decay of 0.01 to help stabilize the loss trajectory and mitigate catastrophic forgetting, and used a peak learning rate of 4e-6 to keep the second-epoch loss from dropping too sharply (a strong indicator of overfitting).
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6491e00e057b0928b3e07b75/hVd5gNqSLOlWTkUb0A7iE.png)

Sample packing was done at 16k tokens rather than the 8k tokens used in our previous runs.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Safety
...