JosephusCheung committed 10b2aef (parent: 1188206): Update README.md

Files changed (1): README.md (+43 −1)
README.md CHANGED
@@ -29,4 +29,46 @@ tags:
 - llama
 - llama2
 ---
- ![](https://huggingface.co/JosephusCheung/tmp/resolve/main/14.17b.png)
+ ![](https://huggingface.co/JosephusCheung/tmp/resolve/main/14.17b.png)
+
+ # Read Me:
+
+ This model was trained from the model weights of Qwen and LLaMA2. Training used a model structure identical to LLaMA2, with the same attention computation as the original MHA LLaMA2 models and no additional scaling applied to the Rotary Position Embedding (RoPE).
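As a rough illustration of what "no additional scaling" means here: unscaled RoPE rotates each pair of feature dimensions by an angle proportional to the absolute position, with the base 10000 used by the original LLaMA2 implementation and no position interpolation or NTK-style rescaling. A minimal sketch (interleaved pairing convention; function name and layout are illustrative, not this repo's training code):

```python
import numpy as np

def rope(x, base=10000.0):
    """Apply unscaled rotary position embeddings (RoPE) to x of shape
    (seq_len, dim). No position interpolation / frequency rescaling is
    applied, matching the plain LLaMA2 setup described above."""
    seq_len, dim = x.shape
    # One rotation frequency per pair of dimensions.
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    angles = np.outer(np.arange(seq_len), inv_freq)   # (seq_len, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin                # rotate each pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out
```

Because the transform is a pure rotation, it preserves vector norms, and position 0 is left unrotated.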
+
+ We manually curated a dataset of 1.3 billion sentences for training, drawing on open-source datasets from Hugging Face. For most of these sentences, we performed manual or synthetic rewrites and generated alternate-language versions using larger language models. We also conducted augmented text training using carefully selected entries from Wikipedia, featured entries from Fandom, and filtered entries from Moegirlpedia. To strike a balance between efficiency and quality, 100% of the data used for training was synthetic; no text from the internet or original text from publicly available datasets was used directly for fine-tuning.
+
+ The 7B version of the model is a distilled version of the 14B model, designed specifically for speculative sampling. Exercise caution when using the 7B model directly, as it may produce hallucinations or unreliable outputs.
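In speculative sampling, the small draft model proposes tokens that the large model then verifies: each drafted token is accepted with probability min(1, p/q), and on rejection a replacement is drawn from the renormalized residual max(0, p − q). A toy sketch of that accept/reject step over explicit categorical distributions (the standard rule from the speculative-decoding literature, not this repository's code):

```python
import random

def speculative_step(p_large, q_draft, drafted_token, rng=random):
    """One accept/reject step of speculative sampling.
    p_large, q_draft: dicts mapping token -> probability under the
    target (large) and draft (small) models at the current position."""
    p = p_large.get(drafted_token, 0.0)
    q = q_draft.get(drafted_token, 1e-12)
    if rng.random() < min(1.0, p / q):
        return drafted_token          # draft token accepted
    # Rejected: resample from the residual distribution max(0, p - q);
    # rng.choices renormalizes the weights implicitly.
    residual = {t: max(0.0, p_large.get(t, 0.0) - q_draft.get(t, 0.0))
                for t in p_large}
    toks, weights = zip(*residual.items())
    return rng.choices(toks, weights=weights, k=1)[0]
```

This rule guarantees that accepted-or-resampled tokens are distributed exactly as if sampled from the large model alone, which is why a distilled draft model speeds up decoding without changing the output distribution.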
+
+ Please note that the model was trained on unfiltered internet data. Since we do not have the capacity to vet all of it, the training data may contain a substantial amount of objectionable content, pornography, violence, and offensive language that we were unable to remove. You will therefore still need to perform your own safety checks on the model and filter keywords in its output. Due to computational resource constraints, we are presently unable to apply RLHF for the model's ethics and safety, nor to train on SFT samples that refuse to answer certain questions for restrictive fine-tuning.
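A minimal output keyword filter of the kind suggested above might look like the following (a naive sketch with a placeholder keyword list; production deployments would use proper moderation tooling rather than substring matching):

```python
import re

def filter_output(text, blocked_keywords, replacement="[filtered]"):
    """Replace blocked keywords in model output, case-insensitively.
    blocked_keywords is a user-supplied list; this is only a last-line
    check on top of whatever upstream safety measures you apply."""
    for kw in blocked_keywords:
        text = re.sub(re.escape(kw), replacement, text, flags=re.IGNORECASE)
    return text
```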
+
+ Bonus: The model underwent some fine-tuning on the prompt format introduced in LLaVA 1.5, which is unrelated to image attention computation. Therefore, aligning a ViT projection module with the frozen LM under visual instructions would enable rapid implementation of effective multimodal capabilities.
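For reference, LLaVA-1.5-style prompts follow a Vicuna-like turn format in which an `<image>` placeholder marks where the projected visual tokens are spliced in. A sketch from memory of that format (the exact system prompt and helper name here are assumptions; verify against the upstream LLaVA repository before relying on it):

```python
def llava15_prompt(question, system=None):
    """Build a LLaVA-1.5-style prompt string; "<image>" marks where the
    ViT-projected visual tokens are inserted by the multimodal pipeline."""
    system = system or ("A chat between a curious user and an artificial "
                        "intelligence assistant. The assistant gives helpful, "
                        "detailed, and polite answers to the user's questions.")
    return f"{system} USER: <image>\n{question} ASSISTANT:"
```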
+
+ ## MMLU:
+ STEM ACC: 64.19
+
+ Humanities ACC: 61.40
+
+ Other ACC: 71.64
+
+ Social ACC: 75.37
+
+ **AVERAGE ACC: 67.36**
+
+ **AVERAGE ACC: 63.82**
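Note that MMLU averages are typically computed per question rather than per category, so a reported overall accuracy need not equal the unweighted mean of the four category scores. For reference, the unweighted category mean here would be (a quick arithmetic check, not a figure from the model card):

```python
# MMLU category scores reported above.
scores = {"STEM": 64.19, "Humanities": 61.40, "Other": 71.64, "Social": 75.37}
macro_avg = sum(scores.values()) / len(scores)  # unweighted category mean
print(round(macro_avg, 2))  # 68.15
```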
+
+ ## CEval (Val):
+ STEM ACC: 66.71
+
+ Social Science ACC: 85.10
+
+ Humanities ACC: 76.68
+
+ Other ACC: 70.23
+
+ Hard ACC: 54.71
+
+ **AVERAGE ACC: 73.10**
+
+ ## GSM8K
+
+ **Zero-shot ACC**