JosephusCheung committed
Commit cc054cf
1 Parent(s): 5e04d1c

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -31,7 +31,7 @@ tags:
 - qwen
 - causallm
 ---
-![](https://huggingface.co/JosephusCheung/tmp/resolve/main/14.17b.png)
+[![CausalLM](https://huggingface.co/JosephusCheung/tmp/resolve/main/14.17b.png)](https://causallm.org/)
 
 *Image drawn by GPT-4 DALL·E 3* **TL;DR: Perhaps better than all existing models < 70B, in most quantitative evaluations...**
 
@@ -58,7 +58,7 @@ It is not recommended to use any form of quantization, but rather to use smaller
 
 Also see [7B Version](https://huggingface.co/CausalLM/7B)
 
-This model was trained based on the model weights of Qwen (and LLaMA2 was used, yes, for calculating some initial weights), you may also need to comply with the commercial use restrictions of these two models depending on the situation. The training process utilized a model structure that was identical to LLaMA2, using the same attention calculation method as the original MHA LLaMA2 models, and no additional scaling applied to the Rotary Positional Encoding (RoPE).
+This model was trained based on the model weights of Qwen (and LLaMA2 was used, yes, for calculating some initial weights), you may also need to comply with the commercial use restrictions of these two models depending on the situation. The training process utilized a model architecture that was identical to LLaMA2, using the same attention calculation method as the original MHA LLaMA2 models, and no additional scaling applied to the Rotary Positional Encoding (RoPE).
 
 We manually curated a SFT dataset of 1.3B tokens for training, utilizing open source datasets from Hugging Face. For most of these sentences, we performed manual or synthetic rewrites and generated alternate language versions using larger language models. Additionally, we conducted augmented text training using carefully selected entries from Wikipedia, as well as featured entries from Fandom and filtered entries from Moegirlpedia. In order to strike a balance between efficiency and quality, 100% of the data used for training was synthetic data, no direct use of text from the internet or original texts from publicly available datasets was employed for fine-tuning.
 
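The README text in the diff above states that the weights follow the LLaMA2 architecture exactly (standard MHA attention, no additional RoPE scaling), so in principle the checkpoint loads through the stock LLaMA code path in `transformers`. The sketch below is an illustration under assumptions, not the author's documented usage: it assumes the checkpoint is published under the `CausalLM/14B` repo id (not stated in this diff) and that its tokenizer resolves via `AutoTokenizer`.

```python
# Minimal sketch, assuming a LLaMA2-compatible config as described in the README.
# "CausalLM/14B" is an assumed repo id; some checkpoints may additionally need
# trust_remote_code=True for the tokenizer, depending on how it is packaged.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CausalLM/14B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # requires `accelerate`; drop for plain CPU loading
)

# The README says the architecture is identical to LLaMA2 (plain MHA) with no
# extra RoPE scaling, so a LLaMA-type config should report rope_scaling = None.
print(model.config.model_type, getattr(model.config, "rope_scaling", None))

prompt = "Briefly explain rotary positional encoding."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```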