JingzeShi committed
Commit b9c1616 · verified · 1 Parent(s): 4af8536

Update README.md

Files changed (1)
  1. README.md +22 -25
README.md CHANGED
@@ -22,9 +22,9 @@ tags:
  <a href="https://discord.gg/P2yYH95N" target="_blank" style="margin: 2px;">
  <img alt="Discord" src="https://img.shields.io/badge/Discord-Small%20Doges-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
  </a>
- <a href="https://arxiv.org/abs/2412.11834" target="_blank" style="margin: 2px;">
  <img alt="arXiv" src="https://img.shields.io/static/v1?label=arXiv&message=2412.11834&color=B31B1B&logo=arXiv" style="display: inline-block; vertical-align: middle;"/>
- </a>
  <a href="https://github.com/SmallDoges/small-doge" target="_blank" style="margin: 2px;">
  <img alt="GitHub" src="https://img.shields.io/badge/GitHub-SmallDoge-181717?logo=github" style="display: inline-block; vertical-align: middle;"/>
  </a>
@@ -33,7 +33,7 @@ tags:
  </a>
  </div>

- Doge uses Dynamic Mask Attention as sequence transformation and can use Multi-Layer Perceptron or Cross Domain Mixture of Experts as state transformation. Dynamic Mask Attention allows the Transformer to use self-attention during training and state space during inference, and Cross Domain Mixture of Experts can directly inherit the weights of Multi-Layer Perceptron for further training. This model is trained by [SmallDoge](https://huggingface.co/SmallDoge) community, for detailed algorithm and model architecture, please refer to [Wonderful Matrices](https://arxiv.org/abs/2412.11834), all training details and code are publicly available on the [small-doge](https://github.com/SmallDoges/small-doge) repository.


  ## Uses
@@ -52,28 +52,28 @@ Doge uses Dynamic Mask Attention as sequence transformation and can use Multi-La

  ## Model Details

- We build the Doge by doing Per-Training on [Smollm-Corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus).
-
- > NOTE: If you want to continue pre-training this model, you can find the unconverged checkpoint [here](https://huggingface.co/SmallDoge/Doge-60M-checkpoint).
-
- > NOTE: These models has not been fine-tuned for instruction, the instruction model is [here](https://huggingface.co/SmallDoge/Doge-60M-Instruct).

- > TODO: The larger model is under training and will be uploaded soon.

  **Pre-Training**:

- | Model | Training Data | Steps | Content Length | Tokens | LR | Batch Size | Precision |
- |---|---|---|---|---|---|---|---|
- | [Doge-20M](https://huggingface.co/SmallDoge/Doge-20M) | [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 8k | 2048 | 4B | 8e-3 | 0.5M | bfloat16 |
- | [Doge-60M](https://huggingface.co/SmallDoge/Doge-60M) | [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 16k | 2048 | 16B | 6e-3 | 1M | bfloat16 |

  **Evaluation**:

- | Model | MMLU | TriviaQA | ARC-E | ARC-C | PIQA | HellaSwag | OBQA | Winogrande | tokens / s on CPU |
- |---|---|---|---|---|---|---|---|---|---|
- | [Doge-20M](https://huggingface.co/SmallDoge/Doge-20M) | 25.43 | 0.03 | 36.83 | 22.78 | 58.38 | 27.25 | 25.60 | 50.20 | 142 |
- | [Doge-60M](https://huggingface.co/SmallDoge/Doge-60M) | 26.41 | 0.18 | 50.46 | 25.34 | 61.43 | 31.45 | 28.00 | 50.75 | 62 |

  > All evaluations are done using five-shot settings, without additional training on the benchmarks.

  **Procedure**:
@@ -91,13 +91,10 @@ We build the Doge by doing Per-Training on [Smollm-Corpus](https://huggingface.c
  ## Citation

  ```bibtex
- @misc{shi2024wonderfulmatrices,
- title={Wonderful Matrices: Combining for a More Efficient and Effective Foundation Model Architecture},
- author={Jingze Shi and Bingheng Wu},
- year={2024},
- eprint={2412.11834},
- archivePrefix={arXiv},
- primaryClass={cs.LG},
- url={https://arxiv.org/abs/2412.11834},
  }
  ```
 
  <a href="https://discord.gg/P2yYH95N" target="_blank" style="margin: 2px;">
  <img alt="Discord" src="https://img.shields.io/badge/Discord-Small%20Doges-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
  </a>
+ <!-- <a href="https://arxiv.org/abs/2412.11834" target="_blank" style="margin: 2px;">
  <img alt="arXiv" src="https://img.shields.io/static/v1?label=arXiv&message=2412.11834&color=B31B1B&logo=arXiv" style="display: inline-block; vertical-align: middle;"/>
+ </a> -->
  <a href="https://github.com/SmallDoges/small-doge" target="_blank" style="margin: 2px;">
  <img alt="GitHub" src="https://img.shields.io/badge/GitHub-SmallDoge-181717?logo=github" style="display: inline-block; vertical-align: middle;"/>
  </a>

  </a>
  </div>

+ Doge uses Dynamic Mask Attention as sequence transformation and can use Multi-Layer Perceptron or Cross Domain Mixture of Experts as state transformation. Dynamic Mask Attention allows the Transformer to use self-attention during training and state space during inference, and Cross Domain Mixture of Experts can directly inherit the weights of the Multi-Layer Perceptron for further training. This model is trained by the [SmallDoge](https://huggingface.co/SmallDoge) community; a paper detailing the algorithm and model architecture is coming soon, and all training details and code are available in the [small-doge](https://github.com/SmallDoges/small-doge) repository.
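
A minimal generation sketch (not the card's own usage snippet, which lives under `## Uses`): it assumes the checkpoints load through `transformers`' `AutoModelForCausalLM` and `AutoTokenizer` with `trust_remote_code=True`, since Doge is a custom architecture.

```python
# Minimal sketch: load a Doge checkpoint and generate text.
# Assumption: the repo ships custom modeling code, hence trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SmallDoge/Doge-60M"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hey, how are you doing today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```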

  ## Uses


  ## Model Details

+ We built Doge by pre-training it on [Smollm-Corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus). If you want to continue pre-training this model, you can find the unconverged checkpoint [here](https://huggingface.co/SmallDoge/Doge-320M-checkpoint). These models have not been fine-tuned for instruction; the instruction-tuned model is available [here](https://huggingface.co/SmallDoge/Doge-320M-Instruct).
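
For the instruction-tuned variant linked above, a quick chat-style check might look like the sketch below; it assumes the Instruct repo provides a chat template usable via `tokenizer.apply_chat_template` and also loads with `trust_remote_code=True` (assumptions, not stated on this card).

```python
# Hypothetical quick check of the instruction-tuned variant.
# Assumption: the Instruct repo ships a chat template and custom modeling code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SmallDoge/Doge-320M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Build a prompt from a single user turn and generate a reply.
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hi! Who are you?"}],
    add_generation_prompt=True,
    return_tensors="pt",
)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```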
 
 
 
 
  **Pre-Training**:

+ | Model | Training Data | Steps | Content Length | Tokens | LR | Batch Size | Precision | RTX 4090 GPU hours |
+ |---|---|---|---|---|---|---|---|---|
+ | [Doge-20M](https://huggingface.co/SmallDoge/Doge-20M) | [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 8k | 2048 | 4B | 8e-3 | 0.5M | bfloat16 | 14 |
+ | [Doge-60M](https://huggingface.co/SmallDoge/Doge-60M) | [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 16k | 2048 | 16B | 6e-3 | 1M | bfloat16 | 128 |
+ | [Doge-160M](https://huggingface.co/SmallDoge/Doge-160M) | [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 24k | 2048 | 32B | 4e-3 | 1.5M | bfloat16 | 522 |
+ | [Doge-320M](https://huggingface.co/SmallDoge/Doge-320M) | [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 32k | 2048 | 64B | 2e-3 | 2M | bfloat16 | 1856 |

  **Evaluation**:

+ | Model | MMLU | TriviaQA | ARC | PIQA | HellaSwag | OBQA | Winogrande | tokens / s on i7-11 CPU |
+ |---|---|---|---|---|---|---|---|---|
+ | [Doge-20M](https://huggingface.co/SmallDoge/Doge-20M) | 25.4 | 0.03 | 29.8 | 58.4 | 27.3 | 25.6 | 50.2 | 142 |
+ | [Doge-60M](https://huggingface.co/SmallDoge/Doge-60M) | 26.4 | 0.2 | 37.9 | 61.4 | 31.5 | 28.0 | 50.8 | 62 |
+ | [Doge-160M](https://huggingface.co/SmallDoge/Doge-160M) | 29.2 | 4.8 | 44.4 | 70.1 | 43.4 | 34.4 | 52.2 | 28 |
+ | [Doge-320M](https://huggingface.co/SmallDoge/Doge-320M) | 33.8 | 9.4 | 52.1 | 73.9 | 52.7 | 37.9 | 55.0 | 16 |

+ > [!NOTE]
  > All evaluations are done using five-shot settings, without additional training on the benchmarks.
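
The card does not say which harness produced these numbers; one way to run a comparable five-shot evaluation is sketched below, assuming EleutherAI's lm-evaluation-harness (the harness choice and task names are assumptions, not the authors' documented setup).

```python
# Sketch of a five-shot benchmark run with lm-evaluation-harness (pip install lm-eval).
# The harness and task list are assumptions; the table above may use different tooling.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=SmallDoge/Doge-60M,trust_remote_code=True",
    tasks=["arc_easy", "piqa", "hellaswag", "winogrande", "openbookqa"],
    num_fewshot=5,
)
print(results["results"])  # per-task metrics, comparable to the table above
```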

  **Procedure**:

  ## Citation

  ```bibtex
+ @misc{smalldoges,
+ title={SmallDoges},
+ author={SmallDoge Team and Shi, Jingze and Wu, Yifan and Wu, Bingheng},
+ year={2025},
+ month={March},
  }
  ```