Update README.md (#1)
- Update README.md (d1751998087dcf43b1a671e76f510632f1996439)

README.md (changed):
In addition, Doge uses Dynamic Mask Attention as sequence transformation and can use a Multi-Layer Perceptron or a Cross Domain Mixture of Experts as state transformation. Dynamic Mask Attention allows the Transformer to use self-attention during training and state space during inference, and the Cross Domain Mixture of Experts can directly inherit the weights of the Multi-Layer Perceptron for further training. This model was trained by Jingze Shi and supports only text input and text generation. For the detailed algorithm and model architecture, please refer to [Wonderful Matrices](https://arxiv.org/abs/2412.11834); the ongoing research repository is [Wonderful Matrices](https://github.com/LoserCheems/WonderfulMatrices).
## Uses

```python
...
```
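The full usage snippet is truncated above. As a rough, non-authoritative sketch, loading a Doge checkpoint with the standard `transformers` Auto classes could look like the following; the `trust_remote_code=True` flag, the prompt, and the generation settings are assumptions, not taken from the original example:

```python
# Minimal sketch (assumptions noted above): load a Doge checkpoint with the
# generic transformers Auto classes and generate a short continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JingzeShi/Doge-20M"  # checkpoint listed in the Training table below

# trust_remote_code=True is assumed here because Doge ships a custom architecture
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hey, how are you doing today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```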
> TODO: The larger model is under training and will be uploaded soon.

**Training**:

| Model | Training Data | Epochs | Steps | Context Length | Tokens | LR | Batch Size | Precision |
|---|---|---|---|---|---|---|---|---|
| [Doge-20M](https://huggingface.co/JingzeShi/Doge-20M) | [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 2 | 10k | 2048 | 5B | 8e-4 | 0.25M | bfloat16 |
| [Doge-60M](https://huggingface.co/JingzeShi/Doge-60M) | [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 2 | 20k | 2048 | 20B | 6e-4 | 0.5M | bfloat16 |
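The Tokens column appears to be the product of the other schedule columns. As a quick sanity check, and assuming that Steps are counted per epoch and that Batch Size is measured in tokens (both assumptions, not stated in the table), the reported totals are reproduced exactly:

```python
# Sanity check of the Training table above.
# Assumptions (not stated in the table): Steps are per epoch, Batch Size is in tokens.
for name, epochs, steps, batch_tokens, reported_b in [
    ("Doge-20M", 2, 10_000, 0.25e6, 5),
    ("Doge-60M", 2, 20_000, 0.50e6, 20),
]:
    total_b = epochs * steps * batch_tokens / 1e9
    print(f"{name}: {total_b:.0f}B tokens computed vs {reported_b}B reported")
```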
**Evaluation**:

| Model | TriviaQA | MMLU | ARC | PIQA | HellaSwag | OBQA | Winogrande |
|---|---|---|---|---|---|---|---|
| [Doge-20M](https://huggingface.co/JingzeShi/Doge-20M) | - | 26.01 | 36.15 | 56.26 | 26.60 | 26.60 | 50.12 |
| [Doge-60M](https://huggingface.co/JingzeShi/Doge-60M) | - | 25.81 | 45.49 | 61.37 | 29.65 | 27.40 | 52.57 |
**Environment**:

- Image: nvcr.io/nvidia/pytorch:24.10-py3
- Hardware: 1x NVIDIA RTX 4090
- Software: Transformers
## Citation

```bibtex