Update README.md
#6 by TikaToka - opened

README.md CHANGED

@@ -5,7 +5,7 @@ license: apache-2.0
 datasets:
 - simplescaling/s1K-1.1
 base_model:
-- Qwen/Qwen2.5-
+- Qwen/Qwen2.5-0.5B-Instruct
 library_name: transformers
 ---
 
@@ -13,7 +13,7 @@ library_name: transformers
 
 > s1.1 is our successor to [s1](https://huggingface.co/simplescaling/s1-32B), with better reasoning performance achieved by leveraging reasoning traces from r1 instead of Gemini.
 
-- **Logs:** https://wandb.ai/
+- **Logs:** https://wandb.ai/tikatoka-snu/s1/runs/x4q29quz
 - **Repository:** [simplescaling/s1](https://github.com/simplescaling/s1)
 - **Paper:** https://arxiv.org/abs/2501.19393
 
@@ -24,14 +24,8 @@ Thanks to [Bespoke Labs](https://huggingface.co/bespokelabs) ([Ryan Marten](http
 
 The model usage is documented [here](https://github.com/simplescaling/s1?tab=readme-ov-file#inference).
 
-
+---
 
-| Metric | s1-32B | s1.1-32B | o1-preview | o1 | DeepSeek r1 | DeepSeek r1-distill |
-|---|---|---|---|---|---|---|
-| # examples | 1K | 1K | ? | ? | >800K | 800K |
-| AIME2024 | 56.7 | 56.7 | 40.0 | 74.4 | 79.8 | 72.6 |
-| AIME2025 I | 26.7 | 60.0 | 37.5 | ? | 65.0 | 46.1 |
-| MATH500 | 93.0 | 95.4 | 81.4 | 94.8 | 97.3 | 94.3 |
-| GPQA-Diamond | 59.6 | 63.6 | 75.2 | 77.3 | 71.5 | 62.1 |
+Note that s1-32B and s1.1-32B use budget forcing in this table; specifically ignoring end-of-thinking and appending "Wait" up to four times.
 
-
+The model is trained with block_size 20000.
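
For quick reference, here is a minimal way to try the checkpoint this PR describes with plain `transformers`. This is a hedged sketch, not the documented inference path linked in the diff; the model id is an assumed placeholder.

```python
# Hedged sketch: plain transformers generation, not the repository's
# documented inference setup. The model id below is an assumed
# placeholder for the checkpoint this card describes.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="simplescaling/s1.1-32B",  # assumption: substitute this card's checkpoint
    device_map="auto",
)

messages = [{"role": "user", "content": "How many r's are in raspberry?"}]
result = pipe(messages, max_new_tokens=512)
# For chat-style input, generated_text holds the full message list;
# the last entry is the assistant's reply.
print(result[0]["generated_text"][-1]["content"])
```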
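The budget forcing mentioned in the added note (ignoring the end-of-thinking delimiter and appending "Wait" up to four times) can be sketched as a decode loop. Everything below, the `<|im_start|>answer` delimiter string, the model id, and the generation settings, is an assumption for illustration, not the repository's exact implementation.

```python
# Hedged budget-forcing sketch. The delimiter string, model id, and
# generation settings are assumptions for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "simplescaling/s1.1-32B"     # assumption: substitute this card's checkpoint
END_OF_THINKING = "<|im_start|>answer"  # assumed end-of-thinking delimiter
MAX_WAITS = 4                           # "appending 'Wait' up to four times"

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

def continue_text(text: str, max_new_tokens: int = 2048) -> str:
    """Generate a continuation of `text`, returning only the new tokens."""
    inputs = tok(text, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False)

text = (
    "<|im_start|>user\nHow many r's are in raspberry?<|im_end|>\n"
    "<|im_start|>think\n"
)
for _ in range(MAX_WAITS):
    chunk = continue_text(text)
    if END_OF_THINKING not in chunk:
        text += chunk  # the thinking budget was not exhausted; stop forcing
        break
    # Ignore end-of-thinking: drop the delimiter and append "Wait" so the
    # model keeps reasoning.
    text += chunk.split(END_OF_THINKING)[0] + "Wait"

# Let the answer phase begin and generate the final response.
text += END_OF_THINKING + "\n"
print(continue_text(text))
```

The repository's documented inference (linked in the diff) goes through vLLM with stop tokens; this loop only illustrates the forcing logic.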