Files changed (1)
  1. README.md +5 -11
README.md CHANGED
@@ -5,7 +5,7 @@ license: apache-2.0
 datasets:
 - simplescaling/s1K-1.1
 base_model:
-- Qwen/Qwen2.5-32B-Instruct
+- Qwen/Qwen2.5-0.5B-Instruct
 library_name: transformers
 ---
 
@@ -13,7 +13,7 @@ library_name: transformers
 
 > s1.1 is our successor of [s1](https://huggingface.co/simplescaling/s1-32B) with better reasoning performance by leveraging reasoning traces from r1 instead of Gemini.
 
-- **Logs:** https://wandb.ai/hashimoto-group/o1/runs/m1ilia77/overview
+- **Logs:** https://wandb.ai/tikatoka-snu/s1/runs/x4q29quz
 - **Repository:** [simplescaling/s1](https://github.com/simplescaling/s1)
 - **Paper:** https://arxiv.org/abs/2501.19393
 
@@ -24,14 +24,8 @@ Thanks to [Bespoke Labs](https://huggingface.co/bespokelabs) ([Ryan Marten](http
 
 The model usage is documented [here](https://github.com/simplescaling/s1?tab=readme-ov-file#inference).
 
-# Evaluation
+---
 
-| Metric | s1-32B | s1.1-32B | o1-preview | o1 | DeepSeek-R1 | DeepSeek-R1-Distill-Qwen-32B |
-|---|---|---|---|---|---|---|
-| # examples | 1K | 1K | ? | ? | >800K | 800K |
-| AIME2024 | 56.7 | 56.7 | 40.0 | 74.4 | 79.8 | 72.6 |
-| AIME2025 I | 26.7 | 60.0 | 37.5 | ? | 65.0 | 46.1 |
-| MATH500 | 93.0 | 95.4 | 81.4 | 94.8 | 97.3 | 94.3 |
-| GPQA-Diamond | 59.6 | 63.6 | 75.2 | 77.3 | 71.5 | 62.1 |
-
-Note that s1-32B and s1.1-32B use budget forcing in this table; specifically ignoring end-of-thinking and appending "Wait" up to four times.
+Note that s1-32B and s1.1-32B use budget forcing in this table; specifically ignoring end-of-thinking and appending "Wait" up to four times.
+
+Model is trained with block_size 20000
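
The budget-forcing note kept in the diff describes a concrete decoding trick: when the model tries to end its reasoning, the end-of-thinking marker is suppressed and "Wait" is appended so it keeps thinking, up to four times. Below is a minimal sketch of that idea; the checkpoint name and the `END_OF_THINKING` marker string are illustrative assumptions, not details from this commit, and the linked s1 repository has the authoritative implementation.

```python
# Minimal sketch of budget forcing, assuming a transformers checkpoint and an
# illustrative end-of-thinking marker (both are assumptions, not taken from
# this commit).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "simplescaling/s1.1-32B"        # assumption: an s1-style checkpoint
END_OF_THINKING = "<|im_start|>answer"  # assumption: marker closing the reasoning block
MAX_WAITS = 4                           # "appending 'Wait' up to four times"

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

def generate_with_budget_forcing(prompt: str, max_new_tokens: int = 4096) -> str:
    text = prompt
    for _ in range(MAX_WAITS):
        inputs = tok(text, return_tensors="pt").to(model.device)
        out = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            tokenizer=tok,                   # required for stop_strings matching
            stop_strings=[END_OF_THINKING],  # halt when reasoning tries to end
        )
        text = tok.decode(out[0], skip_special_tokens=False)
        head, sep, _ = text.partition(END_OF_THINKING)
        if not sep:
            break  # token budget ran out before the marker appeared
        # Ignore the end-of-thinking marker and force further reasoning.
        text = head + "Wait"
    # Final pass: let the model close its reasoning and produce the answer.
    inputs = tok(text, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tok.decode(out[0], skip_special_tokens=False)
```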
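
The last added line records the training sequence length. As a minimal illustration of what a block_size of 20000 means in practice, the sketch below caps each tokenized training example at that many tokens; the helper and the choice of the base model's tokenizer (taken from the YAML header above) are assumptions, not the repository's actual preprocessing code.

```python
# Hedged illustration of the "block_size 20000" note: training examples are
# tokenized and capped at 20,000 tokens. This helper is an assumption for
# illustration only.
from transformers import AutoTokenizer

BLOCK_SIZE = 20000
# Base model named in the YAML header of this commit.
tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

def truncate_to_block(text: str) -> list[int]:
    # Cap one training example at the run's block size.
    return tok(text, truncation=True, max_length=BLOCK_SIZE)["input_ids"]
```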