MetaStoneTec committed on
Commit e9b0215 · verified · 1 Parent(s): 086a883

Upload 2 files

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +83 -3
  3. introduction.png +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ introduction.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,83 @@
- ---
- license: apache-2.0
- ---
+ ## Introduction
+ MetaStone-L1 is the lite reasoning model of the MetaStone series, designed to enhance performance on hard downstream tasks.
+
+ On core reasoning benchmarks covering mathematics and code, MetaStone-L1-7B achieves SOTA results among models at the same scale, and its results are comparable to those of API models such as Claude-3.5-Sonnet-1022 and GPT4o-0513.
+ <img src="./introduction.png" alt="Logo" width="800">
+
+ This repo contains the MetaStone-L1-7B model, which is trained from DeepSeek-R1-Distill-Qwen-7B using GRPO. For full details of this model, please refer to our release blog.
+
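+ For readers unfamiliar with GRPO, the sketch below shows the group-relative advantage computation at its core: for each prompt, a group of responses is sampled and every response's reward is normalized against the group's mean and standard deviation. This is only an illustrative sketch with a placeholder reward list, not the actual training code of MetaStone-L1-7B.
+ ```python
+ import numpy as np
+
+ def group_relative_advantages(rewards):
+     """GRPO-style advantages: normalize each reward within its sampled group."""
+     rewards = np.asarray(rewards, dtype=np.float32)
+     return (rewards - rewards.mean()) / (rewards.std() + 1e-6)
+
+ # Example: rewards of 4 responses sampled for the same prompt (placeholder values)
+ print(group_relative_advantages([1.0, 0.0, 1.0, 0.0]))
+ ```
+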
+ ## Requirements
+ We advise you to use the latest version of transformers (```transformers==4.48.3```). For the best experience, please review the [Usage Guidelines](#usage-guidelines).
+
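+ As a quick sanity check, the advised version can be installed with ```pip install transformers==4.48.3``` and verified as follows:
+ ```python
+ import transformers
+
+ # The version pin advised in this model card
+ print(transformers.__version__)  # expected: 4.48.3
+ ```
+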
+ ## Quickstart
+ Here is an example of how to use our model.
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "MetaStoneTec/MetaStone-L1-7B"
+
+ # Load the tokenizer and model
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name)
+
+ messages = [
+     {"role": "user", "content": "Complete the square for the following quadratic: $-x^2+7 x-11$\n\nPlease reason step by step, and put your final answer within \\boxed{}."}
+ ]
+
+ # Build the chat-format prompt; add_generation_prompt=True automatically appends the
+ # assistant prefix so the model starts with its think content (see the Usage Guidelines below)
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+ generated_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=32768
+ )
+ # Strip the prompt tokens so only the newly generated tokens are decoded
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ ```
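+
+ The decoded ```response``` contains the model's reasoning followed by its final answer. Assuming the reasoning is closed by a ```</think>``` tag (as in DeepSeek-R1-style outputs), the two parts can be separated with a minimal sketch like the following:
+ ```python
+ # Split the decoded response into reasoning and final answer.
+ # Assumes the reasoning, if any, is terminated by a </think> tag.
+ if "</think>" in response:
+     reasoning, answer = response.split("</think>", 1)
+ else:
+     reasoning, answer = "", response
+
+ print(answer.strip())
+ ```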
+
+ ## Usage Guidelines
+ To achieve optimal performance, we recommend the following settings:
+
+ 1. Enhance the thoughtful output:
+
+ a. Make sure the model starts with ```<think>\n``` to prevent it from generating empty think content. If you use ```apply_chat_template``` and set ```add_generation_prompt=True```, this is handled automatically, but the reply may then not begin with a <think> tag, which is normal.
+
+ b. Ensure the final input to the model is in the format ```<|User|> [your prompt] <|Assistant|><think>``` (see the first sketch after this list).
+
+ 2. Use a temperature of 0.6, a top-p of 0.95, and a maximum generation length of 32k (see the first sketch after this list).
+
+ 3. Standardize the output format: we recommend using hints to standardize model outputs when benchmarking.
+
+ a. Math questions: add the statement ```Please reason step by step, and put your final answer within \\boxed{}.``` to the prompt.
+
+ b. Code problems: add “### Format: Read the inputs from stdin solve the problem and write the answer to stdout. Enclose your code within delimiters as follows.\n ```python\n# YOUR CODE HERE\n```\n### Answer: (use the provided format with backticks)” to the prompt.
+
+ 4. In particular, we use ```latex2sympy2``` and ```sympy``` to assist in judging complex LaTeX formats for the Math500 evaluation script (an illustrative equivalence check is sketched below). For all datasets, we generate 64 responses per query to estimate pass@1.
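+
+ The sketch below combines guidelines 1 and 2: it checks that the templated prompt ends with the expected ```<|Assistant|><think>``` suffix and generates with the recommended sampling settings. It reuses the ```tokenizer```, ```model```, and ```text``` variables from the Quickstart and is an illustration only, not our exact evaluation setup.
+ ```python
+ # Guideline 1b: the templated prompt should end with the assistant prefix and <think>
+ assert text.rstrip().endswith("<think>"), "prompt does not end with <think>"
+
+ # Guideline 2: temperature 0.6, top-p 0.95, max generation length 32k
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+ generated_ids = model.generate(
+     **model_inputs,
+     do_sample=True,
+     temperature=0.6,
+     top_p=0.95,
+     max_new_tokens=32768,
+ )
+ ```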
+
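+ For guideline 4, the sketch below illustrates the kind of LaTeX-aware equivalence check that ```latex2sympy2``` and ```sympy``` enable; it is not the actual Math500 evaluation script, and dataset-specific normalizations are omitted.
+ ```python
+ from latex2sympy2 import latex2sympy
+ from sympy import simplify
+
+ def latex_equal(pred: str, gold: str) -> bool:
+     """Treat two LaTeX answers as equivalent if their difference simplifies to 0."""
+     try:
+         return simplify(latex2sympy(pred) - latex2sympy(gold)) == 0
+     except Exception:
+         # Fall back to exact string comparison if parsing fails
+         return pred.strip() == gold.strip()
+
+ print(latex_equal(r"\frac{1}{2}", "0.5"))  # True
+ ```
+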
+ ## Citation
+ If you find our work helpful, feel free to cite us.
+ ```
+ @misc{MetaStoneL17B,
+   title = {MetastoneL17B},
+   url = {https://huggingface.co/MetaStoneTec/MetaStone-L1-7B},
+   author = {MetaStone Team},
+   month = {March},
+   year = {2025}
+ }
+ ```
+
+ ```
+ @article{wang2024graph,
+   title={A Graph-Based Synthetic Data Pipeline for Scaling High-Quality Reasoning Instructions},
+   author={Wang, Jiankang and Xu, Jianjun and Wang, Xiaorui and Wang, Yuxin and Xing, Mengting and Fang, Shancheng and Chen, Zhineng and Xie, Hongtao and Zhang, Yongdong},
+   journal={arXiv preprint arXiv:2412.08864},
+   year={2024}
+ }
+ ```
+
introduction.png ADDED

Git LFS Details

  • SHA256: e10e3eedae2b1bb22a48320e50aadd80f9971ed66ba3604e29e932aa58c5995a
  • Pointer size: 131 Bytes
  • Size of remote file: 225 kB