---
language:
- en
license: mit
library_name: transformers
datasets:
- TIGER-Lab/WebInstructSub
metrics:
- accuracy
model-index:
- name: MAmmoTH2-7B-Plus
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 55.75
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=TIGER-Lab/MAmmoTH2-7B-Plus
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 18.93
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=TIGER-Lab/MAmmoTH2-7B-Plus
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 16.09
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=TIGER-Lab/MAmmoTH2-7B-Plus
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 4.03
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=TIGER-Lab/MAmmoTH2-7B-Plus
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 10.11
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=TIGER-Lab/MAmmoTH2-7B-Plus
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 22.41
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=TIGER-Lab/MAmmoTH2-7B-Plus
      name: Open LLM Leaderboard
---
# 🦣 MAmmoTH2: Scaling Instructions from the Web

Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/)

Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548)

Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2)


## Introduction
We introduce 🦣 MAmmoTH2, which improves the reasoning abilities of large language models (LLMs) through instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we build MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For example, MAmmoTH2-7B (Mistral) improves from 11% to 36.7% on MATH and from 36% to 68.4% on GSM8K, without training on any in-domain data. Further training on public instruction-tuning datasets yields MAmmoTH2-Plus, which achieves strong results on both reasoning and chatbot benchmarks. Our work demonstrates a cost-effective way to acquire large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities.

|      | **Base Model** | **MAmmoTH2**                                                 | **MAmmoTH2-Plus**                                                  |
|:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------|
| 7B   | Mistral              | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B)      | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus)     |
| 8B   | Llama-3             | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B)      | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus)     |
| 8x7B | Mixtral              | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B)  | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) |
## Training Data
Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details.

![Project Framework](webinstruct.png)

## Training Procedure
The models are fine-tuned on the WebInstruct dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies by model size; see our paper for details.

## Evaluation
The models are evaluated on open-ended and multiple-choice math and reasoning problems from several datasets. Here are the results:

| **Model**                              | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** |
|:---------------------------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:--------|
| **MAmmoTH2-7B** (Updated)              | 29.0          | 36.7     | 68.4      | 32.4     | 62.4        | 58.6    | 81.7      | 52.7    |
| **MAmmoTH2-8B** (Updated)              | 30.3          | 35.8     | 70.4      | 35.2     | 64.2        | 62.1    | 82.2      | 54.3    |
| **MAmmoTH2-8x7B**                      | 32.2          | 39.0     | 75.4      | 36.8     | 67.4        | 71.1    | 87.5      | 58.9    |
| **MAmmoTH2-7B-Plus** (Updated)         | 31.2          | 46.0     | 84.6      | 33.8     | 63.8        | 63.3    | 84.4      | 58.1    |
| **MAmmoTH2-8B-Plus** (Updated)         | 31.5          | 43.0     | 85.2      | 35.8     | 66.7        | 69.7    | 84.3      | 59.4    |
| **MAmmoTH2-8x7B-Plus**                 | 34.1          | 47.0     | 86.4      | 37.8     | 72.4        | 74.1    | 88.4      | 62.9    |

To reproduce our results, please refer to https://github.com/TIGER-AI-Lab/MAmmoTH2/tree/main/math_eval.


## Usage
You can use the models through Hugging Face's Transformers library: create a text-generation pipeline with the model of your choice, then feed it a math problem to get a solution.
Check our GitHub repo for more advanced usage: https://github.com/TIGER-AI-Lab/MAmmoTH2
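
The pipeline usage above can be sketched as follows. This is a minimal example, not the official inference script; the model id comes from this card, while the prompt, `max_new_tokens` value, and `device_map="auto"` setting are illustrative assumptions.

```python
MODEL_ID = "TIGER-Lab/MAmmoTH2-7B-Plus"


def build_messages(problem: str) -> list[dict]:
    """Wrap a math problem in the chat-message format the pipeline expects."""
    return [{"role": "user", "content": problem}]


def solve(problem: str, max_new_tokens: int = 512) -> str:
    # transformers (and accelerate, for device_map="auto") are only
    # needed at generation time.
    from transformers import pipeline

    generator = pipeline("text-generation", model=MODEL_ID, device_map="auto")
    # Passing a list of chat messages makes the pipeline apply the
    # model's chat template automatically.
    outputs = generator(build_messages(problem), max_new_tokens=max_new_tokens)
    # The pipeline returns the full conversation; the last message is
    # the model's reply.
    return outputs[0]["generated_text"][-1]["content"]


if __name__ == "__main__":
    print(solve("What is the derivative of x^2 * sin(x)?"))
```

Loading a 7B model requires a GPU with sufficient memory (or CPU offloading); adjust `device_map` and dtype settings to your hardware.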

## Limitations
We have tried our best to build math generalist models. However, the models' performance may vary with the complexity and specifics of a given problem, and not all mathematical fields are covered comprehensively.


## Citation
If you use the models, data, or code from this project, please cite the original paper:

```
@article{yue2024mammoth2,
  title={MAmmoTH2: Scaling Instructions from the Web},
  author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu},
  journal={arXiv preprint arXiv:2405.03548},
  year={2024}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TIGER-Lab__MAmmoTH2-7B-Plus)

|      Metric       |Value|
|-------------------|----:|
|Avg.               |21.22|
|IFEval (0-Shot)    |55.75|
|BBH (3-Shot)       |18.93|
|MATH Lvl 5 (4-Shot)|16.09|
|GPQA (0-shot)      | 4.03|
|MuSR (0-shot)      |10.11|
|MMLU-PRO (5-shot)  |22.41|