---
license: apache-2.0
datasets:
- openbmb/UltraInteract_sft
- openbmb/UltraInteract_pair
- openbmb/UltraFeedback
tags:
- reasoning
- preference_learning
- nca
pipeline_tag: text-generation
---

<div align="center">

<img src="https://huggingface.co/openbmb/Eurus-7b-sft/resolve/main/figures/Eurus-logo.png" width="200px">

**Eurus: A suite of open-source LLMs optimized for reasoning**

<p align="center">
  <a href="#introduction">Introduction</a> •
  <a href="#evaluation">Evaluation</a>
</p>

</div>

# Links

- 📜 [Paper](https://arxiv.org/abs/2404.02078)
- 🤗 [Eurus Collection](https://huggingface.co/collections/openbmb/eurus-660bc40bec5376b3adc9d1c5)
- 🤗 UltraInteract
  - [SFT](https://huggingface.co/datasets/openbmb/UltraInteract_sft)
  - [Preference Learning](https://huggingface.co/datasets/openbmb/UltraInteract_pair)
- [GitHub Repo](https://github.com/OpenBMB/Eurus)

# Introduction

Eurux-8x22B-KTO is fine-tuned from [Mixtral-8x22B](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) with SFT and [KTO](https://arxiv.org/abs/2402.01306) on all multi-turn trajectory pairs in [UltraInteract](https://huggingface.co/openbmb/UltraInteract) and all pairs in [UltraFeedback](https://huggingface.co/openbmb/UltraFeedback).

It achieves superb reasoning performance as well as excellent chat and instruction-following capabilities.

## Evaluation

We benchmarked overall coding, math, reasoning, knowledge, instruction-following, and chat ability. Results are shown below, with the best scores among open-source models **bolded**:

| Models/Benchmarks | Coding    |           |           | Math      |           |           | Reasoning | Knowledge | Ins-Following | Chat      |
|-------------------|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:-------------:|:---------:|
|                   | HumanEval | MBPP      | LeetCode  | GSMPLUS   | MATH      | TheoremQA | BBH (CoT) | MMLU      | IFEval        | MT-Bench  |
| GPT-3.5-Turbo     | 76.8      | 82.5      | 23.3      | 61.2      | 37.8      | 35.6      | 70.1      | 70.0      | 56.6          | 7.94      |
| GPT-4             | 85.4      | 83.5      | 41.8      | 85.6      | 69.7      | 52.4      | 86.7      | 86.4      | 79.7          | 8.96      |
| Mixtral-8x7B-Ins  | 50.6      | 50.1      | 5.6       | 49.6      | 25.9      | 20.4      | 73.5      | 70.3      | 48.8          | 8.30      |
| DS-LM-67B-Chat    | 70.7      | 65.7      | 20.0      | 65.0      | 41.0      | 17.9      | 78.9      | 72.3      | 52.7          | 8.35      |
| QWen-1.5-72B      | 71.3      | 56.9      | 15.6      | 65.4      | 43.4      | 18.5      | 78.0      | 72.9      | 53.4          | **8.61**  |
| Eurus-70b-NCA     | **79.3**  | **71.9**  | 33.3      | 62.8      | 41.7      | 32.6      | 80.0      | 59.4      | 49.2          | 7.54      |
| Eurux-8x22b-KTO   | 71.3      | 68.9      | 29.4      | **68.3**  | 48.4      | 35.3      | **83.6**  | **75.9**  | **67.1**      | 8.58      |
| Eurux-8x22b-NCA   | 75.0      | 69.7      | **35.0**  | 68.1      | **49.0**  | **35.5**  | 83.5      | 75.6      | **67.1**      | 8.46      |

## Usage

```python
# pip install 'transformers>=4.39.3'
# pip install accelerate

import torch
from transformers import pipeline

# Load the model in bfloat16, sharded across available GPUs.
pipe = pipeline(
    "text-generation",
    model="openbmb/Eurux-8x22b-kto",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Chat-format input: the pipeline applies the model's chat template automatically.
messages = [
    {"role": "user", "content": "What does Eurus mean?"},
]
outputs = pipe(
    messages,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
# The pipeline returns the whole conversation; the last message is the assistant's reply.
print(outputs[0]["generated_text"][-1]["content"])
```
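
If you prefer to work with the model and tokenizer objects directly instead of the `pipeline` helper, a minimal sketch is shown below. It assumes the repository's tokenizer bundles a Mistral-style chat template (consistent with the `[INST] ... [/INST]` prompts listed further down); treat it as a starting point rather than the canonical recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openbmb/Eurux-8x22b-kto"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",           # shard the MoE weights across available GPUs
    torch_dtype=torch.bfloat16,  # bf16 roughly halves memory relative to fp32
)

messages = [{"role": "user", "content": "What does Eurus mean?"}]
# Build the prompt from the chat messages (assumes a chat template ships with the tokenizer).
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```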

We apply tailored prompts for coding and math, consistent with UltraInteract data formats:

**Coding**

```
[INST] Write Python code to solve the task:
{Instruction} [/INST]
```

**Math-CoT**

```
[INST] Solve the following math problem step-by-step.
Simplify your answer as much as possible. Present your final answer as \\boxed{Your Answer}.
{Instruction} [/INST]
```

**Math-PoT**

```
[INST] Tool available:
[1] Python interpreter
When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment.
Solve the following math problem step-by-step.
Simplify your answer as much as possible.
{Instruction} [/INST]
```
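
Each template is a plain string with an `{Instruction}` slot. As an illustrative sketch (reusing the `pipe` object from the Usage section above; the example question and decoding settings are our own, not part of the original recipe), the Math-CoT template could be filled and run like this:

```python
# Illustrative only: the question and greedy-decoding settings below are our own choices.
question = "What is the remainder when 2^20 is divided by 7?"

# Fill the Math-CoT template shown above; match the exact \boxed escaping your data pipeline uses.
prompt = (
    "[INST] Solve the following math problem step-by-step.\n"
    "Simplify your answer as much as possible. "
    "Present your final answer as \\boxed{Your Answer}.\n"
    f"{question} [/INST]"
)

# Plain-string input bypasses the chat template; return_full_text=False drops the prompt from the output.
outputs = pipe(prompt, max_new_tokens=512, do_sample=False, return_full_text=False)
print(outputs[0]["generated_text"])
```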

## Citation

```
@misc{yuan2024advancing,
      title={Advancing LLM Reasoning Generalists with Preference Trees},
      author={Lifan Yuan and Ganqu Cui and Hanbin Wang and Ning Ding and Xingyao Wang and Jia Deng and Boji Shan and Huimin Chen and Ruobing Xie and Yankai Lin and Zhenghao Liu and Bowen Zhou and Hao Peng and Zhiyuan Liu and Maosong Sun},
      year={2024},
      eprint={2404.02078},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```