stelterlab committed on
Commit
af9e9da
1 Parent(s): ac0f040

Update README.md


added original model card data

Files changed (1)
  1. README.md +106 -0
README.md CHANGED
@@ -136,3 +136,109 @@ if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not
 
  response = generate(model, tokenizer, prompt=prompt, verbose=True)
  ```
+
+ Original weights by VAGOsolutions. The original model card follows:
+
+ ![SauerkrautLM-v2-14b-SFT](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-3.png "SauerkrautLM-v2-14b-SFT")
+ ## VAGO solutions SauerkrautLM-v2-14b-SFT
+
+ **Fine-tuned Model** - *Celebrating one year of SauerkrautLM with our most advanced model yet, showcasing two-phase Spectrum Fine-Tuning*
+
+ Introducing **SauerkrautLM-v2-14b-SFT** – our latest Sauerkraut version based on [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B), celebrating the one-year anniversary of SauerkrautLM!
+
+ - Two-phase Spectrum Fine-Tuning approach
+   - Phase 1: 25% layer targeting with 0.6B tokens
+   - Phase 2: 20% layer targeting with 0.6B tokens
+ - Enhanced mathematical capabilities, function calling, and multilingual performance
+
+ # Table of Contents
+ 1. [Overview of all SauerkrautLM-v2-14b Models](#all-SauerkrautLM-v2-14b)
+ 2. [Model Details](#model-details)
+    - [Training procedure](#training-procedure)
+ 3. [Evaluation](#evaluation)
+ 4. [Disclaimer](#disclaimer)
+ 5. [Contact](#contact)
+ 6. [Collaborations](#collaborations)
+ 7. [Acknowledgement](#acknowledgement)
+
+ ## All SauerkrautLM-v2-14b
+
+ | Model | HF | EXL2 | GGUF | AWQ |
+ |-------|-------|-------|-------|-------|
+ | SauerkrautLM-v2-14b-SFT | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-SFT) | coming soon | coming soon | coming soon |
+ | SauerkrautLM-v2-14b-DPO | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-DPO) | coming soon | coming soon | coming soon |
+
+ ## Model Details
+ **SauerkrautLM-v2-14b-SFT**
+ - **Model Type:** SauerkrautLM-v2-14b-SFT is a fine-tuned model based on [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B)
+ - **Language(s):** German, English
+ - **License:** Apache 2.0
+ - **Contact:** [VAGO solutions](https://vago-solutions.ai)
+
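+ The following is a minimal, hedged sketch of loading the model with `transformers` (the model ID comes from the table above; the prompt and generation settings are illustrative, and we assume the repo ships a chat template, as the snippet at the top of this README already checks for):
+
+ ```python
+ # Minimal sketch: load SauerkrautLM-v2-14b-SFT and run one chat turn.
+ # Generation settings are illustrative, not an official recommendation.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "VAGOsolutions/SauerkrautLM-v2-14b-SFT"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")
+
+ # German prompt ("How many federal states does Germany have?") to exercise
+ # the model's bilingual tuning.
+ messages = [{"role": "user", "content": "Wie viele Bundesländer hat Deutschland?"}]
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+ output = model.generate(inputs, max_new_tokens=256)
+ print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```
+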
+ ## Training Procedure
+
+ This model represents a significant advancement in our fine-tuning methodology, utilizing a two-phase Spectrum Fine-Tuning approach (a simplified sketch of the layer-targeting idea follows the phase breakdown below):
+
+ **Phase 1 (25% Layer Targeting)**:
+ - Training on 0.6B tokens with four distinct components:
+   1. Mathematics data (curated using a proprietary classifier)
+   2. English performance data (from Sauerkraut-v1)
+   3. High-quality German training data (from Sauerkraut-v1)
+   4. Function calling data (from Sauerkraut-v2)
+
+ **Phase 2 (20% Layer Targeting)**:
+ - Training on an additional 0.6B tokens with partial overlap:
+   1. New mathematics data (classifier-selected)
+   2. New English performance data (from Sauerkraut-v2)
+   3. New German training data (from Sauerkraut-v2)
+   4. Function calling data (from Sauerkraut-v2)
+
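+ For intuition, here is a much-simplified sketch of the layer-targeting idea: rank weight matrices by a signal-to-noise proxy and unfreeze only the top fraction before training. The SNR proxy below is illustrative; the actual Spectrum method uses a more careful random-matrix estimate, and `spectrum_freeze` is a hypothetical helper, not the authors' tooling.
+
+ ```python
+ # Illustrative sketch of Spectrum-style layer targeting (not the authors' code):
+ # freeze everything, score each 2-D weight matrix with a crude SNR proxy,
+ # then unfreeze the top `target_fraction` of matrices for fine-tuning.
+ import torch
+
+ def spectrum_freeze(model, target_fraction=0.25):
+     scores = {}
+     for name, param in model.named_parameters():
+         param.requires_grad = False                       # freeze everything first
+         if param.ndim == 2:                               # score weight matrices only
+             s = torch.linalg.svdvals(param.detach().float())
+             scores[name] = (s.max() / s.median()).item()  # crude SNR proxy
+     k = max(1, int(len(scores) * target_fraction))
+     for name in sorted(scores, key=scores.get, reverse=True)[:k]:
+         model.get_parameter(name).requires_grad = True    # unfreeze top fraction
+
+ # Phase 1 would correspond to spectrum_freeze(model, 0.25),
+ # Phase 2 to spectrum_freeze(model, 0.20) before continuing training.
+ ```
+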
+ **Dataset Composition**:
+ - Carefully curated mathematical content using a proprietary classification model
+ - Premium multilingual data from both Sauerkraut-v1 and Sauerkraut-v2
+ - Specialized function calling training data
+ - High-quality German-English content across various domains
+
+ ## Objective and Results
+
+ This release marks the one-year anniversary of SauerkrautLM, showcasing our most advanced training methodology to date. The two-phase Spectrum Fine-Tuning approach allows for more nuanced learning while maintaining efficiency in resource usage. The model demonstrates significant improvements in:
+
+ - Mathematical reasoning capabilities
+ - Function calling proficiency (see the sketch after this list for one way to exercise it)
+ - Multilingual performance
+ - Instruction following
+ - Common-sense reasoning
+
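+ As one way to exercise the function-calling capability, here is a hedged sketch using the `tools` argument of `apply_chat_template` in `transformers`. This assumes the bundled chat template supports tool use, as upstream Qwen2.5 templates do; `get_weather` is a made-up example tool, and `tokenizer` is reused from the loading sketch above.
+
+ ```python
+ # Hedged sketch: build a tool-use prompt. The model is expected to emit a
+ # structured tool call, which application code would then parse and execute.
+ def get_weather(city: str) -> str:
+     """
+     Return the current weather for a city.
+
+     Args:
+         city: Name of the city to look up.
+     """
+     return "sunny, 22 °C"  # stub implementation
+
+ messages = [{"role": "user", "content": "What is the weather in Berlin?"}]
+ prompt = tokenizer.apply_chat_template(
+     messages,
+     tools=[get_weather],          # schema is derived from signature + docstring
+     add_generation_prompt=True,
+     tokenize=False,
+ )
+ ```
+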
+ ## Evaluation
+
+ **AGIEVAL**
+ ![SauerkrautLM-v2-14b-SFT-AGIEVAL](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-AGIEVAL.png "SauerkrautLM-v2-14b-SFT-AGIEVAL")
+
+ **GPT4ALL**
+ ![SauerkrautLM-v2-14b-SFT-GPT4ALL](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-GPT4ALL.png "SauerkrautLM-v2-14b-SFT-GPT4ALL")
+
+ **TRUTHFULQA**
+ ![SauerkrautLM-v2-14b-SFT-TRUTHFULQA](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-TRUTHFULQA.png "SauerkrautLM-v2-14b-SFT-TRUTHFULQA")
+
+ **OPENLEADERBOARD 2**
+ ![SauerkrautLM-v2-14b-SFT-OPENLEADERBOARD](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-OPENLEADERBOARD.png "SauerkrautLM-v2-14b-SFT-OPENLEADERBOARD")
+
+ **MMLU 5-shot**
+ ![SauerkrautLM-v2-14b-SFT-MMLU-5shot](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-MMLU-5shot.png "SauerkrautLM-v2-14b-SFT-MMLU-5shot")
+
+ **Berkeley Function Calling Leaderboard**
+ ![SauerkrautLM-v2-14b-SFT-BERKELEY](https://vago-solutions.ai/wp-content/uploads/2024/11/SauerkrautLM-v2-14b-DPO-BERKELEY.png "SauerkrautLM-v2-14b-SFT-BERKELEY")
+
+ Please note that our benchmark results in absolute numbers may differ from the Hugging Face Leaderboard due to variations in benchmark evaluation pipelines. However, the relative differences remain consistent.
+
+ ## Disclaimer
+ We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out, and we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not responsible for the actions of third parties who utilize our models.
+
+ ## Contact
+ If you are interested in customized LLMs for business applications, please contact us via our website. We are also grateful for your feedback and suggestions.
+
+ ## Collaborations
+ We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.ai).
+
+ ## Acknowledgement
+ Many thanks to [Qwen](https://huggingface.co/Qwen) for providing such a valuable model to the Open-Source community.