---
license: apache-2.0
datasets:
  - fblgit/simple-math
  - jondurbin/bagel-v0.3
base_model: abacusai/Smaug-34B-v0.1
tags:
  - UNA
  - simple-math
  - juanako
---

# UNA-SimpleSmaug-34b-v1beta

Scoring #1 among 34B models as of 04-February-2024, outperforming its original base model Smaug-34B-v0.1 with an average of 77.41 😎 Oh, and by the way: this one went through SFT, so the abacus inside Smaug is back to normal and you can further train or DPO it. RESET!
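For reference, here is a minimal inference sketch with 🤗 Transformers. The repo id `fblgit/UNA-SimpleSmaug-34b-v1beta` is an assumption based on this card's title; adjust dtype and device settings to your hardware.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fblgit/UNA-SimpleSmaug-34b-v1beta"  # assumed Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 34B parameters: bf16 halves memory vs fp32
    device_map="auto",           # shard across available GPUs
)

prompt = "What is 12 * 17? Explain step by step."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```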

## UNA

Applied UNA only on the Attention, not on the MLPs (an illustrative sketch follows the list below).

- Based on Smaug-34B-v0.1
- Trained on the SimpleMath dataset
- Trained with Axolotl
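The actual UNA implementation is not published in this card, so the following is only an illustrative sketch of the "attention only" restriction: it freezes every parameter except the attention projections, assuming the LLaMA-style module names (`q_proj`, `k_proj`, `v_proj`, `o_proj`) used by Smaug/Yi-34B derivatives.

```python
from transformers import AutoModelForCausalLM

# Illustrative only: this is NOT UNA itself, just a demonstration of
# restricting training to attention modules while freezing the MLPs.
model = AutoModelForCausalLM.from_pretrained("abacusai/Smaug-34B-v0.1")

ATTN_KEYS = ("q_proj", "k_proj", "v_proj", "o_proj")  # LLaMA-style names

for name, param in model.named_parameters():
    # Leave attention projections trainable; freeze everything else.
    param.requires_grad = any(key in name for key in ATTN_KEYS)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable:,} / {total:,}")
```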

## Experiment

The goal here is to understand the impact of SimpleMath applied at the attention layers during an SFT session, and how it affects the neural network overall.

Results: improved mathematical and reasoning capabilities without degradation, preserving the effects of previous training sessions.

## Evals

Full evals are pending, but so far:

| Task          | Version | Metric   |           Value |
|---------------|--------:|----------|----------------:|
| arc_challenge |      HF | acc_norm | 0.7457337883959 |
| gsm8k         |      HF | acc      | 0.7247915087187 |
| mmlu          |      HF | acc      | 0.7649553475572 |
| mmlu          |      HF | acc_norm | 0.7681713551647 |
| hellaswag     |      HF | acc_norm | 0.8673571001792 |
| truthfulqa    |      HF | mc2      | 0.7016557407771 |
| winogrande    |      HF | acc      | 0.8382004735595 |
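Numbers in this style can be reproduced with EleutherAI's lm-evaluation-harness. The sketch below uses its Python API; the task variants, few-shot counts, and harness version behind the table above are not documented here, so these settings are assumptions.

```python
import lm_eval  # EleutherAI lm-evaluation-harness (v0.4+ API)

# Assumed repo id and default few-shot settings; the exact eval
# configuration used for the table above is not documented.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=fblgit/UNA-SimpleSmaug-34b-v1beta,dtype=bfloat16",
    tasks=["arc_challenge", "gsm8k", "hellaswag", "winogrande"],
)
print(results["results"])
```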

GSM8K, MMLU, ARC, and Winogrande scores all increased over the base model.

## Citations

Thanks to abacusai for making Smaug-34B, to jondurbin for the Bagel dataset, and for all the magic behind the base model.

If you use this model, please provide a citation, even for merges or derivative work. And enjoy our ModelSimilarities detector tool: https://github.com/fblgit/model-similarity