---
base_model:
- jondurbin/bagel-dpo-34b-v0.2
- jondurbin/nontoxic-bagel-34b-v0.2
tags:
- mergekit
- merge
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
---
# yi-bagel-2x34b

Released January 11, 2024

![bagel-burger](bagel-burger.png)

This is a merge of pre-trained language models created with [mergekit](https://github.com/cg123/mergekit). For more information, refer to the jondurbin model cards linked in the section below. This model debuted at rank #4 on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) (January 11, 2024).

## Merge Details
### Merge Method

This model is an experimental merge using the [linear](https://arxiv.org/abs/2203.05482) merge method. The goal is to assess the degree to which DPO training, as used in [jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2), affects the model's censoring behavior.
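
Conceptually, a linear merge is a weighted average of corresponding parameter tensors from the source models. The sketch below illustrates the idea in plain PyTorch; it is a simplified illustration, not mergekit's actual implementation, which also handles sharded checkpoints, dtype casting, and tokenizer handling. The `linear_merge` helper and its signature are hypothetical.

```python
import torch

def linear_merge(state_dicts, weights):
    """Hypothetical helper: weighted average of matching tensors."""
    merged = {}
    for name, tensor in state_dicts[0].items():
        # Accumulate in float32 for numerical stability, then cast back.
        acc = sum(w * sd[name].to(torch.float32)
                  for w, sd in zip(weights, state_dicts))
        merged[name] = acc.to(tensor.dtype)
    return merged

# With weight: 0.5 for each model, as in the configuration below:
# merged = linear_merge([sd_nontoxic, sd_dpo], [0.5, 0.5])
```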

### Models Merged

The following models were included in the merge:
* [jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2)
* [jondurbin/nontoxic-bagel-34b-v0.2](https://huggingface.co/jondurbin/nontoxic-bagel-34b-v0.2)

## Open LLM Leaderboard Metrics (as of January 11, 2024)
| Metric                | Value |
|-----------------------|-------|
| MMLU (5-shot)         | 76.60  |
| ARC (25-shot)         | 72.70  |
| HellaSwag (10-shot)   | 85.44  |
| TruthfulQA (0-shot)   | 71.42  |
| Winogrande (5-shot)   | 82.72  |
| GSM8K (5-shot)        | 60.73  |
| Average               | 74.93  |
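
As a quick sanity check, the leaderboard's "Average" is simply the unweighted mean of the six benchmark scores (a hypothetical snippet, with the values copied from the table above):

```python
scores = [76.60, 72.70, 85.44, 71.42, 82.72, 60.73]
print(sum(scores) / len(scores))  # ≈ 74.935, reported as 74.93
```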

According to the leaderboard description, here are the benchmarks used for the evaluation:
- [MMLU](https://arxiv.org/abs/2009.03300) (5-shot) - a test to measure a text model’s multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.
- [AI2 Reasoning Challenge (ARC)](https://arxiv.org/abs/1803.05457) (25-shot) - a set of grade-school science questions.
- [HellaSwag](https://arxiv.org/abs/1905.07830) (10-shot) - a test of commonsense inference, which is easy for humans (~95%) but challenging for SOTA models.
- [TruthfulQA](https://arxiv.org/abs/2109.07958) (0-shot) - a test to measure a model’s propensity to reproduce falsehoods commonly found online.
- [Winogrande](https://arxiv.org/abs/1907.10641) (5-shot) - an adversarial and difficult Winograd benchmark at scale, for commonsense reasoning.
- [GSM8k](https://arxiv.org/abs/2110.14168) (5-shot) - diverse grade school math word problems to measure a model's ability to solve multi-step mathematical reasoning problems.

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: jondurbin/nontoxic-bagel-34b-v0.2
    parameters:
      weight: 0.5
  - model: jondurbin/bagel-dpo-34b-v0.2
    parameters:
      weight: 0.5
merge_method: linear
dtype: float16
```
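
As a hedged usage sketch, the merge can be reproduced with mergekit's `mergekit-yaml` entry point, and the result loads like any other causal LM in transformers. The local path below is an illustrative placeholder.

```python
# Reproduce the merge (shell), assuming the YAML above is saved as config.yml:
#   pip install mergekit
#   mergekit-yaml config.yml ./yi-bagel-2x34b
#
# Load the merged checkpoint with transformers:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./yi-bagel-2x34b"  # placeholder: merge output dir or HF repo id
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # matches the merge dtype above
    device_map="auto",
)

inputs = tokenizer("What is a model merge?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```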