---
datasets:
- stingning/ultrachat
---
# UltraLM-65b

<!-- Provide a quick summary of what the model is/does. -->

These are the delta weights of UltraLM-65b, a chat language model trained on [UltraChat](https://github.com/thunlp/UltraChat).


## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

The model is fine-tuned from LLaMA-65b using the multi-turn chat template below:

```
User: instruction 1
Assistant: response 1<eos_token>
User: instruction 2
Assistant: response 2<eos_token>
...
```
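For illustration only (this helper is not part of the official repository), a conversation history could be serialized into this template with a small function like the one below; the assumption here is that the `<eos_token>` placeholder corresponds to the LLaMA tokenizer's end-of-sequence string (`</s>`):

```python
# Hypothetical helper for building a prompt in the UltraLM multi-turn template.
# Assumption: the <eos_token> placeholder maps to LLaMA's "</s>".
def build_prompt(turns, eos_token="</s>"):
    """Serialize (instruction, response) pairs into the multi-turn template."""
    parts = []
    for instruction, response in turns:
        parts.append(f"User: {instruction}\n")
        parts.append(f"Assistant: {response}{eos_token}\n")
    return "".join(parts)

history = [
    ("What is UltraChat?", "UltraChat is a large-scale multi-turn dialogue dataset."),
]
# Append the next user turn and leave the assistant turn open for generation.
prompt = build_prompt(history) + "User: Summarize it in one sentence.\nAssistant: "
print(prompt)
```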

- **License:** UltraLM is based on LLaMA and should be used under LLaMA's [model license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).
- **Finetuned from model:** LLaMA-65b
- **Finetuned on data:** [UltraChat](https://github.com/thunlp/UltraChat)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** [UltraChat](https://github.com/thunlp/UltraChat)
- **Paper:** [arxiv](https://arxiv.org/abs/2305.14233)
- **Demo:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
To use this model, first [recover](https://github.com/thunlp/UltraChat/tree/main/UltraLM) the full model from the delta weights, then run inference with the following template:

```
[Optional]User: system prompt
User: user input
Assistant: 
```
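The official recovery script in the UltraChat repository should be preferred; the sketch below only illustrates the general idea (full weights = base LLaMA-65b weights + delta weights) using `transformers`. The model paths are placeholder assumptions, and details such as tokenizer or vocabulary resizing are omitted.

```python
# Minimal sketch of delta-weight recovery and inference, assuming local copies of
# the LLaMA-65b base and the UltraLM-65b delta weights. Not the official script.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("path/to/llama-65b", torch_dtype=torch.float16)
delta = AutoModelForCausalLM.from_pretrained("openbmb/UltraLM-65b", torch_dtype=torch.float16)

# Add the delta weights onto the base weights, parameter by parameter.
delta_state = delta.state_dict()
with torch.no_grad():
    for name, param in base.named_parameters():
        param.add_(delta_state[name])

tokenizer = AutoTokenizer.from_pretrained("openbmb/UltraLM-65b")
prompt = "User: What does UltraLM stand for?\nAssistant: "
inputs = tokenizer(prompt, return_tensors="pt")
outputs = base.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Note that holding two copies of a 65B-parameter model in memory at once requires substantial RAM; the official recovery script is the safer path for production use.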
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_openbmb__UltraLM-65b).

| Metric              | Value |
|---------------------|-------|
| Avg.                | 58.99 |
| ARC (25-shot)       | 67.06 |
| HellaSwag (10-shot) | 84.98 |
| MMLU (5-shot)       | 63.48 |
| TruthfulQA (0-shot) | 53.51 |
| Winogrande (5-shot) | 81.14 |
| GSM8K (5-shot)      | 32.75 |
| DROP (3-shot)       | 30.0  |