---
language:
- en
library_name: transformers
license: other
datasets:
- psmathur/orca_mini_v1_dataset
- ehartford/dolphin
pipeline_tag: text-generation
---

# orca_mini_v3_7b

A Llama2-7b model trained on Orca-style datasets.

<br>

![orca-mini](https://huggingface.co/psmathur/orca_mini_v3_7b/resolve/main/orca_minis_small.jpeg)

<br>

🤔 How good is orca-mini-v3-7b? Do the evaluation results from the HuggingFace Open LLM Leaderboard translate to real-world use cases?

🔍 Now you can figure it out for yourself!

Introducing the orca-mini chatbot, powered by the orca-mini-v3-7b model. Dive in and see how this open-source 7b model stacks up in the world of massive language models. 🌍

⏰ Hurry up before I run out of GPU credits! 😉

Check it out here 👉

[https://huggingface.co/spaces/psmathur/psmathur-orca_mini_v3_7b](https://huggingface.co/spaces/psmathur/psmathur-orca_mini_v3_7b)


<br>

**P.S. If you're interested in collaborating, please connect with me at www.linkedin.com/in/pankajam.**

<br>

### quantized versions

Big thanks to [@TheBloke](https://huggingface.co/TheBloke) for the quantized builds:

1) https://huggingface.co/TheBloke/orca_mini_v3_7B-GGML

2) https://huggingface.co/TheBloke/orca_mini_v3_7B-GPTQ
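
If you want to run the GGML build on CPU, the sketch below uses llama-cpp-python. Both the package and the filename are assumptions of mine, not part of this card; substitute whichever quantized file you actually download from TheBloke's repo.

```python
# a minimal sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python) and a GGML file was downloaded
# from TheBloke/orca_mini_v3_7B-GGML
from llama_cpp import Llama

# hypothetical filename; use the quantization you downloaded
llm = Llama(model_path="orca_mini_v3_7b.ggmlv3.q4_0.bin", n_ctx=2048)

prompt = (
    "### System:\nYou are an AI assistant that follows instruction "
    "extremely well. Help as much as you can.\n\n"
    "### User:\nTell me about Orcas.\n\n### Assistant:\n"
)
output = llm(prompt, max_tokens=256, stop=["### User:"])
print(output["choices"][0]["text"])
```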

<br>

#### license disclaimer:

This model is bound by the license and usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.

<br>

## evaluation

We evaluated orca_mini_v3_7b on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI. 

Here are the results on the metrics used by the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):

|**Task**|**Metric**|**Value**|**Stderr**|
|:------:|:--------:|:-------:|:--------:|
|*arc_challenge*|acc_norm|0.5717|0.0145|
|*hellaswag*|acc_norm|0.7966|0.0043|
|*mmlu*|acc_norm|0.5234|0.035|
|*truthfulqa_mc*|mc2|0.5029|0.0156|
|**Total Average**|-|**0.59865**||
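
If you want to reproduce this style of evaluation, the sketch below shows one way to call the harness from Python. This is an assumption on my part based on the 2023-era harness API (`lm_eval.evaluator.simple_evaluate`); argument names change between harness releases, and the 25-shot setting follows the leaderboard's published ARC configuration rather than anything stated in this card.

```python
# a minimal sketch, assuming the 2023-era lm-evaluation-harness
# Python API; exact signatures vary between harness releases
from lm_eval import evaluator

# the Open LLM Leaderboard evaluates arc_challenge at 25-shot;
# hellaswag, mmlu, and truthfulqa_mc each use their own settings
results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=psmathur/orca_mini_v3_7b",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"])
```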


<br>

## example usage

Here is the prompt format:

```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.

### User:
Tell me about Orcas.

### Assistant:

```

Below is a code example showing how to use this model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("psmathur/orca_mini_v3_7b", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    "psmathur/orca_mini_v3_7b",
    torch_dtype=torch.float16,
    load_in_8bit=True,  # requires the bitsandbytes package
    low_cpu_mem_usage=True,
    device_map="auto",
)

system_prompt = "### System:\nYou are an AI assistant that follows instruction extremely well. Help as much as you can.\n\n"

# build the prompt in the format shown above, then generate
instruction = "Tell me about Orcas."
prompt = f"{system_prompt}### User:\n{instruction}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=4096)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```
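
If you prefer not to use 8-bit loading (which depends on the bitsandbytes package), the transformers `pipeline` helper offers a simpler path. The snippet below is a minimal sketch of my own, not part of the original card; the sampling parameters are illustrative, not recommended settings.

```python
# a minimal sketch using the transformers text-generation pipeline;
# assumes a CUDA GPU with enough memory for fp16 weights
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="psmathur/orca_mini_v3_7b",
    torch_dtype=torch.float16,
    device_map="auto",
)

system_prompt = "### System:\nYou are an AI assistant that follows instruction extremely well. Help as much as you can.\n\n"
prompt = f"{system_prompt}### User:\nTell me about Orcas.\n\n### Assistant:\n"

# illustrative sampling parameters
result = generator(prompt, do_sample=True, top_p=0.95, max_new_tokens=512)
print(result[0]["generated_text"])
```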

<br>

#### limitations & biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results. 

Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. 

Exercise caution and cross-check information when necessary.


<br>

### citation:

Please kindly cite using the following BibTeX:

```
@misc{orca_mini_v3_7b,
  author = {Pankaj Mathur},
  title = {orca_mini_v3_7b: An explain tuned Llama2-7b model},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v3_7b}},
}
```

```
@misc{mukherjee2023orca,
      title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, 
      author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
      year={2023},
      eprint={2306.02707},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```
@software{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pankajmathur__orca_mini_v3_7b)

| Metric                | Value                     |
|-----------------------|---------------------------|
| Avg.                  | 47.98   |
| ARC (25-shot)         | 56.91          |
| HellaSwag (10-shot)   | 79.64    |
| MMLU (5-shot)         | 52.37         |
| TruthfulQA (0-shot)   | 50.51   |
| Winogrande (5-shot)   | 74.27   |
| GSM8K (5-shot)        | 7.13        |
| DROP (3-shot)         | 15.06         |