---
license: apache-2.0
tags:
- OpenAccess AI Collective
- MPT
- axolotl
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- camel-ai/math
- camel-ai/biology
- camel-ai/physics
- camel-ai/chemistry
- winglian/evals

inference: false
---

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**

# Due to a bug, the first version dropped a few datasets during training. We've corrected the issue and retrained the model.

# Minotaur 13B (FIXED)

Minotaur 13B is an instruct fine-tuned model on top of LLaMA-13B. Minotaur 13B is fine-tuned **on only completely open datasets**, making this model reproducible by anyone.

Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org)

# Prompts
Chat-style prompts only, using `USER:` and `ASSISTANT:` turns.

<img src="https://huggingface.co/openaccess-ai-collective/minotaur-13b/resolve/main/minotaur.png" alt="minotaur" width="600" height="500"/>
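For reference, here is a minimal sketch of using this prompt format with the Hugging Face `transformers` library. The model id matches this repo and the temperature (0.7) mirrors the examples further down; all other settings are illustrative rather than a prescribed configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model from this repo (fp16 + device_map="auto" assumes a GPU and the
# accelerate package; adjust to your hardware).
model_id = "openaccess-ai-collective/minotaur-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Build a chat-style prompt: a short preamble, then USER:/ASSISTANT: turns.
prompt = (
    "The following is a chat between a USER and a friendly and helpful ASSISTANT.\n"
    "USER: Solve for x: 5x = 3x + 10\n"
    "ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,  # matches the temperature used for the examples below
)

# Print only the newly generated assistant turn.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```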

# Training Datasets

Minotaur 13B model is fine-tuned on the following openly available datasets:

- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
- [camel-ai/math](https://huggingface.co/datasets/camel-ai/math)
- [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics)
- [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry)
- [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology)
- [winglian/evals](https://huggingface.co/datasets/winglian/evals) - instruct augmented datasets
  - custom synthetic datasets around misconceptions, in-context qa, jokes, N-tasks problems, and context-insensitivity
  - ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
  - [hellaswag](https://huggingface.co/datasets/hellaswag) - 30K+ rows, instruct augmented for detailed explanations, derived from the `train` split
  - [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented, derived from the `train` split
  - [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented, derived from the `train` split
  - prose generation

# Shoutouts

Special thanks to Nanobit for helping with Axolotl, and to TheBloke for quantizing these models so they are more accessible to all.

# Demo

An HF Spaces demo is available in the [Community ChatBot Arena](https://huggingface.co/spaces/openaccess-ai-collective/rlhf-arena) under the OAAIC Chatbots tab.

## Release Notes

- https://wandb.ai/wing-lian/minotaur-13b/runs/5ystr7w6/workspace

## Build

Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 6 x A100 80GB
 - 1 epoch taking approximately 7.5 hours

## Bias, Risks, and Limitations
Minotaur has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Minotaur was fine-tuned from the base model LLaMA-13B; please refer to its model card's Limitations section for relevant information (included below).

## Benchmarks

TBD

## Examples - results may vary based on temperature (0.7 used here) and other sampling settings

```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: Solve for x: 5x = 3x + 10
ASSISTANT:To solve the equation 5x = 3x + 10, we can first isolate one variable on one side of the equation by moving all the terms with x to one side. 

Subtract 3x from both sides of the equation:
5x - 3x = 10 - 3x
2x = 7

Now, divide both sides of the equation by 2 to find the value of x:
(2x)/2 = 7/2
x = 7/4

So, the solution for x is 7/4.
```
definitely worse at math than v1

```
 The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: You have 3 apples. You eat 2 pears. How many apples do you have left?
ASSISTANT:You still have 3 apples, as eating pears does not affect the number of apples.
```

```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: List three words that start with "ex". What is 33 + 77?
ASSISTANT:Three words that start with "ex" are "express", "exact", and "excite". 33 + 77 = 110.
```

```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: Write a haiku about dinosaurs.
ASSISTANT:The mighty roar echoes,
T-Rex strides with power,
Dinosaurs rule the day.
```
meh, 6/6/6 rather than 5/7/5