---
license: apache-2.0
tags:
- OpenAccess AI Collective
- MPT
- axolotl
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- camel-ai/math
- camel-ai/biology
- camel-ai/physics
- camel-ai/chemistry
- winglian/evals
inference: false
---

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**

# Due to a bug, the first version dropped a few datasets during training. We have corrected the issue and retrained the model.

# Minotaur 13B (FIXED)

Minotaur 13B is an instruct fine-tuned model built on top of LLaMA-13B. Minotaur 13B is fine-tuned **only on completely open datasets**, making this model reproducible by anyone.

Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org).

# Prompts
Chat-only style prompts using `USER:` and `ASSISTANT:`.

<img src="https://huggingface.co/openaccess-ai-collective/minotaur-13b/resolve/main/minotaur.png" alt="minotaur" width="600" height="500"/>

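The chat-style prompt format above can be sketched as a small helper. Note the exact whitespace and turn separators below are assumptions; only the `USER:`/`ASSISTANT:` labels come from this model card.

```python
def build_prompt(history, user_message):
    """Build a USER:/ASSISTANT: chat-style prompt.

    `history` is a list of (user, assistant) turn pairs. The newline
    layout here is an assumption; only the USER:/ASSISTANT: labels
    are stated in the model card.
    """
    parts = []
    for user, assistant in history:
        parts.append(f"USER: {user}\nASSISTANT: {assistant}")
    # End with a bare ASSISTANT: so the model completes the next turn.
    parts.append(f"USER: {user_message}\nASSISTANT:")
    return "\n".join(parts)


prompt = build_prompt([("What is 2+2?", "4")], "And 3+3?")
```

The model's generation is then appended after the trailing `ASSISTANT:`.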
# Training Datasets

The Minotaur 13B model is fine-tuned on the following openly available datasets:

- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
- [camel-ai/math](https://huggingface.co/datasets/camel-ai/math)
- [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics)
- [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry)
- [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology)
- [winglian/evals](https://huggingface.co/datasets/winglian/evals) - instruct augmented datasets
- custom synthetic datasets around misconceptions, in-context QA, jokes, N-task problems, and context-insensitivity
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 30K+ rows, instruct augmented for detailed explanations, derived from the `train` split
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented, derived from the `train` split
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented, derived from the `train` split
- prose generation

# Shoutouts

Special thanks to Nanobit for helping with Axolotl, and to TheBloke for quantizing these models so they are more accessible to all.

# Demo

An HF demo in Spaces is available in the [Community ChatBot Arena](https://huggingface.co/spaces/openaccess-ai-collective/rlhf-arena) under the OAAIC Chatbots tab.

## Release Notes

- https://wandb.ai/wing-lian/minotaur-13b/runs/5ystr7w6/workspace

## Build

Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 6 x A100 80GB
- 1 epoch taking approximately 7.5 hours

## Bias, Risks, and Limitations

Minotaur has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Minotaur was fine-tuned from the base model LLaMA-13B; please refer to its model card's Limitations section for relevant information (included below).

## Benchmarks

TBD

## Examples

Results may vary based on temperature and other sampling settings.

TBD