---
license: apache-2.0
tags:
- OpenAccess AI Collective
- MPT
- axolotl
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- camel-ai/math
- camel-ai/biology
- camel-ai/physics
- camel-ai/chemistry
- winglian/evals
inference: false
---
💵 Donate to OpenAccess AI Collective to help us keep building great tools and models!
Due to a bug, the first version dropped a few datasets during training. We've corrected the issue and retrained the model.
# Minotaur 13B (FIXED)
Minotaur 13B is an instruct fine-tuned model built on top of LLaMA-13B. It is fine-tuned using only completely open datasets, making the model reproducible by anyone.
Questions, comments, feedback, looking to donate, or want to help? Reach out on our Discord or email wing@openaccessaicollective.org
## Prompts

Chat-only style prompts using `USER:`, `ASSISTANT:`.
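As a minimal sketch, the chat-style template can be assembled like this (the exact whitespace between turns is an assumption; adjust it to match your generation results):

```python
def build_prompt(user_message: str) -> str:
    """Assemble a chat-style prompt; the model completes text after the ASSISTANT: cue."""
    # Single-space separation between turns is assumed, not confirmed by the model card.
    return f"USER: {user_message} ASSISTANT:"

print(build_prompt("What is the capital of France?"))
# USER: What is the capital of France? ASSISTANT:
```

The generated completion is whatever the model produces after the trailing `ASSISTANT:` cue.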
## Training Datasets
The Minotaur 13B model is fine-tuned on the following openly available datasets:
- WizardLM
- subset of QingyiSi/Alpaca-CoT for roleplay and CoT
- GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only - instruct for concise responses
- openai/summarize_from_feedback - instruct augmented tl;dr summarization
- camel-ai/math
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- winglian/evals - instruct augmented datasets
- custom synthetic datasets around misconceptions, in-context QA, jokes, N-tasks problems, and context-insensitivity
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- hellaswag - instruct augmented for detailed explanations, 30K+ rows, derived from the `train` split
- riddle_sense - instruct augmented, derived from the `train` split
- gsm8k - instruct augmented, derived from the `train` split
- prose generation
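To illustrate the "instruct augmented" pattern used for several of the datasets above, here is a hypothetical sketch; the `augment_row` helper and the instruction wording are assumptions for illustration, not the collective's actual pipeline:

```python
def augment_row(question: str, answer: str) -> dict:
    """Hypothetical sketch: wrap a raw QA row (e.g. from gsm8k) into an
    instruction/response pair suitable for instruct fine-tuning."""
    # The instruction phrasing is an assumption; real augmentation may vary per dataset.
    instruction = (
        "Solve the following problem, explaining your reasoning step by step.\n\n"
        + question
    )
    return {"instruction": instruction, "response": answer}

row = augment_row("If 3 pens cost $6, how much do 5 pens cost?", "$10")
```

Pairs like `row` can then be rendered into the USER:/ASSISTANT: chat format at training time.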
## Shoutouts

Special thanks to Nanobit for helping with Axolotl, and to TheBloke for quantizing these models to make them more accessible to all.
## Demo
HF Demo in Spaces available in the Community ChatBot Arena under the OAAIC Chatbots tab.
## Release Notes

## Build

Minotaur was built with Axolotl on 6 x A100 80GB
- 1 epoch, taking approximately 7.5 hours
## Bias, Risks, and Limitations

Minotaur has not been aligned to human preferences with techniques like RLHF, nor deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Minotaur was fine-tuned from the base model LLaMA-13B; please refer to its model card's Limitations Section for relevant information (included below).
## Benchmarks
TBD
## Examples - results may vary based on temperature and other settings
TBD