
Hyperion-3.0-Mistral-7B-DPO

Model Details

  • Model Name: Locutusque/Hyperion-3.0-Mistral-7B-DPO
  • Base Model: mistralai/Mistral-7B-v0.1
  • Publisher: Locutusque
  • Model Type: Question answering, conversational AI, code generation, medical text comprehension, mathematical reasoning, logical reasoning
  • Language: English (multi-domain)
  • License: Apache-2.0

Model Description

Locutusque/Hyperion-3.0-Mistral-7B-DPO is an advanced language model fine-tuned using Direct Preference Optimization (DPO) on a dataset of 20,000 meticulously curated, high-quality preference pairs. The examples were generated by GPT-4 to ensure exceptional quality and relevance. This model is designed to provide superior performance across a wide range of complex tasks, including question answering, conversational AI, code generation, medical text comprehension, mathematical reasoning, and logical reasoning.

Intended Use

This model is intended for researchers, developers, and organizations seeking a highly capable and reliable language model for tackling challenging problems across various domains. Potential use cases include:

  • Intelligent tutoring systems and educational applications in science, medicine, mathematics, and computer science
  • Advanced conversational AI for technical support, customer service, and domain-specific chatbots
  • Code generation and analysis tools for software development and programming assistance
  • Medical text analysis and information retrieval for healthcare professionals and researchers
  • Mathematical problem-solving and logical reasoning applications for academia and industry

Training Data

The Locutusque/Hyperion-3.0-Mistral-7B-DPO model was fine-tuned on a carefully curated dataset of 20,000 preference pairs, of which 4,000 examples were used for the DPO fine-tuning run. These examples were generated by GPT-4 to ensure the highest quality and relevance across various domains, including programming, medical texts, mathematical problems, and reasoning tasks. Direct Preference Optimization (DPO) was then applied to align the model's outputs with human preferences and improve overall performance.
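For background, DPO optimizes the model directly on such preference pairs, without training a separate reward model, by maximizing the margin between chosen and rejected responses relative to a frozen reference model (Rafailov et al., 2023):

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
$$

Here $x$ is a prompt, $y_w$ and $y_l$ are the chosen and rejected responses, $\pi_{\mathrm{ref}}$ is the frozen pre-DPO model, and $\beta$ controls how far the tuned policy $\pi_\theta$ may drift from the reference. This is the standard DPO objective; $\beta$ and the other hyperparameters used for this particular run are not stated on the card.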

Quants

ExLlamaV2: https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-exl2

GGUF: https://huggingface.co/bartowski/Hyperion-3.0-Mistral-7B-DPO-GGUF
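For local inference, the GGUF files can be run with llama-cpp-python. A minimal sketch is below; the quant filename is a placeholder, so pick an actual file from the GGUF repository linked above:

```python
# Minimal sketch: running a GGUF quant with llama-cpp-python.
# "Hyperion-3.0-Mistral-7B-DPO-Q4_K_M.gguf" is a placeholder filename --
# choose a real file from the GGUF repository above.
from llama_cpp import Llama

llm = Llama(
    model_path="Hyperion-3.0-Mistral-7B-DPO-Q4_K_M.gguf",
    n_ctx=4096,  # context window size
)

# The model expects ChatML-style prompts (see "How to Use" below)
prompt = (
    "<|im_start|>user\nWhat is Direct Preference Optimization?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, top_p=0.7, top_k=6, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```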

Evaluation Results

MMLU FLAN CoT (5-shot)

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| mmlu_flan_cot_fewshot | N/A | get-answer | 0 | exact_match | 0.5833 | ± 0.0118 |
| - mmlu_flan_cot_fewshot_humanities | N/A | get-answer | 0 | exact_match | 0.5039 | ± 0.0205 |
| - mmlu_flan_cot_fewshot_formal_logic | 0 | get-answer | 0 | exact_match | 0.2143 | ± 0.1138 |
| - mmlu_flan_cot_fewshot_high_school_european_history | 0 | get-answer | 0 | exact_match | 0.6667 | ± 0.1143 |
| - mmlu_flan_cot_fewshot_high_school_us_history | 0 | get-answer | 0 | exact_match | 0.7727 | ± 0.0914 |
| - mmlu_flan_cot_fewshot_high_school_world_history | 0 | get-answer | 0 | exact_match | 0.5385 | ± 0.0997 |
| - mmlu_flan_cot_fewshot_international_law | 0 | get-answer | 0 | exact_match | 0.9231 | ± 0.0769 |
| - mmlu_flan_cot_fewshot_jurisprudence | 0 | get-answer | 0 | exact_match | 0.5455 | ± 0.1575 |
| - mmlu_flan_cot_fewshot_logical_fallacies | 0 | get-answer | 0 | exact_match | 0.7778 | ± 0.1008 |
| - mmlu_flan_cot_fewshot_moral_disputes | 0 | get-answer | 0 | exact_match | 0.5526 | ± 0.0817 |
| - mmlu_flan_cot_fewshot_moral_scenarios | 0 | get-answer | 0 | exact_match | 0.4000 | ± 0.0492 |
| - mmlu_flan_cot_fewshot_philosophy | 0 | get-answer | 0 | exact_match | 0.7647 | ± 0.0738 |
| - mmlu_flan_cot_fewshot_prehistory | 0 | get-answer | 0 | exact_match | 0.6571 | ± 0.0814 |
| - mmlu_flan_cot_fewshot_professional_law | 0 | get-answer | 0 | exact_match | 0.3294 | ± 0.0362 |
| - mmlu_flan_cot_fewshot_world_religions | 0 | get-answer | 0 | exact_match | 0.8947 | ± 0.0723 |
| - mmlu_flan_cot_fewshot_other | N/A | get-answer | 0 | exact_match | 0.6833 | ± 0.0244 |
| - mmlu_flan_cot_fewshot_business_ethics | 0 | get-answer | 0 | exact_match | 0.9091 | ± 0.0909 |
| - mmlu_flan_cot_fewshot_clinical_knowledge | 0 | get-answer | 0 | exact_match | 0.5862 | ± 0.0931 |
| - mmlu_flan_cot_fewshot_college_medicine | 0 | get-answer | 0 | exact_match | 0.6364 | ± 0.1050 |
| - mmlu_flan_cot_fewshot_global_facts | 0 | get-answer | 0 | exact_match | 0.6000 | ± 0.1633 |
| - mmlu_flan_cot_fewshot_human_aging | 0 | get-answer | 0 | exact_match | 0.6087 | ± 0.1041 |
| - mmlu_flan_cot_fewshot_management | 0 | get-answer | 0 | exact_match | 0.9091 | ± 0.0909 |
| - mmlu_flan_cot_fewshot_marketing | 0 | get-answer | 0 | exact_match | 0.8000 | ± 0.0816 |
| - mmlu_flan_cot_fewshot_medical_genetics | 0 | get-answer | 0 | exact_match | 1.0000 | ± 0.0000 |
| - mmlu_flan_cot_fewshot_miscellaneous | 0 | get-answer | 0 | exact_match | 0.8023 | ± 0.0432 |
| - mmlu_flan_cot_fewshot_nutrition | 0 | get-answer | 0 | exact_match | 0.6667 | ± 0.0833 |
| - mmlu_flan_cot_fewshot_professional_accounting | 0 | get-answer | 0 | exact_match | 0.4839 | ± 0.0912 |
| - mmlu_flan_cot_fewshot_professional_medicine | 0 | get-answer | 0 | exact_match | 0.5806 | ± 0.0901 |
| - mmlu_flan_cot_fewshot_virology | 0 | get-answer | 0 | exact_match | 0.3889 | ± 0.1182 |
| - mmlu_flan_cot_fewshot_social_sciences | N/A | get-answer | 0 | exact_match | 0.7003 | ± 0.0239 |
| - mmlu_flan_cot_fewshot_econometrics | 0 | get-answer | 0 | exact_match | 0.4167 | ± 0.1486 |
| - mmlu_flan_cot_fewshot_high_school_geography | 0 | get-answer | 0 | exact_match | 0.9091 | ± 0.0627 |
| - mmlu_flan_cot_fewshot_high_school_government_and_politics | 0 | get-answer | 0 | exact_match | 0.8095 | ± 0.0878 |
| - mmlu_flan_cot_fewshot_high_school_macroeconomics | 0 | get-answer | 0 | exact_match | 0.6512 | ± 0.0735 |
| - mmlu_flan_cot_fewshot_high_school_microeconomics | 0 | get-answer | 0 | exact_match | 0.5769 | ± 0.0988 |
| - mmlu_flan_cot_fewshot_high_school_psychology | 0 | get-answer | 0 | exact_match | 0.9000 | ± 0.0391 |
| - mmlu_flan_cot_fewshot_human_sexuality | 0 | get-answer | 0 | exact_match | 0.6667 | ± 0.1421 |
| - mmlu_flan_cot_fewshot_professional_psychology | 0 | get-answer | 0 | exact_match | 0.6522 | ± 0.0578 |
| - mmlu_flan_cot_fewshot_public_relations | 0 | get-answer | 0 | exact_match | 0.5833 | ± 0.1486 |
| - mmlu_flan_cot_fewshot_security_studies | 0 | get-answer | 0 | exact_match | 0.4074 | ± 0.0964 |
| - mmlu_flan_cot_fewshot_sociology | 0 | get-answer | 0 | exact_match | 0.8182 | ± 0.0842 |
| - mmlu_flan_cot_fewshot_us_foreign_policy | 0 | get-answer | 0 | exact_match | 0.7273 | ± 0.1408 |
| - mmlu_flan_cot_fewshot_stem | N/A | get-answer | 0 | exact_match | 0.4866 | ± 0.0262 |
| - mmlu_flan_cot_fewshot_abstract_algebra | 0 | get-answer | 0 | exact_match | 0.0909 | ± 0.0909 |
| - mmlu_flan_cot_fewshot_anatomy | 0 | get-answer | 0 | exact_match | 0.4286 | ± 0.1373 |
| - mmlu_flan_cot_fewshot_astronomy | 0 | get-answer | 0 | exact_match | 0.5625 | ± 0.1281 |
| - mmlu_flan_cot_fewshot_college_biology | 0 | get-answer | 0 | exact_match | 0.5000 | ± 0.1291 |
| - mmlu_flan_cot_fewshot_college_chemistry | 0 | get-answer | 0 | exact_match | 0.5000 | ± 0.1890 |
| - mmlu_flan_cot_fewshot_college_computer_science | 0 | get-answer | 0 | exact_match | 0.2727 | ± 0.1408 |
| - mmlu_flan_cot_fewshot_college_mathematics | 0 | get-answer | 0 | exact_match | 0.3636 | ± 0.1521 |
| - mmlu_flan_cot_fewshot_college_physics | 0 | get-answer | 0 | exact_match | 0.3636 | ± 0.1521 |
| - mmlu_flan_cot_fewshot_computer_security | 0 | get-answer | 0 | exact_match | 0.7273 | ± 0.1408 |
| - mmlu_flan_cot_fewshot_conceptual_physics | 0 | get-answer | 0 | exact_match | 0.6538 | ± 0.0951 |
| - mmlu_flan_cot_fewshot_electrical_engineering | 0 | get-answer | 0 | exact_match | 0.7500 | ± 0.1118 |
| - mmlu_flan_cot_fewshot_elementary_mathematics | 0 | get-answer | 0 | exact_match | 0.7317 | ± 0.0701 |
| - mmlu_flan_cot_fewshot_high_school_biology | 0 | get-answer | 0 | exact_match | 0.5938 | ± 0.0882 |
| - mmlu_flan_cot_fewshot_high_school_chemistry | 0 | get-answer | 0 | exact_match | 0.3636 | ± 0.1050 |
| - mmlu_flan_cot_fewshot_high_school_computer_science | 0 | get-answer | 0 | exact_match | 0.5556 | ± 0.1757 |
| - mmlu_flan_cot_fewshot_high_school_mathematics | 0 | get-answer | 0 | exact_match | 0.3103 | ± 0.0874 |
| - mmlu_flan_cot_fewshot_high_school_physics | 0 | get-answer | 0 | exact_match | 0.2353 | ± 0.1060 |
| - mmlu_flan_cot_fewshot_high_school_statistics | 0 | get-answer | 0 | exact_match | 0.3043 | ± 0.0981 |
| - mmlu_flan_cot_fewshot_machine_learning | 0 | get-answer | 0 | exact_match | 0.4545 | ± 0.1575 |

| Groups | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| mmlu_flan_cot_fewshot | N/A | get-answer | 0 | exact_match | 0.5833 | ± 0.0118 |
| - mmlu_flan_cot_fewshot_humanities | N/A | get-answer | 0 | exact_match | 0.5039 | ± 0.0205 |
| - mmlu_flan_cot_fewshot_other | N/A | get-answer | 0 | exact_match | 0.6833 | ± 0.0244 |
| - mmlu_flan_cot_fewshot_social_sciences | N/A | get-answer | 0 | exact_match | 0.7003 | ± 0.0239 |
| - mmlu_flan_cot_fewshot_stem | N/A | get-answer | 0 | exact_match | 0.4866 | ± 0.0262 |
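
These tables follow the output format of EleutherAI's lm-evaluation-harness. A hedged sketch of re-running the evaluation with its Python API (v0.4.x; argument names can differ between versions):

```python
# Hedged sketch: re-running the MMLU FLAN CoT evaluation with
# EleutherAI's lm-evaluation-harness (v0.4.x API; may vary by version).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Locutusque/Hyperion-3.0-Mistral-7B-DPO,dtype=bfloat16",
    tasks=["mmlu_flan_cot_fewshot"],
    batch_size=8,
)
print(results["results"])
```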

How to Use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub
model_name = "Locutusque/Hyperion-3.0-Mistral-7B-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# For a text generation task (the model expects ChatML-style prompts)
input_text = "<|im_start|>user\nExplain the implications of quantum entanglement in layman's terms.<|im_end|>\n<|im_start|>assistant\n"
input_ids = tokenizer.encode(input_text, return_tensors="pt")

# Generate a response
outputs = model.generate(input_ids, max_length=200, do_sample=True, top_p=0.7, top_k=6)  # These are the recommended sample settings.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
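
Since the prompt format is ChatML, the same input can be built with the tokenizer's chat template instead of writing the special tokens by hand, assuming the repo's tokenizer ships a ChatML template (worth verifying before relying on it):

```python
# Alternative prompt construction via the chat template, assuming the
# tokenizer ships one (check tokenizer.chat_template before relying on this).
messages = [
    {"role": "user",
     "content": "Explain the implications of quantum entanglement in layman's terms."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # appends the assistant header
    return_tensors="pt",
)
outputs = model.generate(input_ids, max_length=200, do_sample=True, top_p=0.7, top_k=6)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```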

Known Limitations

While the training data has been carefully curated and optimized, there may still be some inconsistencies or biases present due to the inherent complexity and diversity of the source dataset. Users should be aware of potential limitations and carefully evaluate the model's outputs for their specific use case.

Additionally, this model is highly compliant and will attempt to respond to most requests. For enterprise-level deployment, it is strongly recommended to further fine-tune the model using DPO to align its behavior with specific requirements and constraints.
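
As a starting point for that kind of further alignment, here is a minimal, hedged sketch using the trl library's DPOTrainer; the dataset name is a placeholder, and trl's argument names have shifted between versions:

```python
# Minimal sketch of further DPO fine-tuning with trl.
# "your-org/your-preference-pairs" is a placeholder dataset that must
# provide "prompt", "chosen", and "rejected" columns; trl API details
# (e.g. processing_class vs. tokenizer) vary across versions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Locutusque/Hyperion-3.0-Mistral-7B-DPO"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

train_dataset = load_dataset("your-org/your-preference-pairs", split="train")

args = DPOConfig(
    output_dir="hyperion-dpo-custom",
    beta=0.1,  # strength of the KL constraint toward the reference model
    per_device_train_batch_size=1,
)
trainer = DPOTrainer(
    model=model,
    ref_model=None,  # trl creates a frozen reference copy when None
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```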

Licensing Information

This model is released under the Apache-2.0 license.
