
Model Card: MELT-Mixtral-8x7B-Instruct-v0.1

The MELT-Mixtral-8x7B-Instruct-v0.1 Large Language Model (LLM) is a generative text model pre-trained and fine-tuned on publicly available medical data.

MELT-Mixtral-8x7B-Instruct-v0.1 achieves 68.2% average accuracy across three medical examination benchmarks (USMLE, Indian AIIMS, and NEET), surpassing the pass mark (>60%) on U.S. Medical Licensing Examination (USMLE) style questions.

To the best of our knowledge, our model is 6% more accurate than Google's 540-billion-parameter Med-PaLM, a model roughly 10x larger.

Model Details

The Medical Education Language Transformer (MELT) models have been trained on a wide range of text, chat, Q/A, and instruction data in the medical domain.

While the model was evaluated using publicly available USMLE, Indian AIIMS, and NEET example questions, it is intended to be more broadly applicable.


Uses

MELT is intended for research purposes only. MELT models are best suited for prompts using a QA or chat format.

Out-of-Scope Use

MELT is intended for research purposes only and should not be used for medical advice.

Bias, Risks, and Limitations

MELT was trained on publicly available data collections, which likely contain biased and inaccurate information. The training and evaluation datasets have not been evaluated for content or accuracy.

How to Get Started with the Model

Use this model as you would the base Mixtral-8x7B-Instruct-v0.1 model.
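
A minimal loading-and-generation sketch is shown below, assuming the standard Hugging Face transformers API for Mixtral-style instruct models; the prompt content is illustrative only.

```python
# Minimal sketch: load MELT and generate a response using the Mixtral
# [INST] ... [/INST] chat format via the tokenizer's chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IBI-CAAI/MELT-Mixtral-8x7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in bf16
    device_map="auto",
)

# Illustrative research prompt; MELT is not a source of medical advice.
messages = [{"role": "user", "content": "List the diagnostic criteria for iron-deficiency anemia."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```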

Training Details

Training Data

The following datasets were used for training:

  • Expert Med
  • MedQA train
  • MedMCQA train
  • LiveQA
  • MedicationQA
  • MMLU clinical topics
  • Medical Flashcards
  • Wikidoc
  • Wikidoc Patient Information
  • MEDIQA
  • MMMLU
  • icliniq 10k
  • HealthCare Magic 100k
  • GenMedGPT-5k
  • Mental Health Conversational

Training Procedure

Training Hyperparameters

  • LoRA rank: 64
  • LoRA alpha: 16
  • LoRA target modules: "o_proj", "down_proj", "v_proj", "gate_proj", "up_proj", "k_proj", "q_proj"
  • Learning rate: 2e-4
  • Epochs: 3
  • Precision: bf16
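
For reference, these settings map roughly onto the following adapter configuration; this is a sketch assuming the Hugging Face peft library, as the exact training stack is not documented here (learning rate, epochs, and precision belong to the trainer arguments rather than the adapter config).

```python
# Hedged sketch of the LoRA adapter configuration implied by the hyperparameters
# above, assuming the Hugging Face peft library; the actual training code is not
# published with this card.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,            # LoRA rank
    lora_alpha=16,   # LoRA scaling alpha
    target_modules=[
        "o_proj", "down_proj", "v_proj",
        "gate_proj", "up_proj", "k_proj", "q_proj",
    ],
    task_type="CAUSAL_LM",
)
```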

Evaluation

MELT-Mixtral-8x7B-Instruct-v0.1 demonstrated an average 4.42% improvement over Mixtral-8x7B-Instruct-v0.1 across the 3 benchmarks. The base Mixtral-8x7B-Instruct-v0.1 model already performs significantly better (65.31%) than the base llama-2-7b-chat-hf model (35.26%) and our MELT-llama-2-7b-chat-v0.1 model (46.33%).

While the improvement on these benchmarks was modest, our training data contained a broad collection of medical text, chats, and multiple-choice questions, and the benefits of this breadth would not be captured by multiple-choice evaluations alone.

Mixtral-8x7B-Instruct-v0.1

  • MedQA average: 62.0 (STEP-1: 62.24, STEP-2&3: 61.72)
  • MA USMLE average: 73.12 (STEP-1: 68.24, STEP-2: 78.16, STEP-3: 72.9)
  • MedMCQA average: 60.82 (Medicine: 56.52, Ophthalmology: 59.52, Anatomy: 62.33, Pathology: 70.93, Physiology: 65.91, Dental: 52.37, Radiology: 66.07, Biochemistry: 75.21, Anaesthesia: 73.91, Gynaecology: 55.56, Pharmacology: 74.72, Social: 50.0, Pediatrics: 60.61, ENT: 73.68, Surgery: 61.69, Microbiology: 61.64, Forensic: 69.77, Psychiatry: 77.78, Skin: 60.0, Orthopaedics: 71.43, Unknown: 100.0)
  • Overall average: 65.31%

MELT-Mixtral-8x7B-Instruct-v0.1

  • MedQA average: 67.19 (STEP-1: 67.55, STEP-2&3: 66.78)
  • MA USMLE average: 73.84 (STEP-1: 74.12, STEP-2: 75.86, STEP-3: 71.96)
  • MedMCQA average: 63.58 (Medicine: 63.04, Ophthalmology: 66.67, Anatomy: 67.12, Pathology: 72.48, Physiology: 67.42, Dental: 54.15, Radiology: 71.43, Biochemistry: 80.17, Anaesthesia: 69.57, Gynaecology: 60.13, Pharmacology: 74.16, Social: 56.67, Pediatrics: 65.15, ENT: 65.79, Surgery: 64.92, Microbiology: 64.38, Forensic: 65.12, Psychiatry: 88.89, Skin: 70.0, Orthopaedics: 78.57, Unknown: 100.0)
  • Overall average: 68.2%

Testing Data, Factors & Metrics

Testing Data

  • MedQA test
  • MedMCQA test
  • MA USMLE
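
The scoring procedure is not documented in detail; below is a generic sketch of how multiple-choice accuracy might be computed from model outputs. The answer-extraction heuristic is an assumption for illustration, not the authors' method.

```python
# Generic sketch of multiple-choice accuracy scoring; the actual evaluation
# harness used for these benchmarks is not specified, and the letter-extraction
# heuristic here is an assumption for illustration.
import re

def extract_choice(generation: str) -> str | None:
    """Pull the first standalone option letter (A-E) out of a model response."""
    match = re.search(r"\b([A-E])\b", generation.upper())
    return match.group(1) if match else None

def accuracy(generations: list[str], gold_answers: list[str]) -> float:
    """Fraction of responses whose extracted letter matches the gold answer."""
    correct = sum(
        extract_choice(gen) == gold.upper()
        for gen, gold in zip(generations, gold_answers)
    )
    return correct / len(gold_answers)

# Example: two questions, one answered correctly.
print(accuracy(["The answer is B.", "I would pick D here."], ["B", "C"]))  # 0.5
```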

Disclaimer:

The use of large language models, such as this one, is provided without warranties or guarantees of any kind. While every effort has been made to ensure accuracy, completeness, and reliability of the information generated, it should be noted that these models may produce responses that are inaccurate, outdated, or inappropriate for specific purposes. Users are advised to exercise discretion and judgment when relying on the information generated by these models. The outputs should not be considered as professional, legal, medical, financial, or any other form of advice. It is recommended to seek expert advice or consult appropriate sources for specific queries or critical decision-making. The creators, developers, and providers of these models disclaim any liability for damages, losses, or any consequences arising from the use, reliance upon, or interpretation of the information provided by these models. The user assumes full responsibility for their interactions and usage of the generated content. By using these language models, users agree to indemnify and hold harmless the developers, providers, and affiliates from any claims, damages, or liabilities that may arise from their use. Please be aware that these models are constantly evolving, and their capabilities, limitations, and outputs may change over time without prior notice. Your use of this language model signifies your acceptance and understanding of this disclaimer.
