
MMedS-Llama3

💻 GitHub Repo 🖨️ arXiv Paper

The official code for "Towards Evaluating and Building Versatile Large Language Models for Medicine"

Introduction

This repository hosts MMedS-Llama-3-8B. Its foundation model, MMed-Llama-3-8B, is a multilingual medical language model that has undergone continued pretraining on MMedC. On top of that base, MMedS-Llama-3-8B was further fine-tuned with supervision on MedS-Ins, a comprehensive dataset built specifically for supervised fine-tuning (SFT), comprising 13.5 million samples across 122 tasks. For more details, please refer to our paper.

Usage

The model can be loaded as follows:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model weights in half precision.
tokenizer = AutoTokenizer.from_pretrained("Henrychur/MMedS-Llama-3-8B")
model = AutoModelForCausalLM.from_pretrained("Henrychur/MMedS-Llama-3-8B", torch_dtype=torch.float16)
  • The inference format is the same as for Llama 3; the inference code is available in the GitHub repo. A minimal sketch is shown below.
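
As a rough sketch only (not the official inference code: it assumes the tokenizer ships with the standard Llama 3 chat template, and the prompt and generation settings are illustrative), inference could look like this:

# Hypothetical example prompt; any medical question works here.
messages = [
    {"role": "user", "content": "What are the first-line treatments for hypertension?"}
]
# Build a Llama-3-style prompt from the chat template bundled with the tokenizer
# (assumption: the tokenizer defines one; otherwise format the prompt manually).
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
# Greedy decoding; the token budget is a placeholder, not a recommendation.
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
# Strip the prompt tokens and decode only the newly generated answer.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))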