# TinyLlama 1.1B Medical
## Model Description
A smaller version of https://huggingface.co/therealcyberlord/llama2-qlora-finetuned-medical, which was fine-tuned from Llama 2 7B.
Fine-tuned on instruction data formatted with `<|user|>` and `<|assistant|>` chat tags, as illustrated below.
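For illustration only (the question is invented, not taken from the training data), a prompt in this style looks roughly like the following; the exact newlines and end-of-sequence tokens follow the TinyLlama chat template:

```
<|user|>
What are the common symptoms of iron-deficiency anemia?</s>
<|assistant|>
```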
## How to Get Started with the Model
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the adapter config, the base model, and the tokenizer, then attach the LoRA adapter
config = PeftConfig.from_pretrained("therealcyberlord/TinyLlama-1.1B-Medical")
model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = PeftModel.from_pretrained(model, "therealcyberlord/TinyLlama-1.1B-Medical")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
```
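Once the adapter is attached, generation works as with any `transformers` causal LM. A minimal sketch, assuming the standard `apply_chat_template` and `generate` APIs; the question is an illustrative placeholder:

```python
import torch

# Format the question with the base model's chat template and generate an answer
messages = [{"role": "user", "content": "What are the common symptoms of iron-deficiency anemia?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```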
## Training Details
### Training Data
Used two data sources:
- BI55/MedText: https://huggingface.co/datasets/BI55/MedText
- MedQuad-MedicalQnADataset: https://huggingface.co/datasets/keivalya/MedQuad-MedicalQnADataset
### Training Procedure
Trained for 1,000 steps on a shuffled combination of the two datasets; a sketch of the data preparation is shown below.
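The training script is not included in this card. The following is a minimal sketch of the data preparation described above, assuming the Hugging Face `datasets` library and the column names of the two source datasets (`Prompt`/`Completion` for MedText, `Question`/`Answer` for MedQuad), which may need adjusting:

```python
from datasets import load_dataset, concatenate_datasets

medtext = load_dataset("BI55/MedText", split="train")
medquad = load_dataset("keivalya/MedQuad-MedicalQnADataset", split="train")

def to_chat(question, answer):
    # Assumed formatting into the <|user|>/<|assistant|> style described above
    return {"text": f"<|user|>\n{question}</s>\n<|assistant|>\n{answer}</s>"}

# Map both sources onto a single "text" column so they can be concatenated
medtext = medtext.map(lambda ex: to_chat(ex["Prompt"], ex["Completion"]),
                      remove_columns=medtext.column_names)
medquad = medquad.map(lambda ex: to_chat(ex["Question"], ex["Answer"]),
                      remove_columns=medquad.column_names)

# Shuffle the combined mixture; fine-tuning then ran for 1,000 steps on it
combined = concatenate_datasets([medtext, medquad]).shuffle(seed=42)
```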
### Framework versions
- PEFT 0.7.2.dev0