Llama2 πŸ¦™ finetuned on medical diagnosis

MedText dataset: https://huggingface.co/datasets/BI55/MedText

1412 pairs of diagnosis cases

About:

The primary objective of this fine-tuning is to equip Llama2 to assist in diagnosing medical cases and diseases. It is not designed to replace real medical professionals; rather, it provides helpful information to users, suggesting potential next steps based on the input and the patterns it has learned from the MedText dataset.

Fine-tuned on Guanaco-style instructions:

```
###Human
###Assistant
```
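A small helper can build prompts in this format at inference time. This is a minimal sketch using the marker strings shown above verbatim; if the deployed adapter expects different spacing or punctuation (e.g. `### Human:`), adjust the constants accordingly.

```python
def format_prompt(question: str) -> str:
    """Wrap a user question in the Guanaco-style template used for fine-tuning.

    Marker strings are taken from the template above; they are assumptions
    about the exact training format, so verify against the adapter if
    generations look malformed.
    """
    return f"###Human: {question}\n###Assistant:"


prompt = format_prompt(
    "A 45-year-old presents with chest pain radiating to the left arm. "
    "What should be considered?"
)
```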

Training procedure

The following bitsandbytes quantization config was used during training:

  • load_in_8bit: False
  • load_in_4bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: False
  • bnb_4bit_compute_dtype: float16
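The values listed above can be reproduced with the `BitsAndBytesConfig` class from `transformers`. This is a sketch assuming `transformers` and `bitsandbytes` are installed; fields not passed explicitly keep their library defaults, which match the remaining entries in the list.

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed above: 4-bit NF4 quantization,
# no double quantization, float16 compute. The llm_int8_* fields are left
# at their defaults, which correspond to the values shown.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```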

Framework versions

  • PEFT 0.5.0.dev0
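Since this repository ships a PEFT (QLoRA) adapter rather than full weights, inference requires loading a base Llama2 model and attaching the adapter on top. The sketch below assumes `meta-llama/Llama-2-7b-hf` as the base model (an assumption; check the model card for the actual base), plus network access and accepted license terms for the Llama2 weights.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base model; verify on the model card
adapter_id = "therealcyberlord/llama2-qlora-finetuned-medical"

# Same 4-bit NF4 setup used during training (see config above).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the QLoRA adapter

inputs = tokenizer(
    "###Human: What could cause persistent fatigue and joint pain?\n###Assistant:",
    return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Merging the adapter into the base weights (`model.merge_and_unload()`) is an option when adapter-free deployment is preferred, at the cost of losing the small adapter footprint.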
Model repository: therealcyberlord/llama2-qlora-finetuned-medical (QLoRA adapter)