---
license: mit
language:
- en
tags:
- phi3
- medical
- doctor
- assistant
- fine-tuned
---
# Model Card for sai1881/HiDoctor

## Model Details

### Model Description

The sai1881/HiDoctor model is a specialized language model fine-tuned to generate medical advice and responses to queries commonly encountered in a healthcare context. It is based on Microsoft's Phi-3-mini-128k-instruct and fine-tuned using a dataset designed for medical chatbot applications. The model aims to assist in providing preliminary medical guidance, making it a valuable tool for digital health applications.
- Developed by: Sai Manoj
- Model type: Causal Language Model
- Language(s) (NLP): English
- License: MIT
- Finetuned from model: Microsoft's Phi-3-mini-128k-instruct
## Uses

### Direct Use
This model is intended for direct integration into health tech applications where automated responses are beneficial, such as virtual health assistants and medical inquiry chatbots. The model provides language generation capabilities that can simulate a conversation between a patient and a doctor, making it useful for preliminary medical advice.
### Out-of-Scope Use
The model should not be used as a substitute for professional medical advice, diagnosis, or treatment. Reliance on any information provided by this model is solely at the user's own risk. Using it for critical healthcare decisions without human oversight is out of scope and strongly discouraged.
## Bias, Risks, and Limitations
This model, while trained on diverse dialogues, might still inherit biases from the data or exhibit unexpected behaviors in generating medical advice. The output should be monitored and evaluated in the context of its use to prevent potential misinformation or harm.
### Recommendations
Users, both direct and downstream, should be informed about the model's limitations and potential biases. It's recommended that outputs be reviewed by medical professionals before being used in real-world scenarios to ensure safety and accuracy.
## How to Get Started with the Model

To get started with the sai1881/HiDoctor model, developers can use the following code snippet to integrate the model into their applications:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Fix the random seed for reproducible generations.
torch.random.manual_seed(0)

# Load the fine-tuned model; trust_remote_code is required for the Phi-3 architecture.
model = AutoModelForCausalLM.from_pretrained(
    "sai1881/HiDoctor",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)

# The tokenizer is shared with the base Phi-3-mini-128k-instruct model.
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")

prompt = """
You’re an AI doctor, who is friendly, meticulous, and reassuring.
Your task is to provide me with detailed health advice based on a set of symptoms I will provide. Try to be sensitive. Please give me a diagnosis and reasoning, suggestions for treatment, and any relevant lifestyle changes that can help.
Keep in mind the need to provide accurate and empathetic responses, considering the patient's well-being at all times.
"""

messages = [
    {"role": "system", "content": prompt},
    {"role": "user", "content": "I have fatigue"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,  # return only the newly generated text
    "temperature": 0.1,         # low temperature for more deterministic medical answers
    "do_sample": True,
}

output = pipe(messages, **generation_args)
print(output[0]["generated_text"])
```
Output: {"Description": "What causes fatigue?", "Doctor": "Hi, dear I have gone through your question. I can understand your concern. You may have some infection or some vitamin deficiency. You should go for complete blood count and vitamin B12 and vitamin D3 test. You should take treatment accordingly. Hope I have answered your question, if you have doubt then I will be happy to answer. Thanks for using health care magic. Wish you a very good health."}
## Training Details

### Enhanced Training Details

The sai1881/HiDoctor model was fine-tuned with training techniques that optimize performance and memory usage, making training feasible on a range of hardware configurations. The main aspects and configurations are detailed below.
#### Training Environment and Libraries

- PEFT Configuration: Parameter-efficient fine-tuning (PEFT) was implemented using LoraConfig, adapting layers specifically for causal language modeling. This includes LoRA (Low-Rank Adaptation) with specific dropout settings, bias configuration, and targeted layers within the model architecture, improving learning efficiency without modifying a large number of parameters. A minimal configuration sketch is shown below.
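The exact LoRA hyperparameters are not published in this card; the snippet below is a minimal sketch of a PEFT LoraConfig for causal language modeling. The values for `r`, `lora_alpha`, `lora_dropout`, `bias`, and `target_modules` are illustrative assumptions, not the actual training settings.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Illustrative values only -- the real rank, alpha, dropout, and target
# modules used for sai1881/HiDoctor are not documented in this card.
lora_config = LoraConfig(
    task_type="CAUSAL_LM",                  # fine-tuning for causal language modeling
    r=16,                                   # low-rank dimension (assumed)
    lora_alpha=32,                          # LoRA scaling factor (assumed)
    lora_dropout=0.05,                      # dropout on LoRA layers (assumed)
    bias="none",                            # bias handling (assumed)
    target_modules=["qkv_proj", "o_proj"],  # targeted attention projections (assumed)
)

base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct", trust_remote_code=True
)
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # shows how few parameters are updated
```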
#### Data Processing and Augmentation

- Dataset Handling: The model was trained on the "AI Medical Chatbot Dataset", which contains over 250,000 dialogues. Data was pre-processed into chat-style formats, with random character manipulations (insertions and deletions) applied to simulate the typing patterns and errors typically seen in real-world chat applications.
- Dynamic Tokenization: Tokenization was adjusted dynamically, using a customized method that accounts for the chat format and maximum-length settings to ensure optimal sequence handling during training. A sketch of this preprocessing is shown after this list.
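The preprocessing code itself is not included in this card; the following is a minimal sketch, under assumptions, of how chat-format conversion and random character insertions/deletions could be applied before tokenization. The probabilities, the `perturb_text`/`format_chat` helper names, and the dataset id and field names are hypothetical.

```python
import random

def perturb_text(text: str, p_insert: float = 0.02, p_delete: float = 0.02) -> str:
    """Randomly insert or delete characters to mimic real-world typing errors.
    The probabilities here are illustrative, not the values used in training."""
    chars = []
    for ch in text:
        if random.random() < p_delete:
            continue  # simulate an accidental character deletion
        chars.append(ch)
        if random.random() < p_insert:
            chars.append(random.choice("abcdefghijklmnopqrstuvwxyz "))  # stray keystroke
    return "".join(chars)

def format_chat(example: dict) -> dict:
    """Convert a raw dialogue record into chat-style messages for fine-tuning."""
    return {
        "messages": [
            {"role": "user", "content": perturb_text(example["Patient"])},
            {"role": "assistant", "content": example["Doctor"]},
        ]
    }

# Example usage with the Hugging Face datasets library
# (dataset id and field names are assumptions):
# from datasets import load_dataset
# ds = load_dataset("ruslanmv/ai-medical-chatbot", split="train").map(format_chat)
```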
#### Training Strategy

- DeepSpeed Integration: Used DeepSpeed's ZeRO-3 optimization for memory efficiency, enabling larger batch sizes and a reduced memory footprint during training.
- Training Arguments: Configured with a cosine learning-rate scheduler, BF16 mixed precision for faster computation, and gradient checkpointing to handle longer sequences effectively.
- Batch and Memory Management: Employed gradient accumulation and batch-size adjustments to manage GPU memory efficiently, ensuring stable training without overloading system resources. A configuration sketch follows this list.
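The exact hyperparameters are not listed in this card; the snippet below is a hedged sketch of `transformers.TrainingArguments` matching the strategy described above (cosine scheduler, BF16, gradient checkpointing, gradient accumulation, and ZeRO-3 via a DeepSpeed config file). The batch sizes, learning rate, and the `ds_zero3.json` path are placeholders.

```python
from transformers import TrainingArguments

# Illustrative values only -- the real batch sizes, learning rate, and
# DeepSpeed config used for sai1881/HiDoctor are not published here.
training_args = TrainingArguments(
    output_dir="hidoctor-checkpoints",
    per_device_train_batch_size=2,      # assumed
    gradient_accumulation_steps=8,      # assumed; keeps the effective batch size up
    learning_rate=2e-4,                 # assumed
    lr_scheduler_type="cosine",         # cosine learning-rate schedule
    bf16=True,                          # BF16 mixed precision
    gradient_checkpointing=True,        # trades compute for memory on long sequences
    deepspeed="ds_zero3.json",          # hypothetical ZeRO-3 config file
    logging_steps=10,                   # assumed
)
```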
#### Evaluation and Output

- Model Saving and Evaluation: Checkpoints were saved every 50 steps, with a limit on the number of retained checkpoints to manage disk space. Model outputs were evaluated on a separate test dataset to check generalization to unseen medical dialogues; the corresponding checkpointing options are sketched below.
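Expressed as `TrainingArguments` options, the checkpointing policy above might look like the following sketch; only `save_steps=50` comes from the card, while the checkpoint limit and evaluation cadence are assumptions.

```python
from transformers import TrainingArguments

checkpoint_args = TrainingArguments(
    output_dir="hidoctor-checkpoints",
    save_strategy="steps",
    save_steps=50,           # periodic saving every 50 steps (from the card)
    save_total_limit=3,      # cap on retained checkpoints (assumed value)
    eval_strategy="steps",   # evaluate periodically on the held-out test split
    eval_steps=50,           # assumed cadence
)
```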
#### Debugging and Logging

- Logging Configuration: Comprehensive logging was set up to track training progress and configuration, aiding debugging and ensuring transparency throughout the training process. A minimal example follows.
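The card does not specify the logging setup; a minimal sketch using the standard library together with the transformers logging utilities might look like this, with the file name, level, and format chosen here as assumptions.

```python
import logging
from transformers.utils import logging as hf_logging

# Write run-level records to a log file (file name, format, and level are assumed).
logging.basicConfig(
    filename="training.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
hf_logging.set_verbosity_info()  # surface Trainer progress and configuration details
```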
### Environment and Software Details

To ensure optimal performance and compatibility, the sai1881/HiDoctor model was developed and trained in a precisely specified software and hardware environment. The detailed specifications are listed below.
#### System Configuration
- Operating System: Linux 5.15.146.1 on Microsoft WSL2, tailored for high-performance computing tasks.
- Processor and Architecture: x86_64 architecture with 64-bit ELF, utilizing a robust multi-core setup.
- Memory: A total of 62.64 GB system memory with 57.74 GB available, ensuring sufficient resources for large-scale data processing and model training.
- Cores: 14 physical cores and 28 logical cores, providing substantial parallel processing capability.
#### GPU and CUDA Details
- GPUs: Two NVIDIA RTX A4500 graphics cards, each equipped with 19.99 GB of memory and a compute capability of 8.6, which is ideal for deep learning and large model training.
- CUDA Version: CUDA 11.8, allowing for efficient exploitation of GPU capabilities in training and inference processes.
#### Software Versions
- Python Version: Python 3.10.12, supporting modern software libraries and frameworks.
- PyTorch Version: PyTorch 2.2.1+cu118, optimized for CUDA 11.8 to leverage GPU acceleration.
- Transformers Library Version: 4.41.2, used for managing pre-trained models and implementing custom training routines.
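To reproduce or verify a comparable environment, the library and CUDA versions can be checked as follows; this is a convenience sketch, not part of the original card, and the "expected" values in the comments simply restate the specifications above.

```python
import platform
import torch
import transformers

print("Python:", platform.python_version())        # expected: 3.10.12
print("PyTorch:", torch.__version__)                # expected: 2.2.1+cu118
print("Transformers:", transformers.__version__)    # expected: 4.41.2
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA version:", torch.version.cuda)      # expected: 11.8
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(i, props.name,
              f"{props.total_memory / 1024**3:.2f} GB",
              f"sm_{props.major}{props.minor}")
```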
This environment provides a robust foundation for developing and training advanced machine learning models such as sai1881/HiDoctor, ensuring compatibility and performance optimization specific to the needs of medical dialogue generation.