---
language: en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- ruslanmv
- llama
- trl
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- ruslanmv/ai-medical-dataset
---

# ai-medical-model-32bit: Fine-Tuned Llama3 for Technical Medical Questions

[![](future.jpg)](https://ruslanmv.com/)

This repository provides a fine-tuned version of the powerful Llama3 8B Instruct model, specifically designed to answer medical questions in an informative way.
It leverages the rich knowledge contained in the AI Medical Dataset ([ruslanmv/ai-medical-dataset](https://huggingface.co/datasets/ruslanmv/ai-medical-dataset)).

**Model & Development**

- **Developed by:** ruslanmv
- **License:** Apache-2.0
- **Finetuned from model:** meta-llama/Meta-Llama-3-8B-Instruct

**Key Features**

- **Medical Focus:** Optimized to address health-related inquiries.
- **Knowledge Base:** Trained on a comprehensive medical chatbot dataset.
- **Text Generation:** Generates informative and potentially helpful responses.

**Installation**

This model is accessible through the Hugging Face Transformers library. Install the required dependencies with pip:

```bash
python -m pip install --upgrade pip
pip3 install torch==2.2.1 torchvision torchaudio xformers --index-url https://download.pytorch.org/whl/cu121
# transformers is needed for the usage example below, alongside bitsandbytes and accelerate
pip install transformers bitsandbytes accelerate
```
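Before loading the model, it can help to confirm that the CUDA build of PyTorch is actually in use, since the usage example below places the 4-bit quantized weights on the GPU. This is only an optional sanity check:

```python
import torch

# The usage example below loads the model in 4-bit on a GPU,
# so a CUDA-enabled PyTorch build is expected here.
print(torch.__version__)          # with the install above, this should report a +cu121 build
print(torch.cuda.is_available())  # should print True
```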
**Usage Example**

Here's a Python code snippet demonstrating how to interact with the `ai-medical-model-32bit` model and generate answers to your medical questions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_name = "ruslanmv/ai-medical-model-32bit"
device_map = "auto"

# Load the model with 4-bit quantization so it fits on a single GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    trust_remote_code=True,
    use_cache=False,
    device_map=device_map,
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

def askme(question):
    # Build a Llama 3 style prompt with a system message and the user question
    prompt = f"<|start_header_id|>system<|end_header_id|> You are a Medical AI chatbot assistant. <|eot_id|><|start_header_id|>User: <|end_header_id|>This is the question: {question}<|eot_id|>"
    # Tokenize the input and generate the output
    inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
    outputs = model.generate(**inputs, max_new_tokens=256, use_cache=True)
    answer = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
    # Remove the prompt: the decoded text starts with the system/question text,
    # so keep only what follows the first line break
    answer_parts = answer.split("\n", 1)
    if len(answer_parts) > 1:
        answers = answer_parts[1].strip()  # keep only the generated part
    else:
        answers = answer  # fall back to the full decoded text
    print(f"Answer: {answers}")

# Example usage
question = "What was the main cause of the inflammatory CD4+ T cells?"
askme(question)
```
A typical answer looks like this:

```
The main cause of inflammatory CD4+ T cells is typically attributed to an imbalance in the immune system's response to an antigen, leading to an overactive immune response. This can occur due to various factors, such as:

1. **Autoimmune disorders**: In conditions like rheumatoid arthritis, lupus, or multiple sclerosis, the immune system mistakenly attacks the body's own tissues, leading to chronic inflammation and the activation of CD4+ T cells.
2. **Infections**: Certain infections, like tuberculosis or HIV, can trigger an excessive immune response, resulting in the activation of CD4+ T cells.
3. **Environmental factors**: Exposure to pollutants, toxins, or allergens can trigger an immune response, leading to the activation of CD4+ T cells.
4. **Genetic predisposition**: Some individuals may be more susceptible to developing inflammatory CD4+ T cells due to their genetic makeup.
5. **Immunosuppression**: Weakened immune systems, such as those resulting from immunosuppressive therapy or HIV/AIDS, can lead to an overactive immune response and the activation of CD4+ T cells.

These factors can lead to the activation of CD4+
```
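The `askme` helper writes the Llama 3 special tokens by hand. As an alternative, the prompt can be assembled with the tokenizer's chat template, which also makes the manual prompt-stripping step unnecessary because only the newly generated tokens are decoded. This is a minimal sketch that assumes the tokenizer inherited the standard Meta-Llama-3 chat template and reuses the `model` and `tokenizer` objects loaded above (`askme_chat` is a hypothetical helper name, not part of this repository):

```python
def askme_chat(question):
    # Build the prompt from structured messages instead of hand-written special tokens
    messages = [
        {"role": "system", "content": "You are a Medical AI chatbot assistant."},
        {"role": "user", "content": question},
    ]
    input_ids = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=True,  # append the assistant header so the model starts answering
        return_tensors="pt",
    ).to(model.device)
    outputs = model.generate(input_ids, max_new_tokens=256, use_cache=True)
    # Decode only the newly generated tokens, so no manual prompt stripping is needed
    answer = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
    print(f"Answer: {answer.strip()}")

askme_chat("What was the main cause of the inflammatory CD4+ T cells?")
```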
**Important Note**

This model is intended for informational purposes only and should not be used as a substitute for professional medical advice. Always consult with a qualified healthcare provider for any medical concerns.

**License**

This model is distributed under the Apache License 2.0 (see LICENSE file for details).

**Contributing**

We welcome contributions to this repository! If you have improvements or suggestions, feel free to create a pull request.

**Disclaimer**

While we strive to provide informative responses, the accuracy of the model's outputs cannot be guaranteed.