This model was converted to GGUF format from [`Ellbendls/llama-3.2-3b-chat-doctor`](https://huggingface.co/Ellbendls/llama-3.2-3b-chat-doctor) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/Ellbendls/llama-3.2-3b-chat-doctor) for more details on the model.
---

## Model details

Llama-3.2-3B-Chat-Doctor is a specialized medical question-answering model based on the Llama 3.2 3B architecture, fine-tuned specifically to provide accurate and helpful responses to medical-related queries.

- **Developed by:** Ellbendl Satria
- **Model type:** Language Model (Conversational AI)
- **Language:** English
- **Base Model:** Meta Llama-3.2-3B-Instruct
- **Model Size:** 3 Billion Parameters
- **Specialization:** Medical Question Answering
- **License:** llama3.2
## Model Capabilities

- Provides informative responses to medical questions
- Assists in understanding medical terminology and health-related concepts
- Offers preliminary medical information (not a substitute for professional medical advice)

## Direct Use

This model can be used for:

- Providing general medical information
- Explaining medical conditions and symptoms
- Offering basic health-related guidance
- Supporting medical education and patient communication
## Limitations and Important Disclaimers

⚠️ **Critical warnings:**

- **Not a medical professional:** this model is NOT a substitute for professional medical advice, diagnosis, or treatment.
- Always consult a qualified healthcare provider for medical concerns.
- Treat the model's responses as informational only, not as medical recommendations.

## Out-of-Scope Use

The model SHOULD NOT be used for:

- Providing emergency medical advice
- Diagnosing specific medical conditions
- Replacing professional medical consultation
- Making critical healthcare decisions
## Bias, Risks, and Limitations

### Potential Biases

- May reflect biases present in the training data
- Responses might not account for individual patient variations
- Limited by the comprehensiveness of the training dataset

### Technical Limitations

- Accuracy is limited to the knowledge in the training data
- May not capture the most recent medical research or developments
- Cannot perform physical examinations or medical tests

### Recommendations

- Always verify medical information with professional healthcare providers
- Use the model as a supplementary information source
- Be aware of potential inaccuracies or incomplete information
## Training Details

### Training Data

- **Source Dataset:** ruslanmv/ai-medical-chatbot
- **Base Model:** Meta Llama-3.2-3B-Instruct

### Training Procedure

[Provide details about the fine-tuning process, if available]

- Fine-tuning approach
- Computational resources used
- Training duration
- Specific techniques applied during fine-tuning

## How to Use the Model

### Hugging Face Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Ellbendls/llama-3.2-3b-chat-doctor"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Llama tokenizers ship without a pad token; reuse EOS so padding works
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Example usage
input_text = "I had a surgery which ended up with some failures. What can I do to fix it?"

# Prepare inputs with explicit padding and attention mask
inputs = tokenizer(input_text, return_tensors="pt", padding=True, truncation=True)

# Generate a response with explicit sampling parameters
outputs = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_new_tokens=150,      # maximum number of new tokens to generate
    do_sample=True,          # enable sampling for more diverse responses
    temperature=0.7,         # control randomness of the output
    top_p=0.9,               # nucleus sampling to maintain quality
    num_return_sequences=1,  # number of generated sequences
)

# Decode the generated response
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
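The snippet above feeds raw text to an instruction-tuned checkpoint; Llama 3.2 instruct models generally respond better when the prompt follows the Llama 3 chat format. A minimal sketch of that format, assuming this checkpoint inherits the standard Llama 3 template (`tokenizer.apply_chat_template` is the authoritative source and should be preferred when the tokenizer is loaded):

```python
# Sketch of the Llama 3 family chat format. This approximates what
# tokenizer.apply_chat_template produces; the template shipped with this
# checkpoint is authoritative and may differ in details.
def build_llama3_prompt(messages):
    """messages: list of {"role": ..., "content": ...} dicts."""
    prompt = "<|begin_of_text|>"
    for m in messages:
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Trailing assistant header cues the model to generate its reply
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful medical assistant."},
    {"role": "user", "content": "What are common symptoms of anemia?"},
]
print(build_llama3_prompt(messages))
```

Passing the formatted string to the tokenizer in place of `input_text` keeps the rest of the generation code unchanged.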
## Ethical Considerations

This model is developed with the intent to provide helpful, accurate, and responsible medical information. Users are encouraged to:

- Use the model responsibly
- Understand its limitations
- Seek professional medical advice for serious health concerns

---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
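For example (a sketch: the `--hf-repo` and `--hf-file` values below are illustrative placeholders — substitute the actual quantized GGUF filename from this repo's file list):

```shell
# Install the llama.cpp CLI and server (macOS and Linux)
brew install llama.cpp

# Run inference directly from a Hugging Face repo; --hf-file must name a
# GGUF file that actually exists in the repo (the names here are placeholders)
llama-cli --hf-repo Triangle104/llama-3.2-3b-chat-doctor-Q4_K_M-GGUF \
  --hf-file llama-3.2-3b-chat-doctor-q4_k_m.gguf \
  -p "What are common symptoms of anemia?"
```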