Commit 80c1671 by Triangle104 (parent: 89bafee): Update README.md

This model was converted to GGUF format from [`prithivMLmods/Llama-Doctor-3.2-3B-Instruct`](https://huggingface.co/prithivMLmods/Llama-Doctor-3.2-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/prithivMLmods/Llama-Doctor-3.2-3B-Instruct) for more details on the model.

---

Model details:

The Llama-Doctor-3.2-3B-Instruct model is designed for text generation tasks, particularly in contexts where instruction-following capability is needed. It is a fine-tuned version of the base Llama-3.2-3B-Instruct model, optimized for understanding and responding to user-provided instructions or prompts. It has been trained on a specialized dataset, avaliev/chat_doctor, to enhance its performance in conversational and advisory responses, especially in medical and technical fields.

Key Use Cases:

- Conversational AI: engage in dialogue, answer questions, and provide responses based on user instructions.
- Text Generation: generate content, summaries, explanations, or solutions to problems based on given prompts.
- Instruction Following: understand and execute instructions, potentially in complex or specialized domains such as medical, technical, or academic fields.

The model leverages a PyTorch-based architecture and comes with configuration files, tokenizer files, and a special-tokens map to facilitate smooth deployment and interaction.

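Since the tokenizer and special-tokens files are mentioned above, it may help to see how an instruct prompt is assembled. The sketch below assumes this fine-tune keeps the standard Llama 3 chat template (verify against the repo's `tokenizer_config.json` before relying on it):

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a Llama-3-style instruct prompt by hand.

    Assumption: the fine-tune uses the base Llama 3 special tokens
    (<|begin_of_text|>, <|start_header_id|>, <|eot_id|>).
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # Trailing assistant header cues the model to generate its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt(
    "You are a helpful medical assistant.",
    "What are common causes of a persistent cough?",
)
print(prompt)
```

In practice a chat frontend (or llama.cpp's built-in chat template handling) does this for you; building the string manually is mainly useful for raw-completion endpoints.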
Intended Applications:

- Chatbots for customer support or virtual assistants.
- Medical consultation tools for generating advice or answering medical queries (given the model's training on the chat_doctor dataset).
- Content-creation tools that help generate text based on specific instructions.
- Problem-solving assistants that offer explanations or answers to user queries, particularly in instructional contexts.

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
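GGUF-my-repo READMEs normally continue with the install and run commands; a minimal sketch follows. The `--hf-repo` and `--hf-file` values are placeholders, not the actual names — check this repo's file list for the real GGUF repo ID and filename (including the quantization suffix):

```shell
# Install llama.cpp via Homebrew (macOS and Linux).
brew install llama.cpp

# Run inference with the CLI, pulling the GGUF file from the Hub.
# Placeholders: substitute the actual GGUF repo and filename.
llama-cli --hf-repo <user>/<model>-GGUF \
  --hf-file <model>-<quant>.gguf \
  -p "What are common causes of a persistent cough?"
```

`llama-server` accepts the same `--hf-repo`/`--hf-file` pair if you prefer an OpenAI-compatible HTTP endpoint over a one-shot CLI run.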