---
license: creativeml-openrail-m
datasets:
- avaliev/umls
language:
- en
base_model: prithivMLmods/Qwen-UMLS-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- safetensors
- Unified Medical Language System
- Qwen2.5
- 7B
- Instruct
- Medical
- text-generation-inference
- National Library of Medicine
- umls
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen-UMLS-7B-Instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`prithivMLmods/Qwen-UMLS-7B-Instruct`](https://huggingface.co/prithivMLmods/Qwen-UMLS-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/prithivMLmods/Qwen-UMLS-7B-Instruct) for more details on the model.
---
## Model details
The Qwen-UMLS-7B-Instruct model is a specialized, instruction-tuned language model designed for medical and healthcare-related tasks. It is fine-tuned on the Qwen2.5-7B-Instruct base model using the UMLS (Unified Medical Language System) dataset, making it an invaluable tool for medical professionals, researchers, and developers building healthcare applications.
**Key Features:**

1. **Medical Expertise:** Trained on the UMLS dataset, ensuring deep domain knowledge of medical terminology, diagnostics, and treatment plans.
2. **Instruction-Following:** Designed to handle complex queries with clarity and precision, suitable for diagnostic support, patient education, and research.
3. **High-Parameter Model:** Leverages 7 billion parameters to deliver detailed, contextually accurate responses.
**Training Details:**

- **Base Model:** [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
- **Dataset:** [avaliev/umls](https://huggingface.co/datasets/avaliev/umls), a comprehensive dataset of medical terminologies, relationships, and use cases with 99.1k samples (a loading sketch follows below).
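For reference, here is a minimal sketch of inspecting that dataset with the Hugging Face `datasets` library; it assumes the dataset exposes a default `train` split:

```python
from datasets import load_dataset

# Load the UMLS-derived dataset used for fine-tuning
# (assumes a default "train" split; adjust if the repo differs)
ds = load_dataset("avaliev/umls", split="train")

print(ds)      # number of rows and column names
print(ds[0])   # inspect a single sample
```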
**Capabilities:**

1. **Clinical Text Analysis:** Interpret medical notes, prescriptions, and research articles.
2. **Question Answering:** Answer medical queries, explain symptoms, and suggest treatments based on user prompts.
3. **Educational Support:** Assist in learning medical terminology and understanding complex concepts.
4. **Healthcare Applications:** Integrate into clinical decision-support systems or patient care applications.
**Usage Instructions:**

1. **Setup:** Download all files and ensure compatibility with the Hugging Face Transformers library.
2. **Loading the Model:**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Qwen-UMLS-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
3. **Generate Medical Text** (a chat-template variant is sketched after this list):

```python
input_text = "What are the symptoms and treatments for diabetes?"
inputs = tokenizer(input_text, return_tensors="pt")
# do_sample=True is required for temperature to take effect
outputs = model.generate(**inputs, max_length=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
4. **Customizing Outputs:** Modify `generation_config.json` to tune the output style (a programmatic alternative is sketched below):
   - `temperature` for creativity vs. determinism.
   - `max_length` for concise or extended responses.
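Since the underlying model is an instruct-tuned Qwen2.5 chat model, prompting through the tokenizer's chat template generally produces better-structured answers than raw text completion. A minimal sketch, continuing from the loading snippet above (the system prompt is illustrative, not prescribed by the model card):

```python
messages = [
    {"role": "system", "content": "You are a helpful medical assistant."},
    {"role": "user", "content": "What are the symptoms and treatments for diabetes?"},
]

# Render the conversation with the model's chat template, then generate
prompt_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(prompt_ids, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][prompt_ids.shape[-1]:], skip_special_tokens=True))
```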
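The same parameters can also be set programmatically via `transformers.GenerationConfig` instead of editing `generation_config.json`; the values below are illustrative, not tuned recommendations:

```python
from transformers import GenerationConfig

gen_config = GenerationConfig(
    temperature=0.7,    # lower = more deterministic, higher = more creative
    max_length=200,     # token budget for prompt + completion
    do_sample=True,     # sampling must be on for temperature to matter
)
outputs = model.generate(**inputs, generation_config=gen_config)
```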
**Applications:**

1. **Clinical Support:** Assist healthcare providers with quick, accurate information retrieval.
2. **Patient Education:** Provide patients with understandable explanations of medical conditions.
3. **Medical Research:** Summarize or analyze complex medical research papers.
4. **AI-Driven Diagnostics:** Integrate with diagnostic systems for preliminary assessments.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen-UMLS-7B-Instruct-Q4_K_M-GGUF --hf-file qwen-umls-7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen-UMLS-7B-Instruct-Q4_K_M-GGUF --hf-file qwen-umls-7b-instruct-q4_k_m.gguf -c 2048
```
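Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API (on port 8080 by default). A minimal sketch of querying it from Python, assuming the third-party `requests` package is installed:

```python
import requests

# llama-server listens on http://localhost:8080 by default and serves
# an OpenAI-compatible /v1/chat/completions endpoint
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "What are the symptoms and treatments for diabetes?"}
        ],
        "temperature": 0.7,
        "max_tokens": 256,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```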
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/Qwen-UMLS-7B-Instruct-Q4_K_M-GGUF --hf-file qwen-umls-7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/Qwen-UMLS-7B-Instruct-Q4_K_M-GGUF --hf-file qwen-umls-7b-instruct-q4_k_m.gguf -c 2048
```