---
license: creativeml-openrail-m
datasets:
  - avaliev/umls
language:
  - en
base_model: prithivMLmods/Qwen-UMLS-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
  - safetensors
  - Unified Medical Language System
  - Qwen2.5
  - 7B
  - Instruct
  - Medical
  - text-generation-inference
  - National Library of Medicine
  - umls
  - llama-cpp
  - gguf-my-repo
---

# Triangle104/Qwen-UMLS-7B-Instruct-Q8_0-GGUF

This model was converted to GGUF format from [prithivMLmods/Qwen-UMLS-7B-Instruct](https://huggingface.co/prithivMLmods/Qwen-UMLS-7B-Instruct) using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.


## Model details

The Qwen-UMLS-7B-Instruct model is a specialized, instruction-tuned language model designed for medical and healthcare-related tasks. It is fine-tuned on the Qwen2.5-7B-Instruct base model using the UMLS (Unified Medical Language System) dataset, making it an invaluable tool for medical professionals, researchers, and developers building healthcare applications.

## Key Features

- **Medical Expertise:** Trained on the UMLS dataset, ensuring deep domain knowledge of medical terminology, diagnostics, and treatment plans.
- **Instruction-Following:** Designed to handle complex queries with clarity and precision, suitable for diagnostic support, patient education, and research.
- **High-Parameter Model:** Leverages 7 billion parameters to deliver detailed, contextually accurate responses.

## Training Details

- **Base Model:** Qwen2.5-7B-Instruct
- **Dataset:** avaliev/UMLS, a comprehensive dataset of medical terminologies, relationships, and use cases with 99.1k samples.

## Capabilities

- **Clinical Text Analysis:** Interpret medical notes, prescriptions, and research articles.
- **Question-Answering:** Answer medical queries, provide explanations for symptoms, and suggest treatments based on user prompts.
- **Educational Support:** Assist in learning medical terminology and understanding complex concepts.
- **Healthcare Applications:** Integrate into clinical decision-support systems or patient care applications.

## Usage Instructions

### Setup

Download all files and ensure compatibility with the Hugging Face Transformers library.

### Loading the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Qwen-UMLS-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```

### Generate Medical Text

```python
input_text = "What are the symptoms and treatments for diabetes?"
inputs = tokenizer(input_text, return_tensors="pt")
# do_sample=True is required for temperature to take effect.
outputs = model.generate(**inputs, max_length=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### Customizing Outputs

Modify `generation_config.json` to optimize output style (a programmatic alternative is sketched after this list):

- `temperature` trades creativity against determinism.
- `max_length` controls concise vs. extended responses.
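As an alternative to editing the JSON file by hand, the same settings can be adjusted through the `GenerationConfig` API in `transformers`; a minimal sketch reusing `model_name`, `model`, and `inputs` from the snippets above:

```python
from transformers import GenerationConfig

# Load the checkpoint's generation defaults, tweak them, and pass
# them explicitly at generation time.
gen_config = GenerationConfig.from_pretrained(model_name)
gen_config.temperature = 0.3   # lower = more deterministic
gen_config.max_length = 512    # allow longer answers
gen_config.do_sample = True    # temperature only applies when sampling

outputs = model.generate(**inputs, generation_config=gen_config)
```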

## Applications

- **Clinical Support:** Assist healthcare providers with quick, accurate information retrieval.
- **Patient Education:** Provide patients with understandable explanations of medical conditions.
- **Medical Research:** Summarize or analyze complex medical research papers.
- **AI-Driven Diagnostics:** Integrate with diagnostic systems for preliminary assessments.


## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI

```bash
llama-cli --hf-repo Triangle104/Qwen-UMLS-7B-Instruct-Q8_0-GGUF --hf-file qwen-umls-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server

```bash
llama-server --hf-repo Triangle104/Qwen-UMLS-7B-Instruct-Q8_0-GGUF --hf-file qwen-umls-7b-instruct-q8_0.gguf -c 2048
```
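Once `llama-server` is running (it listens on port 8080 by default), it exposes an OpenAI-compatible `/v1/chat/completions` endpoint. A minimal query sketch using only the Python standard library; the prompt is illustrative:

```python
import json
import urllib.request

# Minimal chat request against a locally running llama-server.
payload = {
    "messages": [
        {"role": "user", "content": "What are the symptoms and treatments for diabetes?"}
    ],
    "max_tokens": 200,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])
```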

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo Triangle104/Qwen-UMLS-7B-Instruct-Q8_0-GGUF --hf-file qwen-umls-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo Triangle104/Qwen-UMLS-7B-Instruct-Q8_0-GGUF --hf-file qwen-umls-7b-instruct-q8_0.gguf -c 2048
```