
Palmyra-Med, a powerful LLM designed for healthcare

Model Description

  • Developed by: Writer
  • Language(s) (NLP): English
  • License: Writer open model
  • Finetuned from model: Palmyra-X-003
  • Context window: 32,768 tokens

Model Details

Palmyra-Med-70B-32k, created by Writer, builds upon the foundation of Palmyra-Med-70b, extending the context length to meet the needs of the healthcare industry. It is the leading LLM on biomedical benchmarks, with an average score of 85.87%, outperforming GPT-4, Claude Opus, Gemini, the Med-PaLM-2 base model, and a medically trained human test-taker.


Specialized for Biomedical Applications

Palmyra-Med-70B-32k is meticulously designed to meet the unique linguistic and knowledge demands of the medical and life sciences sectors. It has been fine-tuned on an extensive collection of high-quality biomedical data, ensuring it can comprehend and generate text with precise domain-specific accuracy and fluency.

Our training pipeline combines a DPO preference dataset, a carefully crafted fine-tuning recipe, and a diverse custom medical instruction dataset, making the model highly adept at handling the specific needs of this field. Key components of our training pipeline include:

  • Policy optimization: Direct Preference Optimization (DPO) to align the model with preferred responses.
  • Fine-tuning dataset: a custom medical instruction dataset built in-house at Writer.
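To make the DPO step above concrete, here is a minimal, illustrative sketch of the DPO objective in PyTorch. This is not Writer's training code; the function name, `beta` value, and toy tensors are assumptions for illustration only.

```python
# Minimal sketch of the DPO objective (illustrative, not Writer's training code).
# Each example pairs a preferred ("chosen") and a rejected completion for the
# same prompt; the loss pushes the policy's log-ratio above the frozen
# reference model's for chosen completions.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Log-ratios of the policy vs. the frozen reference model
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Bradley-Terry preference loss: -log sigmoid(beta * (ratio_chosen - ratio_rejected))
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Toy tensors standing in for summed token log-probs of whole completions
loss = dpo_loss(torch.tensor([-10.0]), torch.tensor([-12.0]),
                torch.tensor([-11.0]), torch.tensor([-11.5]))
```

The loss decreases as the policy assigns relatively more probability mass to chosen completions than the reference model does, which is how DPO optimizes preferences without a separate reward model.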

Intended Use

Intended use cases: Palmyra-Med-70B-32k is intended for non-commercial and research use in English. Instruction-tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural-language generation tasks.

Out-of-scope use: use in any manner that violates applicable laws or regulations (including trade-compliance laws); use in any other way prohibited by Writer's Acceptable Use Policy and the Writer open model license; use in languages other than English.

Note: Developers may fine-tune Palmyra-Med-70b-32k models for languages beyond English provided they comply with the Writer open model license and the Acceptable Use Policy.

Watermarks: All models built by Writer.com contain watermarks to detect and prevent misuse and illegal use.

Use with transformers

You can run conversational inference using the Transformers Auto classes with the generate() function, as in the following example.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Writer/Palmyra-Med-70B-32k"

tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    attn_implementation="flash_attention_2",
)

messages = [
    {
        "role": "system",
        "content": "You are a highly knowledgeable and experienced expert in the healthcare and biomedical field, possessing extensive medical knowledge and practical expertise.",
    },
    {
        "role": "user",
        "content": "Does danzhi Xiaoyao San ameliorate depressive-like behavior by shifting toward serotonin via the downregulation of hippocampal indoleamine 2,3-dioxygenase?",
    },
]

input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

gen_conf = {
    "max_new_tokens": 256,
    "eos_token_id": [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")],
    "do_sample": False,  # greedy decoding; set do_sample=True to use temperature/top_p
}

with torch.inference_mode():
    output_id = model.generate(input_ids, **gen_conf)

# Decode only the newly generated tokens, skipping the prompt
output_text = tokenizer.decode(output_id[0][input_ids.shape[1]:], skip_special_tokens=True)

print(output_text)

Evaluation Results

Palmyra-Med-70B-32k outperforms larger models like GPT-4, Gemini and Med-PaLM-1 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 85.9% despite having fewer parameters. Its strong performance in tasks like Clinical KG, Medical Genetics, and PubMedQA underscores its effective grasp of biomedical knowledge.

Performance on Biomedical Benchmarks

Palmyra-Med-70B-32k Performance

Palmyra-Med-70B-32k Performance Heat Map

We ran the needle-in-a-haystack evaluation for Palmyra-Med-70B-32k, with the following results:

Palmyra-Med-70B-32k Performance NIH

In this needle-in-a-haystack evaluation, the Palmyra-Med-70B-32k model achieved near-perfect scores, highlighting its robust capability to retrieve information from extensive medical documents.
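A needle-in-a-haystack probe hides a fact at varying depths in filler context and checks whether the model can retrieve it. The sketch below shows the general pattern; the needle text, filler, and scoring rule are invented for illustration and are not Writer's actual harness.

```python
# Hypothetical needle-in-a-haystack probe (illustrative, not Writer's harness).
# A "needle" fact is buried at a chosen fractional depth in filler text,
# and the model is asked to retrieve it.
needle = "The patient's MRN for this study is 4471."
filler = "Routine follow-up visit with no acute findings. " * 200

def build_haystack(depth: float) -> str:
    """Insert the needle at a fractional depth (0.0 = start, 1.0 = end)."""
    cut = int(len(filler) * depth)
    return filler[:cut] + needle + filler[cut:]

prompt = build_haystack(0.5) + "\n\nWhat is the patient's MRN?"
# The prompt would be sent through the chat template shown above; a run is
# scored correct if the retrieved value ("4471") appears in the reply.
```

Sweeping `depth` across the context window, and scaling the filler toward 32k tokens, yields the heat-map-style results reported above.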

Medical Use Cases

Palmyra-Med-70B-32k excels in analyzing and summarizing complex clinical notes, EHR data, and discharge summaries, extracting key information to generate concise, structured summaries. It helps enhance clinical decision-making by performing advanced clinical entity recognition, identifying key medical concepts such as diseases, symptoms, medications, procedures, and anatomical structures from unstructured text.

By leveraging its deep understanding of medical terminology, the model enhances information retrieval, data analysis, and knowledge discovery from EHRs, research articles, and other biomedical sources. These capabilities support applications like clinical decision support, pharmacovigilance, and medical research.
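One way to apply the model to clinical entity recognition is to ask for structured output in the system prompt. The sketch below shows the prompt pattern; the note text, JSON schema, and the example reply are invented for illustration and are not a guaranteed output format of the model.

```python
# Illustrative prompt pattern for clinical entity extraction.
# The note, schema, and mock reply are assumptions, not model guarantees.
import json

note = ("Pt presents with chest pain and dyspnea. "
        "Started on aspirin 81 mg daily; ECG ordered.")

messages = [
    {"role": "system",
     "content": "Extract clinical entities from the note. Reply with JSON "
                "only, using the keys: symptoms, medications, procedures."},
    {"role": "user", "content": note},
]

# If the model follows the schema, the reply can be parsed directly.
# Mock reply shown here in place of an actual generate() call:
reply = ('{"symptoms": ["chest pain", "dyspnea"], '
         '"medications": ["aspirin 81 mg daily"], '
         '"procedures": ["ECG"]}')
entities = json.loads(reply)
```

In practice the `messages` list would be passed through the same chat-template and generate() flow shown earlier, with the reply validated before downstream use.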

Bias, Risks, and Limitations

Palmyra-Med-70B-32k, despite leveraging high-quality data, may contain inaccuracies, biases, or misalignments and has not been rigorously evaluated in clinical trials or real-world healthcare settings.

It is advised not to use the model for direct patient care, clinical decision support, or professional medical purposes. Instead, its use should be confined to research by qualified individuals who understand its limitations. Palmyra-Med-70B-32k should not replace professional medical judgment, and adapting it for medical use would require extensive additional work, including thorough testing, guideline alignment, bias mitigation, human oversight, and regulatory compliance. Always consult a qualified healthcare provider for personal medical needs.

Citation and Related Information

To cite this model:

@misc{Palmyra-Med-70B,
  author = {Writer Engineering team},
  title = {{Palmyra-Med-70B: A powerful LLM designed for healthcare}},
  howpublished = {\url{https://dev.writer.com}},
  year = {2024},
  month = {June}
}

Contact: Hello@writer.com
