
GatorTron-Large overview

Developed jointly by the University of Florida and NVIDIA, GatorTron-Large is a clinical language model with 8.9 billion parameters, pre-trained using a BERT architecture implemented in the Megatron package (https://github.com/NVIDIA/Megatron-LM).

GatorTron-Large is pre-trained using a dataset consisting of:

  • 82B words of de-identified clinical notes from the University of Florida Health System,
  • 6.1B words from PubMed CC0,
  • 2.5B words from WikiText,
  • 0.5B words of de-identified clinical notes from MIMIC-III

The GitHub repository for GatorTron is at: https://github.com/uf-hobi-informatics-lab/GatorTron

Model variations

Model                           Parameters
gatortron-base                  345 million
gatortronS                      345 million
gatortron-medium                3.9 billion
gatortron-large (this model)    8.9 billion
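
All variants are loaded the same way; only the repository ID changes. Below is a minimal sketch, assuming the other variants are published under the same UFNLP/ namespace on the Hugging Face Hub (repository IDs other than UFNLP/gatortron-large are an assumption here):

from transformers import AutoModel, AutoTokenizer

# Assumed repository IDs; only UFNLP/gatortron-large is confirmed by this card.
VARIANTS = {
    "base": "UFNLP/gatortron-base",
    "s": "UFNLP/gatortronS",
    "medium": "UFNLP/gatortron-medium",
    "large": "UFNLP/gatortron-large",
}

def load_gatortron(size="large"):
    # Load the tokenizer and encoder for the requested variant.
    repo_id = VARIANTS[size]
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModel.from_pretrained(repo_id)
    return tokenizer, model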

How to use

from transformers import AutoConfig, AutoModel, AutoTokenizer

# Load the tokenizer, configuration, and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained('UFNLP/gatortron-large')
config = AutoConfig.from_pretrained('UFNLP/gatortron-large')
mymodel = AutoModel.from_pretrained('UFNLP/gatortron-large')

# Encode a clinical sentence and run a forward pass.
encoded_input = tokenizer("Bone scan:  Negative for distant metastasis.", return_tensors="pt")
encoded_output = mymodel(**encoded_input)
print(encoded_output)
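
The forward pass returns a standard Hugging Face model output whose last_hidden_state holds one vector per token. A common recipe for a single sentence-level vector is to mean-pool those token vectors over non-padding positions; the snippet below is a generic sketch of that recipe, not an official GatorTron API:

import torch

# Sanity check: the encoder should report roughly 8.9 billion parameters.
print(sum(p.numel() for p in mymodel.parameters()))

# Mean-pool token embeddings over non-padding tokens to get a sentence vector.
with torch.no_grad():
    output = mymodel(**encoded_input)
mask = encoded_input["attention_mask"].unsqueeze(-1).float()   # (batch, seq, 1)
summed = (output.last_hidden_state * mask).sum(dim=1)          # sum over real tokens
sentence_embedding = summed / mask.sum(dim=1)                  # (batch, hidden_size)
print(sentence_embedding.shape)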

De-identification

We applied a de-identification system to remove protected health information (PHI) from clinical text. We adopted the safe-harbor method to identify the 18 PHI categories defined in the Health Insurance Portability and Accountability Act (HIPAA) and replaced them with dummy strings (e.g., replacing people's names with [**NAME**]).
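
For illustration only, the placeholder convention looks like the toy substitution below; the actual system is the deep-learning de-identification pipeline described in the paper cited next, and the [**DATE**] tag is an assumed example of the dummy-string format:

import re

# Toy illustration of the dummy-string convention (not the real de-identification system).
note = "Mr. John Smith was seen on 01/02/2015 for follow-up."
toy = re.sub(r"John Smith", "[**NAME**]", note)        # name placeholder from this card
toy = re.sub(r"\d{2}/\d{2}/\d{4}", "[**DATE**]", toy)  # assumed placeholder format
print(toy)  # Mr. [**NAME**] was seen on [**DATE**] for follow-up.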

The de-identification system is described in:

Yang X, Lyu T, Li Q, Lee C-Y, Bian J, Hogan WR, Wu Y†. A study of deep learning methods for de-identification of clinical notes in cross-institute settings. BMC Med Inform Decis Mak. 2020 Dec 5;19(5):232. https://www.ncbi.nlm.nih.gov/pubmed/31801524.

Citation info

Yang X, Chen A, PourNejatian N, Shin HC, Smith KE, Parisien C, Compas C, Martin C, Costa AB, Flores MG, Zhang Y, Magoc T, Harle CA, Lipori G, Mitchell DA, Hogan WR, Shenkman EA, Bian J, Wu Y†. A large language model for electronic health records. Npj Digit Med. 2022 Dec 26;5(1):1–9. https://www.nature.com/articles/s41746-022-00742-2

  • BibTeX entry
@article{yang2022large,
  title={A large language model for electronic health records},
  author={Yang, Xi and Chen, Aokun and PourNejatian, Nima and Shin, Hoo Chang and Smith, Kaleb E and Parisien, Christopher and Compas, Colin and Martin, Cheryl and Costa, Anthony B and Flores, Mona G and Zhang, Ying and Magoc, Tanja and Harle, Christopher A and Lipori, Gloria and Mitchell, Duane A and Hogan, William R and Shenkman, Elizabeth A and Bian, Jiang and Wu, Yonghui},
  journal={npj Digital Medicine},
  volume={5},
  number={1},
  pages={194},
  year={2022},
  publisher={Nature Publishing Group UK London}
} 

Contact
