Model Description
Llama2-MedTuned-13b is a version of the Llama2 13B model instruction-tuned for biomedical language processing. It was trained on a dataset of approximately 200,000 instruction samples targeting biomedical NLP tasks such as Named Entity Recognition (NER), Relation Extraction (RE), and Medical Natural Language Inference (NLI).
Instruction Tuning Procedure
The model was fine-tuned with an instruction-based approach intended to improve its performance on specific biomedical tasks. Training used a curated dataset designed to match the requirements of biomedical and clinical NLP tasks.
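The exact prompt template is not specified here; as an illustration, instruction-tuned Llama models are commonly trained on an Alpaca-style format such as the following sketch (the section headers and layout are assumptions, not this model's documented template):

```python
def build_prompt(instruction: str, input_text: str) -> str:
    """Assemble an Alpaca-style instruction prompt (assumed format)."""
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Input:\n"
        f"{input_text}\n\n"
        "### Response:\n"
    )

prompt = build_prompt(
    "Extract all disease mentions from the input sentence.",
    "The patient was diagnosed with type 2 diabetes and hypertension.",
)
print(prompt)
```

At inference time, the model's completion would be read off after the final `### Response:` marker, mirroring the structure seen during tuning.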
Model Capabilities
Llama2-MedTuned-13b handles intricate biomedical contexts and performs NER, RE, and NLI tasks with higher accuracy than the untuned base model. It generates outputs in the structured formats required by standard evaluation metrics in biomedical NLP.
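Producing structured output matters because evaluation pipelines must convert the model's free-text answer into discrete predictions. The sketch below parses a hypothetical `mention (TYPE)` answer format into entity tuples; this format is an illustration, not the model's documented output:

```python
import re

def parse_entities(output: str) -> list[tuple[str, str]]:
    """Parse 'mention (TYPE)' pairs from a model response (hypothetical format)."""
    # A mention starts with a word character and may contain spaces/hyphens;
    # the entity type is a single uppercase token in parentheses.
    return re.findall(r"(\w[\w\s-]*?)\s*\((\w+)\)", output)

response = "aspirin (DRUG), chest pain (SYMPTOM), myocardial infarction (DISEASE)"
entities = parse_entities(response)
print(entities)
# → [('aspirin', 'DRUG'), ('chest pain', 'SYMPTOM'), ('myocardial infarction', 'DISEASE')]
```

Tuples in this shape can be fed directly into span-level precision/recall computations used by standard NER benchmarks.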
Architecture
Llama2-MedTuned-13b retains the autoregressive transformer architecture of the original Llama2 13B model, including its core transformer layers and attention mechanisms. Its adaptation to the biomedical domain comes from instruction tuning of the weights rather than from architectural changes.
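Since no architectural changes are made, the model shares Llama2 13B's published hyperparameters. The sketch below records them in a plain dict for reference (values come from the Llama 2 release, not from inspecting this checkpoint):

```python
# Llama2 13B architecture hyperparameters (unchanged by instruction tuning).
LLAMA2_13B_CONFIG = {
    "hidden_size": 5120,              # model (embedding) dimension
    "num_hidden_layers": 40,          # transformer decoder blocks
    "num_attention_heads": 40,        # attention heads per layer
    "intermediate_size": 13824,       # feed-forward (SwiGLU) dimension
    "vocab_size": 32000,              # SentencePiece BPE vocabulary
    "max_position_embeddings": 4096,  # context length
}

# Per-head dimension follows from hidden size and head count.
head_dim = (
    LLAMA2_13B_CONFIG["hidden_size"]
    // LLAMA2_13B_CONFIG["num_attention_heads"]
)
print(head_dim)  # → 128
```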
Citation
If you use Llama2-MedTuned-13b in academic work or applications, please cite the following paper:
@misc{rohanian2023exploring,
      title={Exploring the Effectiveness of Instruction Tuning in Biomedical Language Processing},
      author={Omid Rohanian and Mohammadmahdi Nouriborji and David A. Clifton},
      year={2023},
      eprint={2401.00579},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}