Text Generation
Transformers
PyTorch
English
gpt2
medical
text-generation-inference
Inference Endpoints
Files changed (1)
  1. README.md +129 -1
README.md CHANGED
@@ -9,4 +9,132 @@ metrics:
  - accuracy
  tags:
  - medical
- ---
+ base_model: stanford-crfm/BioMedLM
+
+ ---
+
+
+ # Model Card for Raidium MQG model
+
+
+ The model is introduced in the paper "Efficient Medical Question Answering with Knowledge-Augmented Question Generation".
+
+ Paper: [https://arxiv.org/abs/2405.14654](https://arxiv.org/abs/2405.14654)
+
+ MQG is a transformer language model pre-trained on a corpus of medical textbooks and on medical questions generated by GPT-4. The weights are initialized with
+ [BioMedLM](https://huggingface.co/stanford-crfm/BioMedLM), then further pre-trained on those datasets.
+
+ The questions were generated from prompts containing medical data from the textbooks.
+ They are available here: [ECNQA_generated_questions](https://huggingface.co/datasets/raidium/ECNQA_generated_questions).
+
+ MQG is designed to be fine-tuned for Medical Question Answering tasks.
+
+ ## Model Details
+
+ ### Model Description
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cdea59a9be5c195561c2b8/tMb8cNuV6ZYnjrnUC1Tg2.png)
+
+ In the expanding field of language model applications, medical knowledge representation remains a significant challenge due to the specialized nature of the domain.
+ Large language models, such as GPT-4, obtain reasonable scores on medical question answering tasks, but smaller models are far behind.
+ In this work, we introduce a method to improve the proficiency of a small language model in the medical domain by employing a two-fold approach.
+ We first fine-tune the model on a corpus of medical textbooks. Then, we use GPT-4 to generate questions similar to the downstream task, prompted with textbook knowledge, and use them to fine-tune the model.
+ We show the benefits of our training strategy on a medical question answering dataset.
+ The study's findings highlight the potential of small language models in the medical domain when appropriately fine-tuned.
+
+
+ - **Developed by:** Raidium
+ - **Model type:** Transformer
+ - **License:** Apache 2.0
+ - **Finetuned from model:** [BioMedLM](https://huggingface.co/stanford-crfm/BioMedLM)
+
+ ### Model Sources
+
+ - **Repository:** [https://github.com/raidium-med/MQG](https://github.com/raidium-med/MQG)
+ - **Paper:** [https://arxiv.org/abs/2405.14654](https://arxiv.org/abs/2405.14654)
+
+
+ ## Uses
+
+ ### Direct Use
+
+ MQG is trained using next-token prediction on generated questions.
+ Therefore, it can be used out-of-the-box to generate potential answers for medical question answering tasks.
+ However, the generated questions might contain some errors, so it is advised to fine-tune the model on your dataset and to use the model to rank the potential answers.
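+
+ A minimal generation sketch with the Transformers library is shown below. The checkpoint id `raidium/MQG` and the prompt are placeholders for illustration, not values confirmed by this card; since the tags list `gpt2` and Transformers, the checkpoint should load as a standard GPT-2-style causal language model.
+
+ ```python
+ # Minimal sketch: load the checkpoint as a causal LM and generate an answer.
+ # "raidium/MQG" is a placeholder id; replace it with the actual repository name.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "raidium/MQG"  # placeholder
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+
+ # Hypothetical prompt, for illustration only.
+ prompt = "Question: What is the first-line imaging examination for suspected pulmonary embolism?\nAnswer:"
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```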
+
+ ### Downstream Use
+
+ MQG can be fine-tuned for Medical Question Answering tasks.
+ For multiple-choice questions, a classification head should be appended at the end of the model to rank the different proposed answers.
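+
+ One way to set this up with Transformers is sketched below: score each (question, proposition) pair with a single-logit sequence-classification head and rank the propositions by score. This is an illustration under assumptions (placeholder checkpoint id, hypothetical question), not necessarily the exact head used in the paper.
+
+ ```python
+ # Sketch: rank multiple-choice propositions with a classification head on top of MQG.
+ # The head is newly initialized here and must be fine-tuned before the scores are meaningful.
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ model_id = "raidium/MQG"  # placeholder id
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ if tokenizer.pad_token is None:
+     tokenizer.pad_token = tokenizer.eos_token
+ model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=1)
+ model.config.pad_token_id = tokenizer.pad_token_id
+
+ question = "Which examination confirms the diagnosis?"  # hypothetical example
+ propositions = ["CT scan", "Chest X-ray", "Blood culture", "ECG", "Lumbar puncture"]
+
+ inputs = tokenizer([f"{question} {p}" for p in propositions],
+                    return_tensors="pt", padding=True, truncation=True)
+ with torch.no_grad():
+     scores = model(**inputs).logits.squeeze(-1)  # one score per proposition
+ print([propositions[i] for i in scores.argsort(descending=True).tolist()])
+ ```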
+
+ ### Out-of-Scope Use
+
+ This model should not be used on datasets outside the medical domain.
+
+ ## Bias, Risks, and Limitations
+
+ There is no guarantee that the model answers medical questions correctly. It should only be used for academic purposes, and not in clinical care.
+
+ ## Training Details
+
+ ### Training Data
+
+ The model is trained on a corpus of medical textbooks, and further pre-trained on generated questions: [ECNQA_generated_questions](https://huggingface.co/datasets/raidium/ECNQA_generated_questions).
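+
+ The generated questions can be inspected with the Datasets library; a small sketch, assuming the dataset exposes a default configuration and a `train` split:
+
+ ```python
+ # Sketch: peek at the GPT-4-generated questions used for continued pre-training.
+ from datasets import load_dataset
+
+ ds = load_dataset("raidium/ECNQA_generated_questions", split="train")  # split name assumed
+ print(ds)      # number of rows and column names
+ print(ds[0])   # one generated question record
+ ```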
+
+ ### Training Procedure
+
+ MQG is trained using next-token prediction on both datasets.
+
+ #### Training Hyperparameters
+
+ - **Training regime:** fp16 mixed-precision training.
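+
+ For reference, fp16 mixed-precision training can be configured along these lines with the Transformers `Trainer`; the hyperparameter values below are illustrative placeholders, not the values used to train MQG.
+
+ ```python
+ # Illustrative fp16 mixed-precision configuration (placeholder hyperparameters).
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="mqg-continued-pretraining",
+     fp16=True,                       # mixed-precision training, as stated above
+     per_device_train_batch_size=4,   # placeholder
+     gradient_accumulation_steps=8,   # placeholder
+     learning_rate=1e-5,              # placeholder
+     num_train_epochs=1,              # placeholder
+     deepspeed=None,                  # path to a DeepSpeed config can be passed here
+ )
+ ```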
+
+ ## Evaluation
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ We tested the model on a medical question answering dataset, ECN-QA, based on the French medical residency examination.
+ It is composed of "single" and "progressive" questions (i.e., a series of related questions).
+ It is a multiple-choice question dataset, containing 5 propositions for each question.
+
+ #### Metrics
+
+ We use accuracy to evaluate the model on Medical Question Answering.
+
+ ### Results
+
+ See the paper: [https://arxiv.org/abs/2405.14654](https://arxiv.org/abs/2405.14654)
+
+ ### Model Architecture and Objective
+
+ The model is based on BioMedLM's architecture, which is a modified GPT-2 architecture.
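+
+ The main dimensions can be read from the checkpoint's GPT-2-style configuration; a small sketch (placeholder checkpoint id):
+
+ ```python
+ # Sketch: inspect the GPT-2-style configuration inherited from BioMedLM.
+ from transformers import AutoConfig
+
+ config = AutoConfig.from_pretrained("raidium/MQG")   # placeholder id
+ print(config.model_type)                             # expected: "gpt2"
+ print(config.n_layer, config.n_embd, config.n_head)  # depth, hidden size, attention heads
+ ```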
+
+ ### Compute Infrastructure
+
+ #### Hardware
+
+ The model was trained on the Jean-Zay supercomputer, on multiple nodes with 4 A100 GPUs.
+
+ #### Software
+
+ PyTorch, DeepSpeed
+
+ ## Citation
+
+ **BibTeX:**
+ ```
+ @article{khlaut2024efficient,
+   title={Efficient Medical Question Answering with Knowledge-Augmented Question Generation},
+   author={Khlaut, Julien and Dancette, Corentin and Ferreres, Elodie and Bennani, Alaedine and H{\'e}rent, Paul and Manceron, Pierre},
+   journal={Clinical NLP Workshop, NAACL 2024},
+   year={2024}
+ }
+ ```
+
+ ## Model Card Contact
+
+ julien.khlaut at raidium.fr