TheBloke committed
Commit 727aa3f
1 parent: 7849b52

Updating model files

Files changed (1)
  1. README.md +42 -20
README.md CHANGED
@@ -8,6 +8,17 @@ tags:
 - medical
 inference: false
 ---
+<div style="width: 100%;">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+</div>
+<div style="display: flex; justify-content: space-between; width: 100%;">
+<div style="display: flex; flex-direction: column; align-items: flex-start;">
+<p><a href="https://discord.gg/UBgz4VXf">Chat & support: my new Discord server</a></p>
+</div>
+<div style="display: flex; flex-direction: column; align-items: flex-end;">
+<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? Patreon coming soon!</a></p>
+</div>
+</div>
 
 # medalpaca-13B-GGML
 
@@ -57,35 +68,46 @@ Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](http
 Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.
 
 
+## Want to support my work?
+
+I've had a lot of people ask if they can contribute. I love providing models and helping people, but it is starting to rack up pretty big cloud computing bills.
+
+So if you're able and willing to contribute, it'd be most gratefully received and will help me to keep providing models, and to work on various AI projects.
+
+Donors will get priority support on any and all AI/LLM/model questions, and I'll gladly quantise any model you'd like to try.
+
+* Patreon: coming soon! (just awaiting approval)
+* Ko-Fi: https://ko-fi.com/TheBlokeAI
+* Discord: https://discord.gg/UBgz4VXf
 # Original model card: MedAlpaca 13b
 
 
 ## Table of Contents
 
 - [Model Description](#model-description)
   - [Architecture](#architecture)
   - [Training Data](#training-data)
 - [Model Usage](#model-usage)
 - [Limitations](#limitations)
 
 ## Model Description
 ### Architecture
 `medalpaca-13b` is a large language model specifically fine-tuned for medical domain tasks.
 It is based on LLaMA (Large Language Model Meta AI) and contains 13 billion parameters.
 The primary goal of this model is to improve question-answering and medical dialogue tasks.
 
 ### Training Data
 The training data for this project was sourced from various resources.
 Firstly, we used Anki flashcards to automatically generate questions
 from the front of the cards and answers from the back of the cards.
 Secondly, we generated medical question-answer pairs from [Wikidoc](https://www.wikidoc.org/index.php/Main_Page).
 We extracted paragraphs with relevant headings, and used ChatGPT-3.5
 to generate questions from the headings, using the corresponding paragraphs
 as answers. This dataset is still under development, and we believe
 that approximately 70% of these question-answer pairs are factually correct.
 Thirdly, we used StackExchange to extract question-answer pairs, taking the
 top-rated questions from five categories: Academia, Bioinformatics, Biology,
 Fitness, and Health. Additionally, we used a dataset from [ChatDoctor](https://arxiv.org/abs/2303.14070)
 consisting of 200,000 question-answer pairs, available at https://github.com/Kent0n-Li/ChatDoctor.
 
 | Source | n items |
@@ -119,7 +141,7 @@ print(answer)
 
 ## Limitations
 The model may not perform effectively outside the scope of the medical domain.
 The training data primarily targets the knowledge level of medical students,
 which may result in limitations when addressing the needs of board-certified physicians.
 The model has not been tested in real-world applications, so its efficacy and accuracy are currently unknown.
 It should never be used as a substitute for a doctor's opinion and must be treated as a research tool only.
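
The card's Training Data section describes turning Anki flashcards (front becomes the question, back the answer) and Wikidoc sections (heading rephrased into a question, paragraph kept as the answer) into question-answer pairs. The card does not publish that pipeline, so the sketch below is only an illustration: the function names are hypothetical, and a trivial string template stands in for the ChatGPT-3.5 step the authors actually used to write questions from headings.

```python
# Hypothetical sketch of the Q-A extraction the card describes; not the
# authors' actual pipeline.

def anki_to_qa(cards):
    """cards: list of (front, back) tuples -> list of Q-A dicts.
    The card front becomes the question, the back the answer."""
    return [{"question": front.strip(), "answer": back.strip()}
            for front, back in cards]

def wikidoc_to_qa(sections):
    """sections: list of (heading, paragraph) tuples -> list of Q-A dicts.
    The real pipeline asked ChatGPT-3.5 to phrase a question from each
    heading; this fixed template is only a placeholder for that step."""
    return [{"question": f"What should I know about {heading.strip()}?",
             "answer": paragraph.strip()}
            for heading, paragraph in sections]

pairs = anki_to_qa([("What hormone lowers blood glucose?", "Insulin")])
pairs += wikidoc_to_qa([("Aspirin overdose",
                         "Aspirin overdose is managed with supportive care.")])
```

Note the card's own caveat applies to any such automated pairing: the authors estimate only about 70% of the Wikidoc-derived pairs are factually correct.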
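
The Model Usage section is elided from this diff apart from its final `print(answer)` line in the hunk header; such question-answering usage feeds the model a context/question prompt string. The sketch below shows one plausible shape for that prompt; the exact template `medalpaca-13b` was trained on is not shown here, so the field labels are assumptions, not the model's API.

```python
# Hypothetical prompt builder in the Alpaca/medAlpaca question-answering
# style. The "Context:"/"Question:"/"Answer:" labels are assumed, since the
# trained-on template is not reproduced in this diff.

def build_prompt(question: str, context: str = "") -> str:
    parts = []
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Question: {question}")
    parts.append("Answer: ")
    return "\n\n".join(parts)

prompt = build_prompt("What are the symptoms of diabetes?")
# The resulting string would then be passed to the quantised GGML model
# via llama.cpp or text-generation-webui.
```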