TheBloke committed f4ae293 (parent f2cc385): Initial GGML model commit.

README.md ADDED
---
license: cc
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- medical
inference: false
---

# medalpaca-13B-GGML

This repo contains 4-bit, 5-bit and 8-bit GGML format quantised models of [Medalpaca 13B](https://huggingface.co/medalpaca/medalpaca-13b).

These files are the result of quantising to 4-bit, 5-bit and 8-bit GGML for CPU (+CUDA) inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/medalpaca-13B-GPTQ-4bit)
* [4-bit, 5-bit and 8-bit GGML models for llama.cpp CPU (+CUDA) inference](https://huggingface.co/TheBloke/medalpaca-13B-GGML)
* [medalpaca's float32 HF format repo for GPU inference and further conversions](https://huggingface.co/medalpaca/medalpaca-13b)

## THE FILES IN MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!

llama.cpp recently made a breaking change to its quantisation methods.

I have quantised the GGML files in this repo with the latest version. You will therefore need llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.

## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `medalpaca-13B.ggmlv2.q4_0.bin` | q4_0 | 4-bit | 8.14GB | 10.5GB | 4-bit. |
| `medalpaca-13B.ggmlv2.q4_1.bin` | q4_1 | 4-bit | 9.76GB | 12.25GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| `medalpaca-13B.ggmlv2.q5_0.bin` | q5_0 | 5-bit | 8.95GB | 11.0GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| `medalpaca-13B.ggmlv2.q5_1.bin` | q5_1 | 5-bit | 9.76GB | 12.25GB | 5-bit. Even higher accuracy, and higher resource usage and slower inference. |
| `medalpaca-13B.ggmlv2.q8_0.bin` | q8_0 | 8-bit | 14.6GB | 17GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |

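As a rough guide, the table above can be turned into a small helper that picks the highest-quality file fitting your free RAM. This is a sketch only: the file names and RAM figures are copied from the table, and the quality ordering follows the table's own accuracy notes.

```python
# File names and "RAM required" figures, copied from the table above,
# ordered from lowest to highest quality per the table's accuracy notes.
FILES = [
    ("medalpaca-13B.ggmlv2.q4_0.bin", 10.5),
    ("medalpaca-13B.ggmlv2.q4_1.bin", 12.25),
    ("medalpaca-13B.ggmlv2.q5_0.bin", 11.0),
    ("medalpaca-13B.ggmlv2.q5_1.bin", 12.25),
    ("medalpaca-13B.ggmlv2.q8_0.bin", 17.0),
]

def best_fit(free_ram_gb):
    """Return the last (highest-quality) file whose RAM requirement fits, or None."""
    fitting = [name for name, ram in FILES if ram <= free_ram_gb]
    return fitting[-1] if fitting else None

print(best_fit(16))  # medalpaca-13B.ggmlv2.q5_1.bin (q8_0 needs 17GB)
```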
## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 8 -m medalpaca-13B.ggmlv2.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"
```

Change `-t 8` to the number of physical CPU cores you have.

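The `-p` prompt above follows the single-line `### Instruction: ... ### Response:` pattern. A minimal helper for building such prompts; the format is taken from the command above, and treating it as the model's expected template is an assumption.

```python
def make_prompt(instruction):
    """Build the single-line instruction prompt used in the llama.cpp command above."""
    return f"### Instruction: {instruction} ### Response:"

print(make_prompt("write a story about llamas"))
# ### Instruction: write a story about llamas ### Response:
```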
## How to run in `text-generation-webui`

GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).


# Original model card: MedAlpaca 13b


## Table of Contents

[Model Description](#model-description)
- [Architecture](#architecture)
- [Training Data](#training-data)

[Model Usage](#model-usage)

[Limitations](#limitations)

## Model Description
### Architecture
`medalpaca-13b` is a large language model specifically fine-tuned for medical domain tasks.
It is based on LLaMA (Large Language Model Meta AI) and contains 13 billion parameters.
The primary goal of this model is to improve question-answering and medical dialogue tasks.

### Training Data
The training data for this project was sourced from various resources.
Firstly, we used Anki flashcards to automatically generate questions
from the front of the cards and answers from the back of the cards.
Secondly, we generated medical question-answer pairs from [Wikidoc](https://www.wikidoc.org/index.php/Main_Page).
We extracted paragraphs with relevant headings, and used ChatGPT (GPT-3.5)
to generate questions from the headings, using the corresponding paragraphs
as answers. This dataset is still under development, and we believe
that approximately 70% of these question-answer pairs are factually correct.
Thirdly, we used StackExchange to extract question-answer pairs, taking the
top-rated questions from five categories: Academia, Bioinformatics, Biology,
Fitness, and Health. Additionally, we used a dataset from [ChatDoctor](https://arxiv.org/abs/2303.14070)
consisting of 200,000 question-answer pairs, available at [https://github.com/Kent0n-Li/ChatDoctor](https://github.com/Kent0n-Li/ChatDoctor).

| Source | n items |
|------------------------------|--------|
| ChatDoc large | 200000 |
| wikidoc | 67704 |
| Stackexchange academia | 40865 |
| Anki flashcards | 33955 |
| Stackexchange biology | 27887 |
| Stackexchange fitness | 9833 |
| Stackexchange health | 7721 |
| Wikidoc patient information | 5942 |
| Stackexchange bioinformatics | 5407 |

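Summing the table above gives the overall size of the training set, a quick sanity check using the per-source counts exactly as listed:

```python
# Per-source item counts, copied from the table above.
counts = {
    "ChatDoc large": 200000,
    "wikidoc": 67704,
    "Stackexchange academia": 40865,
    "Anki flashcards": 33955,
    "Stackexchange biology": 27887,
    "Stackexchange fitness": 9833,
    "Stackexchange health": 7721,
    "Wikidoc patient information": 5942,
    "Stackexchange bioinformatics": 5407,
}

total = sum(counts.values())
print(total)  # 399314 question-answer pairs in total
```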
## Model Usage
To evaluate the performance of the model on a specific dataset, you can use the Hugging Face Transformers library's built-in evaluation scripts. Please refer to the evaluation guide for more information.

### Inference

You can use the model for inference tasks like question-answering and medical dialogues using the Hugging Face Transformers library. Here's an example of how to use the model for a question-answering task:

```python
from transformers import pipeline

qa_pipeline = pipeline("question-answering", model="medalpaca/medalpaca-13b", tokenizer="medalpaca/medalpaca-13b")
question = "What are the symptoms of diabetes?"
context = "Diabetes is a metabolic disease that causes high blood sugar. The symptoms include increased thirst, frequent urination, and unexplained weight loss."
answer = qa_pipeline({"question": question, "context": context})
print(answer)
```
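Note that medalpaca is a causal (decoder-only) language model, so the `text-generation` pipeline is arguably a more natural fit than extractive question answering. A sketch of that route: the prompt template here is an illustrative assumption, and because the 13B checkpoint is very large to download and load, the actual model call is gated behind a flag.

```python
# Illustrative prompt template; the exact format medalpaca expects is an assumption.
def format_qa_prompt(question, context):
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

prompt = format_qa_prompt(
    "What are the symptoms of diabetes?",
    "Diabetes is a metabolic disease that causes high blood sugar.",
)
print(prompt)

LOAD_MODEL = False  # set True to actually download and run the ~13GB model
if LOAD_MODEL:
    from transformers import pipeline
    generator = pipeline("text-generation", model="medalpaca/medalpaca-13b")
    print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```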

## Limitations
The model may not perform effectively outside the scope of the medical domain.
The training data primarily targets the knowledge level of medical students,
which may result in limitations when addressing the needs of board-certified physicians.
The model has not been tested in real-world applications, so its efficacy and accuracy are currently unknown.
It should never be used as a substitute for a doctor's opinion and must be treated as a research tool only.
medalpaca-13B.ggmlv2.q4_0.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:b94d83282594f5c5f233b0e3d8ce197a5b3a53638b2729a475a1b0eb1ffeb2bb
size 8136777088
medalpaca-13B.ggmlv2.q4_1.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:bc0b6b65024af2731a83c3c03f8054ed8282d8a8d00d6d1431ccdbc752b17be0
size 9763709568
medalpaca-13B.ggmlv2.q5_0.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:93d962df2ac16c61bdb8c9fb09cf614d294eb22785ea970c5ff80cb6faf7d27e
size 8950243328
medalpaca-13B.ggmlv2.q5_1.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:8c01290e9fe8dbec85c6f18c70b4fe1c0095cc2326856c74a5dda76ca0de35a1
size 9763709568
medalpaca-13B.ggmlv2.q8_0.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:c1e1263d22aeaa7fe18a3ee9cba89fedb1a8cc072d83ab9f6a864c53feb7a0ff
size 14644507008