sethuiyer committed
Commit 37c251f
1 Parent(s): 9fc126d

Update README.md

Files changed (1): README.md (+15 -3)

README.md CHANGED
@@ -15,8 +15,6 @@ pipeline_tag: text-generation
 
 # MedleyMD
 
- # NOTE: Experimental
-
 ![logo](https://huggingface.co/sethuiyer/MedleyMD/resolve/main/logo.webp)
 
@@ -24,6 +22,8 @@ MedleyMD is a Mixture of Experts (MoE) made with the following models using [Lazy
 * [sethuiyer/Dr_Samantha_7b_mistral](https://huggingface.co/sethuiyer/Dr_Samantha_7b_mistral)
 * [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1)
 
+ These models were chosen because `fblgit/UNA-TheBeagle-7b-v1` delivers excellent performance for a 7B-parameter model, while Dr_Samantha combines medical knowledge (it was trained on USMLE databases and doctor-patient interactions) with philosophical, psychological, and relational understanding, scoring 68.82% on clinical-domain topics.
+
 ## 🧩 Configuration
 
 ```yaml
@@ -40,6 +40,18 @@ experts:
 
 ```
 
+ ## GGUF
+ GGUF quantized weights are available to download:
+ 1. [medleymd.Q4_K_M](https://huggingface.co/sethuiyer/MedleyMD-GGUF/resolve/main/medleymd.Q4_K_M.gguf) [7.2GB]
+ 2. [medleymd.Q5_K_M](https://huggingface.co/sethuiyer/MedleyMD-GGUF/resolve/main/medleymd.Q5_K_M.gguf) [9.13GB]
+
+ ## Ollama
+
+ MedleyMD is now available on Ollama. You can use it by running `ollama run stuehieyr/medleymd` in your terminal. If you have limited computing resources, check out this [video](https://www.youtube.com/watch?v=Qa1h7ygwQq8) to learn how to run it on a Google Colab backend.
+
 ## 💻 Usage
 
 ```python
@@ -69,7 +81,7 @@ generation_kwargs = {
 messages = [{"role":"system", "content":"You are a helpful AI assistant. Please use </s> when you want to end the answer."},
 {"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
 prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
- outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
+ outputs = pipeline(prompt, **generation_kwargs)
 print(outputs[0]["generated_text"])
 ```
 
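The configuration hunk above elides the middle of the YAML block, so the actual expert definitions are not visible in this commit. For orientation, a LazyMergekit (mergekit) MoE config over the two experts listed earlier typically looks like the sketch below; the `base_model`, `gate_mode`, and `positive_prompts` values are illustrative assumptions, not the repository's real settings.

```yaml
# Illustrative mergekit MoE config; the values below are assumptions,
# not the actual contents of the elided block.
base_model: mistralai/Mistral-7B-Instruct-v0.2   # assumed base model
gate_mode: hidden        # route tokens by hidden-state similarity to the prompts
dtype: bfloat16
experts:
  - source_model: sethuiyer/Dr_Samantha_7b_mistral
    positive_prompts:    # hypothetical routing prompts for the medical expert
      - "symptoms and diagnosis"
      - "medical advice"
  - source_model: fblgit/UNA-TheBeagle-7b-v1
    positive_prompts:    # hypothetical routing prompts for the general expert
      - "step by step reasoning"
      - "explain a concept"
```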
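For readers who grab one of the GGUF files added above, here is a minimal local-inference sketch using llama-cpp-python. The file path and sampling values are assumptions; adjust them to your setup.

```python
# Minimal sketch: running the Q4_K_M quant locally with llama-cpp-python.
# Assumes medleymd.Q4_K_M.gguf was downloaded to the working directory
# and the package is installed (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="medleymd.Q4_K_M.gguf",  # assumed path; point at your download
    n_ctx=2048,                         # context window; pick what your RAM allows
    n_gpu_layers=-1,                    # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```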
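Besides `ollama run` in a terminal, a locally running Ollama server also answers HTTP requests on its default port, which is handy for scripting. A minimal sketch, assuming the model tag from the new Ollama section has already been pulled:

```python
# Minimal sketch: querying a local Ollama server over its HTTP API.
# Assumes `ollama run stuehieyr/medleymd` (or `ollama pull`) has fetched the
# model and the server is listening on the default port 11434.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "stuehieyr/medleymd",
        "prompt": "Explain what a Mixture of Experts is in less than 100 words.",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=300,
)
print(resp.json()["response"])
```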
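The usage hunk only shows the tail of the Python snippet, so here is a self-contained version of the flow after this commit. The generation parameters are exactly the ones moved out of the old `pipeline(...)` call into `generation_kwargs`; the pipeline setup lines are assumed, since the diff elides them.

```python
# Self-contained version of the README usage snippet after this commit.
# The setup below (model id, dtype, device_map) is an assumed standard
# transformers text-generation pipeline; only the tail appears in the diff.
import torch
from transformers import AutoTokenizer, pipeline

model_id = "sethuiyer/MedleyMD"
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,  # assumption: bf16 to fit the merged experts comfortably
    device_map="auto",           # requires accelerate; spreads layers across devices
)

# These values come from the line the commit removed from the pipeline call.
generation_kwargs = {
    "max_new_tokens": 256,
    "do_sample": True,
    "temperature": 0.7,
    "top_k": 50,
    "top_p": 0.95,
}

messages = [
    {"role": "system", "content": "You are a helpful AI assistant. Please use </s> when you want to end the answer."},
    {"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, **generation_kwargs)
print(outputs[0]["generated_text"])
```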