JordiBayarri committed
Commit b002e98 • 1 Parent(s): f79b2aa
Update README.md
README.md CHANGED
@@ -24,8 +24,8 @@ pipeline_tag: question-answering
 ---
 <p align="center">
 <picture>
-<source media="(prefers-color-scheme: dark)" srcset="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/
-<img alt="prompt_engine" src="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/
+<source media="(prefers-color-scheme: dark)" srcset="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/vg1jG1OgqP7yyE0PO-OMT.png">
+<img alt="prompt_engine" src="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/vg1jG1OgqP7yyE0PO-OMT.png" width=50%>
 </picture>
 </p>
 <h1 align="center">
@@ -34,11 +34,12 @@ Aloe: A Family of Fine-tuned Open Healthcare LLMs
 
 ---
 
-
+
+Llama3.1-Aloe-Beta-8B is an **open healthcare LLM** achieving **state-of-the-art performance** on several medical tasks. Aloe Beta is made available in two model sizes: [8B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-8B) and [70B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-70B). Both models are trained using the same recipe.
 
 Aloe is trained on 20 medical tasks, resulting in a robust and versatile healthcare model. Evaluations show Aloe models to be among the best in their class. When combined with a RAG system ([also released](https://github.com/HPAI-BSC/prompt_engine)) the 8B version gets close to the performance of closed models like MedPalm-2, GPT4. With the same RAG system, Aloe-Beta-70B outperforms those private alternatives, producing state-of-the-art results.
 
-# Aloe-8B
+# Aloe-Beta-8B
 
 
 
@@ -63,7 +64,7 @@ Complete training details, model merging configurations, and all training data (
 - **Developed by:** [HPAI](https://hpai.bsc.es/)
 - **Model type:** Causal decoder-only transformer language model
 - **Language(s) (NLP):** English (capable but not formally evaluated on other languages)
-- **License:** This model is based on Meta Llama 3.1 8B and is governed by the [Meta Llama 3 License](https://www.llama.com/llama3_1/license/). All our modifications are available with a [CC BY 4.0](https://creativecommons.org/licenses/by
+- **License:** This model is based on Meta Llama 3.1 8B and is governed by the [Meta Llama 3 License](https://www.llama.com/llama3_1/license/). All our modifications are available with a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license, making the Aloe Beta models **compatible with commercial use**.
 - **Base model :** [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B)
 - **Paper:** (more coming soon)
 - **RAG Repository:** https://github.com/HPAI-BSC/prompt_engine
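
The updated card describes Llama3.1-Aloe-Beta-8B as a causal decoder-only model hosted on the Hub, so it should load through the standard `transformers` text-generation pipeline. A minimal sketch follows; the prompt, generation settings, and the assumption that the model ships a chat template are illustrative and not taken from this commit:

```python
# Minimal sketch: loading Llama3.1-Aloe-Beta-8B with the standard
# transformers text-generation pipeline. Generation parameters and the
# example question are assumptions for illustration, not from the card.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="HPAI-BSC/Llama3.1-Aloe-Beta-8B",
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",
)

# Chat-format input; assumes the checkpoint includes a chat template.
messages = [
    {"role": "user", "content": "What are common symptoms of iron deficiency?"}
]
output = generator(messages, max_new_tokens=256)
print(output[0]["generated_text"][-1]["content"])  # assistant reply
```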