Update README.md
README.md CHANGED

@@ -10,7 +10,7 @@ pinned: false
 We are the LLM team at EPFL (Swiss Federal Institute of Technology), led by Prof. Antoine Bosselut from the [NLP lab](https://nlp.epfl.ch) and Prof. Martin Jaggi from the [MLO lab](https://www.epfl.ch/labs/mlo/).

 <p align="center">
-  <img src="
+  <img src="medllama.jpeg" width="35%">
 </p>

@@ -18,7 +18,7 @@ Our latest project is **MEDITRON**, currently the best open-source medical Large

 We've publicly released the weights for [Meditron-70B](https://huggingface.co/epfl-llm/meditron-70b) and [Meditron-7B](https://huggingface.co/epfl-llm/meditron-7b) on Huggingface.

-- 🦾 **GitHub
+- 🦾 **GitHub**: [epfLLM/meditron](https://github.com/epfLLM/meditron) and [epfLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM)

 - 📖 **Paper**: [MEDITRON-70B: Scaling Medical Pre-Training For Large Language Models](https://arxiv.org/abs/2311.16079) (pre-print)