aaditya
posted an update May 10
Okay, I can post now, Yayy!!
Sharing the OpenBioLLM-70B as the first post :)

Introducing OpenBioLLM-Llama3-70B & 8B: The most capable openly available Medical-domain LLMs to date! 🩺💊🧬

Outperforms industry giants like GPT-4, Gemini, Meditron-70B, Med-PaLM-1, and Med-PaLM-2 in the biomedical domain. 🏥📈🌟

OpenBioLLM-70B delivers state-of-the-art performance for models of its size, and the OpenBioLLM-8B model even surpasses GPT-3.5, Gemini, and Meditron-70B! 🚀

Today's release is just the beginning! In the coming months, we'll be introducing:

- Expanded medical domain coverage 🧠
- Longer context windows 📜🔍
- Better benchmarks 📈🏆
- Multimodal capabilities 🖥️🩺📊🔬

Medical-LLM Leaderboard: openlifescienceai/open_medical_llm_leaderboard

More details: https://huggingface.co/blog/aaditya/openbiollm
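If you want to try it right away, here is a minimal sketch (not an official snippet) of loading the 70B checkpoint with the Hugging Face transformers library; the model id matches the model card linked in the comments, while the prompt and generation settings are just placeholders.

```python
# Minimal sketch: load the checkpoint with the transformers text-generation pipeline.
# The prompt and generation settings below are placeholders, not recommended values.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="aaditya/Llama3-OpenBioLLM-70B",  # swap in the 8B checkpoint for smaller hardware
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "List the first-line treatments for community-acquired pneumonia."
result = generator(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])
```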

good

Does it outperform Meta-Llama-3-70B-Instruct on these medical LLM benchmarks, though?


On https://huggingface.co/aaditya/Llama3-OpenBioLLM-70B, above the leaderboard, you mention "All results presented are in the zero-shot setting", but if I'm not mistaken, BioMistral 7B's score on PubMedQA is 77.5 after SFT. [refer: https://arxiv.org/pdf/2402.10373]

Could you please confirm whether the OpenBioLLM-8B results are zero-shot or after SFT?


Yes, the results are zero-shot.
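To make the distinction concrete, here is a hypothetical illustration of what zero-shot means in this setting: the model sees only an instruction and the question, with no in-context exemplars and no additional fine-tuning on the benchmark's training split (which is what an SFT number like BioMistral's involves). The question text below is made up for the example.

```python
# Hypothetical PubMedQA-style zero-shot prompt: instruction + question only,
# with no solved exemplars in the context and no task-specific fine-tuning.
zero_shot_prompt = (
    "Answer the following biomedical question with yes, no, or maybe.\n\n"
    "Question: Does vitamin D supplementation reduce the incidence of "
    "respiratory tract infections?\nAnswer:"
)

# A few-shot prompt would prepend solved examples; an SFT result instead comes
# from a model further trained on the benchmark's training split before scoring.
print(zero_shot_prompt)
```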