OPT Collection OPT (Open Pretrained Transformer) is a series of open-sourced large causal language models with performance similar to GPT-3. • 12 items • Updated Nov 21 • 4
Sapiens Collection Foundation models for human tasks. Code: https://github.com/facebookresearch/sapiens • 72 items • Updated Sep 18 • 48
Chameleon Collection Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. • 2 items • Updated Jul 9 • 27
MelodyFlow Collection MelodyFlow: High Fidelity Text-Guided Music Generation and Editing via Single-Stage Flow Matching • 7 items • Updated Oct 23 • 16
LayerSkip Collection Models continually pretrained using LayerSkip - https://arxiv.org/abs/2404.16710 • 8 items • Updated Nov 21 • 45
NVLM 1.0 Collection A family of frontier-class multimodal large language models (LLMs) that achieve state-of-the-art results on vision-language tasks and text-only tasks. • 2 items • Updated 2 days ago • 49
Zebra Logic Bench Collection ZebraLogic Bench: Testing the Limits of LLMs in Logical Reasoning • 4 items • Updated 25 days ago • 4
AI2 Safety Toolkit Collection Safety data, moderation tools and safe LLMs. • 6 items • Updated 25 days ago • 3
SciRIFF Collection Data and models to enhance instruction-following for scientific literature understanding. • 9 items • Updated 25 days ago • 8
Reward Bench Collection Datasets, spaces, and models for the reward model benchmark! • 5 items • Updated 25 days ago • 8
Tulu V2.5 Suite Collection A suite of models trained using DPO and PPO across a wide variety of preference datasets (up to 14). See https://arxiv.org/abs/2406.09279 for more! • 44 items • Updated 25 days ago • 14
OLMoE Collection Artifacts for open mixture-of-experts language models. • 13 items • Updated 25 days ago • 27
Molmo Collection Artifacts for open multimodal language models. • 5 items • Updated 25 days ago • 289