Overview

OLMo-2 is a series of Open Language Models designed to enable the science of language models. These models are trained on the Dolma dataset, with all code, checkpoints, logs (coming soon), and associated training details made openly available.

OLMo-2 13B Instruct (November 2024) is a post-trained variant of the OLMo-2 13B model. It was supervised fine-tuned on an OLMo-specific variant of the Tülu 3 dataset, then further trained with Direct Preference Optimization (DPO) and Reinforcement Learning with Verifiable Rewards (RLVR), yielding state-of-the-art performance across tasks including chat, MATH, GSM8K, and IFEval.
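For readers unfamiliar with these post-training stages, the snippet below sketches the DPO objective in PyTorch. It is a generic illustration of the technique, not the actual OLMo-2 training code; the log-probability values and the `beta` temperature are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss over a batch of preference pairs.

    Each argument is a tensor of summed log-probabilities that the trainable
    policy / frozen reference model assigns to the chosen / rejected response.
    """
    # Implicit reward: how much more the policy prefers a response than the
    # reference model does, scaled by the temperature beta.
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the log-sigmoid of the reward margin between the chosen
    # and rejected responses.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

# Toy example with made-up log-probabilities for one preference pair.
loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.5]))
print(f"DPO loss: {loss.item():.4f}")
```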

Variants

| No | Variant | Cortex CLI command |
|----|------------|-------------------------|
| 1 | Olmo-2-7b | `cortex run olmo-2:7b` |
| 2 | Olmo-2-13b | `cortex run olmo-2:13b` |
| 3 | Olmo-2-32b | `cortex run olmo-2:32b` |

Use it with Jan (UI)

  1. Install Jan using the Quickstart guide.
  2. In the Jan Model Hub, search for:
    cortexhub/olmo-2
    

Use it with Cortex (CLI)

  1. Install Cortex using the Quickstart guide.
  2. Run the model with the command:
    cortex run olmo-2
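
Once a model is running, Cortex exposes a local OpenAI-compatible API that other programs can call. The sketch below assumes Cortex's default API port (39281) and uses the `olmo-2:13b` model id from the variants table above; adjust both to match your local setup.

```python
import requests

# Chat with the locally running model through Cortex's OpenAI-compatible
# endpoint. Port 39281 is the assumed default; change it if your Cortex
# instance is configured differently.
resp = requests.post(
    "http://localhost:39281/v1/chat/completions",
    json={
        "model": "olmo-2:13b",
        "messages": [
            {"role": "user", "content": "Briefly explain the GGUF format."}
        ],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```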
    

Credits

- Author: Allen Institute for AI (Ai2)

GGUF

Quantized GGUF builds (architecture: olmo2; largest variant: 32.2B parameters) are available in 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit variants.
